US11768920B2 - Apparatus and method for performing heterogeneous sensor fusion - Google Patents
- Publication number
- US11768920B2 (application US17/094,499)
- Authority
- US
- United States
- Prior art keywords
- box
- sensor
- matching rate
- image
- detection points
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active, expires
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
- G06F18/251—Fusion techniques of input or preprocessed data
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/58—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S17/00—Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
- G01S17/88—Lidar systems specially adapted for specific applications
- G01S17/89—Lidar systems specially adapted for specific applications for mapping or imaging
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/22—Matching criteria, e.g. proximity measures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/60—Analysis of geometric attributes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/80—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
- G06V10/803—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of input or preprocessed data
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/58—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
- G06V20/584—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of vehicle lights or traffic lights
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/588—Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/60—Type of objects
- G06V20/64—Three-dimensional objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20212—Image combination
- G06T2207/20221—Image fusion; Image merging
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2210/00—Indexing scheme for image generation or computer graphics
- G06T2210/12—Bounding box
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/08—Detecting or categorising vehicles
Definitions
- the present disclosure relates to a heterogeneous sensor fusion apparatus, and more particularly to an apparatus and method for performing heterogeneous sensor fusion.
- ADAS (advanced driver assistance system)
- a track output of various sensors for sensor fusion includes various pieces of information, and in this regard, various new methods for improving the accuracy of output information, such as the position, speed, class, or the like of a track, have been studied and developed.
- a Lidar (Radar) sensor has high accuracy for information on the position of a track but has low accuracy for class information of the track due to the characteristics of the sensor.
- an image sensor has characteristics of relatively low accuracy for information on the position of a track but has high accuracy for class information of the track.
- the present disclosure is directed to an apparatus and method for performing heterogeneous sensor fusion for improving the performance for fusion of heterogeneous sensors to improve the accuracy of track information by matching a detection point of a first sensor with an object in an image of a second sensor to calculate a matching rate and fusing information from the first and second sensors based on the matching rate.
- a heterogeneous sensor fusion apparatus includes a point processor configured to detect an object by processing a detection point input from a first sensor, an image processor configured to detect an object by processing an image input from a second sensor, a point-matching unit configured to calculate a matching rate by matching the detection point with the object in the image, and to check whether the object in the image and the object corresponding to the detection point are the same object based on the calculated matching rate, an association unit configured to generate track information by fusing information from the first and second sensors when the object in the image and the object corresponding to the detection point are the same object, and an output unit configured to output the generated track information.
- a computer-readable recording medium having recorded thereon a program for executing the heterogeneous sensor fusion method of the heterogeneous sensor fusion apparatus performs procedures provided by the heterogeneous sensor fusion method of the heterogeneous sensor fusion apparatus.
- in another aspect of the present disclosure, a vehicle includes a first sensor configured to acquire a detection point corresponding to an object around the vehicle, a second sensor configured to acquire an image of a region around the vehicle, and a heterogeneous sensor fusion apparatus configured to fuse information from the first and second sensors, wherein the heterogeneous sensor fusion apparatus detects an object by processing a detection point input from the first sensor, detects an object by processing an image input from the second sensor, calculates a matching rate by matching the detection point with the object in the image, checks whether the object in the image and the object corresponding to the detection point are the same object based on the calculated matching rate, generates track information by fusing the information from the first and second sensors when the object in the image and the object corresponding to the detection point are the same object, and outputs the generated track information.
- FIG. 1 is a diagram for explaining a vehicle including a heterogeneous sensor fusion apparatus in one form of the present disclosure
- FIG. 2 is a block diagram for explaining a heterogeneous sensor fusion apparatus in one form of the present disclosure
- FIG. 3 is a diagram for explaining a procedure of changing coordinates of a detection point to reference coordinates of an image layer
- FIG. 4 is a diagram for explaining a procedure of matching a detection point with an object in an image
- FIGS. 5 A and 5 B are diagrams for explaining a procedure of calculating a matching rate between information on an object in an image and information on an object detected using Lidar;
- FIG. 6 is a diagram showing matching between first and second object information of an image and first and second object information of Lidar;
- FIGS. 7 A and 7 B are diagrams for explaining a procedure of calculating a matching rate between first object information of an image and first and second object information of Lidar.
- FIG. 8 is a flowchart for explaining a heterogeneous sensor fusion method in one form of the present disclosure.
- FIG. 1 is a diagram for explaining a vehicle including a heterogeneous sensor fusion apparatus according to an embodiment of the present disclosure.
- a vehicle 1 may include a first sensor 100 for acquiring a detection point corresponding to an object around the vehicle, a second sensor 200 for acquiring an image of a region around the vehicle, and a heterogeneous sensor fusion apparatus 300 for fusing information from the first and second sensors 100 and 200 .
- first and second sensors 100 and 200 may be different heterogeneous sensors.
- the first sensor 100 may include at least one of Lidar or Radar
- the second sensor 200 may include a camera, but the present disclosure is not limited thereto.
- the heterogeneous sensor fusion apparatus 300 may detect an object by processing a detection point input from the first sensor 100 , may detect an object by processing an image input from the second sensor 200 , may calculate a matching rate by matching the detection point with an object in the image, may check whether the object in the image and the object corresponding to the detection point are the same object based on the calculated matching rate, may generate track information by fusing information from the first and second sensors 100 and 200 when the object in the image and the object corresponding to the detection point are the same object, and may output the generated track information.
- the heterogeneous sensor fusion apparatus 300 may detect and classify an object based on the detection point and may store coordinate information of the detection point included in the detected object.
- the heterogeneous sensor fusion apparatus 300 may convert coordinate information of the detection point included in the object into a world coordinate system of an image by projecting the coordinate information of the detection point included in the object onto an image layer.
- the heterogeneous sensor fusion apparatus 300 may detect and classify an object from the image and may store coordinate information of an image pixel included in the detected object.
- the heterogeneous sensor fusion apparatus 300 may generate a first box corresponding to the object at the detection point projected onto the image layer, may generate a second box corresponding to the object in the image, and may calculate the matching rate by comparing the area of any one of the first and second boxes with the area of an overlapping region between the first and second boxes.
- the heterogeneous sensor fusion apparatus 300 may generate the first box corresponding to the object at the detection point projected onto the image layer, may generate the second box corresponding to the object in the image, and may calculate the matching rate based on the number of detection points included in the second box.
- the heterogeneous sensor fusion apparatus 300 may check whether the calculated matching rate is equal to or greater than a reference ratio, and may recognize that the object in the image and the object corresponding to the detection point are the same object when the matching rate is equal to or greater than the reference ratio.
- the heterogeneous sensor fusion apparatus 300 may recognize that the object in the image and the object corresponding to the detection point are different objects when the matching rate is less than the reference ratio.
- the heterogeneous sensor fusion apparatus 300 may fuse information from the first and second sensors to generate track information and may output the generated track information.
- the heterogeneous sensor fusion apparatus 300 may separately output information on the first and second sensors without fusing the same.
- the coordinates of a point cloud may be changed to reference coordinates of the image layer, and the point cloud having the changed coordinates may be compared with a pixel of the image, thereby improving the performance for determining whether detected objects are the same object.
- the present disclosure may simply convert data and may improve the performance of sensor fusion using a fusion application method between heterogeneous sensors having raw data with similar characteristics.
- the present disclosure may improve the performance of sensor fusion without an additional increase in material costs by implementing logic in software.
- FIG. 2 is a block diagram for explaining a heterogeneous sensor fusion apparatus according to an embodiment of the present disclosure.
- the heterogeneous sensor fusion apparatus 300 may include a point processor 310 , an image processor 320 , a point-matching unit 330 , an association unit 340 , and an output unit 350 .
- the point processor 310 may process a detection point input from the first sensor to detect an object.
- the point processor 310 may receive the detection point from the first sensor including at least one of Lidar or Radar, but the present disclosure is not limited thereto.
- the point processor 310 may detect and classify an object based on the detection point, processed using machine learning, deep learning, or the like.
- the point processor 310 may store coordinate information of the detection point included in the detected object.
- the point processor 310 may project the coordinate information of the detection point included in the object onto the image layer and may convert the coordinate information of the detection point included in the object into a world coordinate system of an image.
- coordinates of a detection point included in object information of the first sensor including Lidar and Radar may be projected onto the image layer, and the coordinates of the detection point may be converted into a world coordinate system of an image through a TransMatrix that is obtained via calibration between Lidar as the first sensor and a camera as the second sensor, or between Radar as the first sensor and the camera as the second sensor.
- the image processor 320 may detect an object by processing an image input from the second sensor.
- the image processor 320 may receive an image from the second sensor including a camera, but the present disclosure is not limited thereto.
- the image processor 320 may detect and classify the object based on the image processed using deep learning or the like.
- the image processor 320 may store coordinate information of an image pixel included in the detected object.
- the point-matching unit 330 may calculate a matching rate by matching the detection point with the object in the image, and may check whether the object in the image and the object corresponding to the detection point are the same object based on the calculated matching rate.
- the point-matching unit 330 may generate a first box corresponding to the object at the detection point projected onto the image layer, may generate a second box corresponding to the object in the image, and may calculate the matching rate by comparing the area of any one of the first and second boxes with the area of an overlapping region between the first and second boxes.
- the point-matching unit 330 may generate the first box corresponding to the object at the detection point projected onto the image layer, may generate the second box corresponding to the object in the image, and may calculate the matching rate based on the number of detection points included in the second box.
- the point-matching unit 330 may check whether the calculated matching rate is equal to or greater than a reference ratio, may recognize that the object in the image and the object corresponding to the detection point are the same object when the matching rate is equal to or greater than the reference ratio, and may transmit information from the first and second sensors to the association unit 340 so as to fuse the information from the first and second sensors.
- the reference ratio may be about 80% to about 90%, but the present disclosure is not limited thereto.
- the point-matching unit 330 may recognize that the object in the image and the object corresponding to the detection point are different objects when the matching rate is less than the reference ratio, and may separately transmit information from the first and second sensors to the output unit 350 .
- the association unit 340 may generate track information by fusing information from the first and second sensors.
- the association unit 340 may fuse track information of heterogeneous sensors having a matching rate equal to or greater than a predetermined ratio.
- the association unit 340 may generate the track information by fusing information from each sensor with high accuracy.
- the association unit 340 may selectively fuse information with high accuracy in consideration of the characteristics of the sensors.
- a weight may be applied to a Lidar (Radar) sensor in the case of position information, and a weight may be applied to an image sensor in the case of class information, but the present disclosure is not limited thereto.
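As an illustration only, the selective, accuracy-weighted fusion described above might look like the following Python sketch. The dictionary field names and the 0.9 position weight are invented for illustration; the patent does not specify a data layout or concrete weights.

```python
def fuse_tracks(lidar_track, image_track, position_weight=0.9):
    """Fuse per attribute, weighting each sensor where it is accurate:
    the Lidar (Radar) track for position, the image track for class.

    The field names and the weight value are hypothetical.
    """
    fused = {}
    # Position: weighted toward the Lidar (Radar) measurement,
    # which is the more accurate position source.
    fused["position"] = tuple(
        position_weight * l + (1.0 - position_weight) * c
        for l, c in zip(lidar_track["position"], image_track["position"])
    )
    # Class: taken from the image sensor, which classifies more accurately.
    fused["class"] = image_track["class"]
    return fused
```

A usage example: fusing a Lidar position of (10.0, 0.0) with an image position of (12.0, 0.0) yields a position dominated by the Lidar value, while the class label comes from the image track.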
- the output unit 350 may output the track information generated by the association unit 340 , and may separately output information from the first and second sensors, when the point-matching unit 330 recognizes that the objects detected by processing the outputs from the first and second sensors are different from each other.
- the output unit 350 may output the track information generated by the association unit 340 or may separately output information from sensors, which is not fused, and which has a matching rate less than a predetermined ratio.
- FIG. 3 is a diagram for explaining a procedure of changing coordinates of a detection point to reference coordinates of an image layer.
- coordinate information of a detection point 530 included in an object may be converted into a world coordinate system of an image by projecting the coordinate information of the detection point 530 included in the object onto an image layer 410 .
- the detection point 530 included in information on the object detected using Lidar or Radar may be projected onto the image layer 410 , and the coordinates of the detection point 530 may be converted into a world coordinate system of an image through a TransMatrix obtained via calibration between Lidar and a camera or between Radar and a camera.
- in the converted world coordinate system, the coordinates (x, y, z) of the camera may be calculated from the coordinates (X, Y, Z) of Lidar (Radar) according to the triangle proportionality theorem, as shown in FIG. 3 .
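As an illustrative sketch only, the coordinate change above might be implemented as follows. The 4x4 `trans_matrix` stands in for the TransMatrix obtained via Lidar-camera (or Radar-camera) calibration, and the pinhole intrinsics `fx`, `fy`, `cx`, `cy` are hypothetical placeholders; the patent does not disclose concrete values or an API.

```python
import numpy as np

def project_points(points_lidar, trans_matrix, fx, fy, cx, cy):
    """Project Nx3 Lidar (Radar) points onto the image layer.

    trans_matrix: hypothetical 4x4 Lidar-to-camera transform (the
    "TransMatrix" obtained via calibration).
    fx, fy, cx, cy: hypothetical pinhole camera intrinsics.
    """
    pts = np.asarray(points_lidar, dtype=float)
    # Homogeneous coordinates: (X, Y, Z) -> (X, Y, Z, 1)
    ones = np.ones((pts.shape[0], 1))
    pts_h = np.hstack([pts, ones])
    # Change coordinates to the camera (image-layer reference) frame.
    cam = (trans_matrix @ pts_h.T).T[:, :3]
    # Pinhole projection by triangle proportionality: u = fx * x / z + cx
    z = cam[:, 2]
    u = fx * cam[:, 0] / z + cx
    v = fy * cam[:, 1] / z + cy
    return np.stack([u, v], axis=1)
```

With an identity transform, a point on the optical axis at depth 2 m projects to the principal point (cx, cy), matching the proportionality relation in FIG. 3.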
- FIG. 4 is a diagram for explaining a procedure of matching a detection point with an object in an image.
- a matching rate between a point cloud of a Lidar (Radar) track and an object in an image may be calculated by comparing a point cloud projected onto an image layer with information on an object detected from an image.
- the first box 510 corresponding to the object at the detection point 530 projected onto the image layer may be generated, the second box 520 corresponding to the object in the image may be generated, and a matching rate may be calculated by comparing the area of the first box 510 with the area of an overlapping region between the first and second boxes 510 and 520 .
- the first box 510 corresponding to the object at the detection point 530 projected onto the image layer may be generated
- the second box 520 corresponding to the object in the image
- the matching rate may be calculated based on the number of detection points 530 included in the second box 520 .
- FIGS. 5 A and 5 B are diagrams for explaining a procedure of calculating a matching rate between information on an object in an image and information on an object detected using Lidar.
- the matching rate between the information on the object in the image and the information on the projected object detected using Lidar may be calculated, and the target with the highest matching rate may be selected.
- the matching rate may be calculated based on the area of the box of the object detected using Lidar (Radar) and projected onto the image layer, and the area of the region in which it overlaps the box of the object in the image.
- the first box 510 corresponding to the object at the detection point projected onto the image layer may be generated, the second box 520 corresponding to the object in the image may be generated, and the matching rate may be calculated by comparing the area of any one of the first and second boxes 510 and 520 with the area of an overlapping region 550 between the first and second boxes 510 and 520 .
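A minimal sketch of this area-based matching rate, assuming boxes are given as `(x1, y1, x2, y2)` corner coordinates on the image layer. The patent allows either box to serve as the comparison area; using the second (image) box as the denominator here is an assumption for illustration.

```python
def box_area(box):
    # box = (x1, y1, x2, y2) on the image layer; degenerate boxes have area 0
    return max(0.0, box[2] - box[0]) * max(0.0, box[3] - box[1])

def area_matching_rate(first_box, second_box):
    """Matching rate: area of the overlapping region between the two
    boxes divided by the area of the second (image) box."""
    # Corners of the overlapping region
    ix1 = max(first_box[0], second_box[0])
    iy1 = max(first_box[1], second_box[1])
    ix2 = min(first_box[2], second_box[2])
    iy2 = min(first_box[3], second_box[3])
    overlap = box_area((ix1, iy1, ix2, iy2))
    denom = box_area(second_box)
    return overlap / denom if denom > 0 else 0.0
```

For example, a 10x10 first box overlapping the corner of a 10x10 second box over a 5x5 region gives a matching rate of 0.25, while identical boxes give 1.0.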
- the matching rate may be calculated based on the extent to which the detection point corresponding to an object detected using Lidar (Radar) is included in the box of the object in the image.
- the matching rate may be calculated based on the number of detection points 530 corresponding to the object detected using Lidar (Radar) included in the second box 520 corresponding to the object in the image.
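A sketch of the point-count variant. The patent states only that the rate is based on the number of detection points included in the second box; normalizing that count by the total number of projected points is an assumption made here for illustration.

```python
def point_matching_rate(detection_points, second_box):
    """Fraction of projected detection points falling inside the second
    box (the box of the object detected in the image).

    detection_points: list of (u, v) pixel coordinates on the image layer.
    second_box: (x1, y1, x2, y2) box of the object in the image.
    """
    if not detection_points:
        return 0.0
    x1, y1, x2, y2 = second_box
    inside = sum(1 for (u, v) in detection_points
                 if x1 <= u <= x2 and y1 <= v <= y2)
    return inside / len(detection_points)
```

For example, if two of three projected Lidar points fall inside the image object's box, the matching rate is 2/3.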
- FIG. 6 is a diagram showing matching between first and second object information of an image and first and second object information of Lidar.
- FIGS. 7 A and 7 B are diagrams for explaining a procedure of calculating a matching rate between first object information of an image and first and second object information of Lidar.
- a first box 510 - 1 corresponding to a first object detected using Lidar (Radar) and a first box 510 - 2 corresponding to a second object detected using Lidar (Radar) may be generated, and a second box 520 - 1 corresponding to a first object in an image and a second box 520 - 2 corresponding to a second object in the image may be generated.
- a matching rate may be calculated by comparing the area of any one of the first and second boxes 510 and 520 with the area of an overlapping region between the first and second boxes 510 and 520 , and then the counterpart object with the highest matching rate may be selected as the fusion target object.
- the matching rate may be calculated by comparing the area of the second box 520 with the area of the overlapping region 550 between the first and second boxes 510 and 520 .
- a matching rate may be calculated from the area of the overlapping region 550 between the first box 510 - 1 corresponding to the first object and the second box 520 - 1 corresponding to the first object in FIG. 6 .
- a matching rate may be calculated from the area of the overlapping region 550 between the first box 510 - 2 corresponding to the second object and the second box 520 - 1 corresponding to the first object in FIG. 6 .
- when a first matching rate between the information on the first object in the image and the information on the first object detected using Lidar (Radar) is compared with a second matching rate between the information on the first object in the image and the information on the second object detected using Lidar (Radar), and the first matching rate is greater than the second matching rate, the information on the first object in the image and the information on the first object detected using Lidar (Radar) may be recognized to be the same object, and the information on the first object in the image and the information on the second object detected using Lidar (Radar) may be recognized to be different objects.
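The selection of the Lidar (Radar) counterpart with the highest matching rate, gated by the reference ratio, can be sketched as follows. `matching_rate` is a caller-supplied callable implementing either matching-rate calculation described earlier, and the 0.8 default mirrors the roughly 80% to 90% reference ratio mentioned above; both are illustrative assumptions.

```python
def select_fusion_target(image_box, lidar_boxes, matching_rate,
                         reference_ratio=0.8):
    """Pick the Lidar (Radar) object whose box best matches the image
    object's box; mark it as the same object (fusion target) only when
    the best rate is equal to or greater than the reference ratio.

    matching_rate: callable (lidar_box, image_box) -> rate in [0, 1].
    """
    best_idx, best_rate = None, 0.0
    for idx, lidar_box in enumerate(lidar_boxes):
        rate = matching_rate(lidar_box, image_box)
        if rate > best_rate:
            best_idx, best_rate = idx, rate
    if best_idx is not None and best_rate >= reference_ratio:
        return best_idx, best_rate   # same object: fuse track information
    return None, best_rate           # different objects: output separately
```

Returning `None` corresponds to the case in which no Lidar (Radar) track clears the reference ratio, so the sensor outputs are passed on without fusion.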
- FIG. 8 is a flowchart for explaining a heterogeneous sensor fusion method according to an embodiment of the present disclosure.
- a detection point may be input from a first sensor (S 10 ) and an image may be received from a second sensor (S 30 ).
- the first sensor 100 may include at least one of Lidar or Radar
- the second sensor 200 may include a camera, but the present disclosure is not limited thereto.
- an object may be detected by processing the detection point input from the first sensor (S 20 ), and coordinate information of a detection point of the object may be converted into a world coordinate system of an image by projecting the coordinate information of the detection point of the object onto an image layer (S 50 ).
- when the detection point is received from the first sensor, the object may be detected and classified based on the detection point, and the coordinate information of the detection point of the detected object may be stored.
- the object may be detected by processing an image received from the second sensor (S 40 ).
- when an image is received from the second sensor, an object may be detected and classified from the image, and coordinate information of an image pixel included in the detected object may be stored.
- the matching rate may be calculated by matching the detection point with the object in the image (S 60 ).
- a first box corresponding to an object at a detection point projected onto an image layer may be generated, a second box corresponding to an object in an image may be generated, and a matching rate may be calculated by comparing the area of the first box with the area of the overlapping region between the first and second boxes.
- a first box corresponding to an object at a detection point projected onto an image layer may be generated, a second box corresponding to an object in an image may be generated, and a matching rate may be calculated based on the number of detection points included in the second box.
- when the matching rate is equal to or greater than a reference ratio, the object in the image and the object corresponding to the detection point may be recognized to be the same object, and when the matching rate is less than the reference ratio, the object in the image and the object at the detection point may be recognized to be different objects.
- track information may be generated by fusing information from the first and second sensors (S 80 ).
- the generated track information may be output (S 90 ).
- information from the first and second sensors may be output separately.
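The overall flow can be condensed into one hypothetical function. Every helper is passed in as a callable because the patent describes the processors abstractly, and the mapping of the unlabeled comparison step to "S 70" in the comments is an assumption.

```python
def sensor_fusion_step(detection_points, image, detect_from_points,
                       detect_from_image, project, matching_rate,
                       fuse, reference_ratio=0.8):
    """One pass of the fusion flow sketched in FIG. 8.

    All helper callables are hypothetical stand-ins for the point
    processor, image processor, point-matching unit, and association
    unit described in the disclosure.
    """
    lidar_obj = detect_from_points(detection_points)   # S 20: detect from points
    image_obj = detect_from_image(image)               # S 40: detect from image
    projected = project(lidar_obj)                     # S 50: project onto image layer
    rate = matching_rate(projected, image_obj)         # S 60: calculate matching rate
    if rate >= reference_ratio:                        # compare with reference ratio (assumed S 70)
        return fuse(lidar_obj, image_obj)              # S 80: fused track information
    return (lidar_obj, image_obj)                      # output separately, unfused
```

When the rate clears the reference ratio the fused track is returned (S 90 outputs it); otherwise the two sensor outputs are returned side by side, matching the separate-output branch above.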
- the matching rate may be calculated by matching the detection point of the first sensor with the object in the image of the second sensor, and the information from the first and second sensors may be fused based on the matching rate, and thus the performance of fusion of heterogeneous sensors may be improved, thereby improving the accuracy of track information.
- coordinates of a point cloud may be changed to reference coordinates of the image layer, and the point cloud with the changed coordinates may be compared with a pixel of the image, thereby improving the performance for determining whether detected objects are the same object.
- the present disclosure may convert data in a simple manner and may improve the performance of sensor fusion using a fusion application method between heterogeneous sensors having raw data with similar characteristics.
- the present disclosure may improve the performance of sensor fusion without an additional increase in material costs by implementing logic in software.
- a computer-readable recording medium having recorded thereon a program for executing a heterogeneous sensor fusion method of a heterogeneous sensor fusion apparatus may perform the heterogeneous sensor fusion method of the heterogeneous sensor fusion apparatus according to embodiments of the present disclosure.
- the aforementioned present disclosure can also be embodied as computer-readable code stored on a computer-readable recording medium.
- the computer-readable recording medium is any data storage device that can store data which can thereafter be read by a computer. Examples of the computer-readable recording medium include a hard disk drive (HDD), a solid state drive (SSD), a silicon disc drive (SDD), read-only memory (ROM), random-access memory (RAM), a CD-ROM, magnetic tapes, floppy disks, and optical data storage devices.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Data Mining & Analysis (AREA)
- Multimedia (AREA)
- Evolutionary Computation (AREA)
- Artificial Intelligence (AREA)
- Bioinformatics & Computational Biology (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Evolutionary Biology (AREA)
- General Engineering & Computer Science (AREA)
- Life Sciences & Earth Sciences (AREA)
- Radar, Positioning & Navigation (AREA)
- Electromagnetism (AREA)
- Remote Sensing (AREA)
- Geometry (AREA)
- Computer Networks & Wireless Communication (AREA)
- Health & Medical Sciences (AREA)
- Computing Systems (AREA)
- Databases & Information Systems (AREA)
- General Health & Medical Sciences (AREA)
- Medical Informatics (AREA)
- Software Systems (AREA)
- Radar Systems Or Details Thereof (AREA)
- Traffic Control Systems (AREA)
Abstract
Description
(here, d is the specifications of an image sensor)
Claims (20)
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR10-2020-0089670 | 2020-07-20 | ||
KR1020200089670A KR20220010929A (en) | 2020-07-20 | 2020-07-20 | Apparatus and method for performing heterogeneous sensor-fusion |
Publications (2)
Publication Number | Publication Date |
---|---|
US20220019862A1 US20220019862A1 (en) | 2022-01-20 |
US11768920B2 true US11768920B2 (en) | 2023-09-26 |
Family
ID=79291530
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/094,499 Active 2041-02-26 US11768920B2 (en) | 2020-07-20 | 2020-11-10 | Apparatus and method for performing heterogeneous sensor fusion |
Country Status (2)
Country | Link |
---|---|
US (1) | US11768920B2 (en) |
KR (1) | KR20220010929A (en) |
Citations (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20060245617A1 (en) * | 2005-03-30 | 2006-11-02 | Ying Shan | Object identification between non-overlapping cameras without direct feature matching |
US20140333722A1 (en) * | 2013-05-13 | 2014-11-13 | Samsung Electronics Co., Ltd. | Apparatus and method of processing depth image using relative angle between image sensor and target object |
US20140347475A1 (en) * | 2013-05-23 | 2014-11-27 | Sri International | Real-time object detection, tracking and occlusion reasoning |
US20170083765A1 (en) * | 2015-09-23 | 2017-03-23 | Behavioral Recognition Systems, Inc. | Detected object tracker for a video analytics system |
US9729865B1 (en) * | 2014-06-18 | 2017-08-08 | Amazon Technologies, Inc. | Object detection and tracking |
US10108867B1 (en) * | 2017-04-25 | 2018-10-23 | Uber Technologies, Inc. | Image-based pedestrian detection |
US20180330175A1 (en) * | 2017-05-10 | 2018-11-15 | Fotonation Limited | Multi-camera vision system and method of monitoring |
US10140855B1 (en) * | 2018-08-24 | 2018-11-27 | Iteris, Inc. | Enhanced traffic detection by fusing multiple sensor data |
US20190294889A1 (en) * | 2018-03-26 | 2019-09-26 | Nvidia Corporation | Smart area monitoring with artificial intelligence |
US10429839B2 (en) * | 2014-09-05 | 2019-10-01 | SZ DJI Technology Co., Ltd. | Multi-sensor environmental mapping |
US10468062B1 (en) * | 2018-04-03 | 2019-11-05 | Zoox, Inc. | Detecting errors in sensor data |
US20200218913A1 (en) * | 2019-01-04 | 2020-07-09 | Qualcomm Incorporated | Determining a motion state of a target object |
US10859684B1 (en) * | 2019-11-12 | 2020-12-08 | Huawei Technologies Co., Ltd. | Method and system for camera-lidar calibration |
US20210019365A1 (en) * | 2019-07-18 | 2021-01-21 | Adobe Inc. | Correction Techniques of Overlapping Digital Glyphs |
US20210110168A1 (en) * | 2019-10-10 | 2021-04-15 | Beijing Baidu Netcom Science Technology Co., Ltd. | Object tracking method and apparatus |
US20210224572A1 (en) * | 2020-01-21 | 2021-07-22 | Vanadata Inc. | Image analysis-based classification and visualization of events |
US20220019845A1 (en) * | 2019-04-03 | 2022-01-20 | Huawei Technologies Co., Ltd. | Positioning Method and Apparatus |
US20220327737A1 (en) * | 2019-12-13 | 2022-10-13 | Ohio University | Determining position using computer vision, lidar, and trilateration |
2020
- 2020-07-20 KR KR1020200089670A patent/KR20220010929A/en active Search and Examination
- 2020-11-10 US US17/094,499 patent/US11768920B2/en active Active
Also Published As
Publication number | Publication date |
---|---|
US20220019862A1 (en) | 2022-01-20 |
KR20220010929A (en) | 2022-01-27 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10782399B2 (en) | Object detecting method and apparatus using light detection and ranging (LIDAR) sensor and radar sensor | |
US20200117917A1 (en) | Apparatus and Method for Distinguishing False Target in Vehicle and Vehicle Including the Same | |
US10452999B2 (en) | Method and a device for generating a confidence measure for an estimation derived from images captured by a camera mounted on a vehicle | |
JP6349742B2 (en) | Multi-lane detection method and detection system | |
EP2657644B1 (en) | Positioning apparatus and positioning method | |
US11754701B2 (en) | Electronic device for camera and radar sensor fusion-based three-dimensional object detection and operating method thereof | |
US10147015B2 (en) | Image processing device, image processing method, and computer-readable recording medium | |
US11094080B2 (en) | Method and device for determining whether a hand cooperates with a manual steering element of a vehicle | |
CN110969079A (en) | Object detection system for a vehicle | |
CN112329505A (en) | Method and apparatus for detecting an object | |
US11748593B2 (en) | Sensor fusion target prediction device and method for vehicles and vehicle including the device | |
JP6217373B2 (en) | Operation determination method, operation determination apparatus, and operation determination program | |
US20220120858A1 (en) | Method and device for detecting objects | |
US20220332327A1 (en) | Method and Apparatus for Fusing Sensor Information and Recording Medium Storing Program to Execute the Method | |
US9183748B2 (en) | Apparatus for determining available parking space and method thereof | |
US9418443B2 (en) | Apparatus and method for detecting obstacle | |
US20220058895A1 (en) | Apparatus and method for adjusting confidence level of output of sensor | |
US11971257B2 (en) | Method and apparatus with localization | |
US20170270682A1 (en) | Estimation apparatus, estimation method, and computer program product | |
EP4258078A1 (en) | Positioning method and apparatus, and vehicle | |
US11768920B2 (en) | Apparatus and method for performing heterogeneous sensor fusion | |
WO2018220824A1 (en) | Image discrimination device | |
Vaida et al. | Automatic extrinsic calibration of LIDAR and monocular camera images | |
CN114997264A (en) | Training data generation method, model training method, model detection method, device and electronic equipment | |
US20200370893A1 (en) | Device and method for compensating for route of autonomous vehicle |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| FEPP | Fee payment procedure | Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
| AS | Assignment | Owner name: KIA MOTORS CORPORATION, KOREA, REPUBLIC OF; Owner name: HYUNDAI MOTOR COMPANY, KOREA, REPUBLIC OF. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST; ASSIGNOR: KIM, SEONG HWAN; REEL/FRAME: 055234/0886; Effective date: 20201028 |
| STPP | Information on status: patent application and granting procedure in general | Free format text: NON FINAL ACTION MAILED |
| STPP | Information on status: patent application and granting procedure in general | Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
| STPP | Information on status: patent application and granting procedure in general | Free format text: FINAL REJECTION MAILED |
| STPP | Information on status: patent application and granting procedure in general | Free format text: NON FINAL ACTION MAILED |
| STPP | Information on status: patent application and granting procedure in general | Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED |
| STCF | Information on status: patent grant | Free format text: PATENTED CASE |