US20210406618A1 - Electronic device for vehicle and method of operating electronic device for vehicle - Google Patents


Info

Publication number
US20210406618A1
Authority
US
United States
Prior art keywords
processor
vehicle
data
depth image
time point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
US16/603,049
Other versions
US11507789B2 (en)
Inventor
Chanho Park
Kyunghee KIM
Taehui Yun
Dongha Lee
Gaehwan CHO
Jooyoung Lee
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
LG Electronics Inc
Original Assignee
LG Electronics Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by LG Electronics Inc
Publication of US20210406618A1
Application granted
Publication of US11507789B2
Legal status: Active (expiration adjusted)

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/251Fusion techniques of input or preprocessed data
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06K9/6289
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60RVEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R11/00Arrangements for holding or mounting articles, not otherwise provided for
    • B60R11/04Mounting of cameras operative during drive; Arrangement of controls thereof relative to the vehicle
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W40/00Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
    • B60W40/02Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models related to ambient conditions
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/86Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88Lidar systems specially adapted for specific applications
    • G01S17/89Lidar systems specially adapted for specific applications for mapping or imaging
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88Lidar systems specially adapted for specific applications
    • G01S17/93Lidar systems specially adapted for specific applications for anti-collision purposes
    • G01S17/931Lidar systems specially adapted for specific applications for anti-collision purposes of land vehicles
    • G06K9/00791
    • G06K9/3233
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • G06T7/55Depth or shape recovery from multiple images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • G06T7/55Depth or shape recovery from multiple images
    • G06T7/593Depth or shape recovery from multiple images from stereo images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/25Determination of region of interest [ROI] or a volume of interest [VOI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/759Region-based matching
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V10/803Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of input or preprocessed data
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20Image signal generators
    • H04N13/204Image signal generators using stereoscopic image cameras
    • H04N13/239Image signal generators using stereoscopic image cameras using two 2D image sensors having a relative position equal to or related to the interocular distance
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20Image signal generators
    • H04N13/271Image signal generators wherein the generated image signals comprise depth maps or disparity maps
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60RVEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R11/00Arrangements for holding or mounting articles, not otherwise provided for
    • B60R2011/0001Arrangements for holding or mounting articles, not otherwise provided for characterised by position
    • B60R2011/0003Arrangements for holding or mounting articles, not otherwise provided for characterised by position inside the vehicle
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W2420/00Indexing codes relating to the type of sensors based on the principle of their operation
    • B60W2420/40Photo, light or radio wave sensitive means, e.g. infrared sensors
    • B60W2420/403Image sensing, e.g. optical camera
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W2420/00Indexing codes relating to the type of sensors based on the principle of their operation
    • B60W2420/40Photo, light or radio wave sensitive means, e.g. infrared sensors
    • B60W2420/408Radar; Laser, e.g. lidar
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W2556/00Input parameters relating to data
    • B60W2556/35Data fusion
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W60/00Drive control systems specially adapted for autonomous road vehicles
    • B60W60/001Planning or execution of driving tasks
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60YINDEXING SCHEME RELATING TO ASPECTS CROSS-CUTTING VEHICLE TECHNOLOGY
    • B60Y2300/00Purposes or special features of road vehicle drive control systems
    • B60Y2300/14Cruise control
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • G06T2207/10021Stereoscopic video; Stereoscopic image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10024Color image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20021Dividing image into blocks, subimages or windows
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30248Vehicle exterior or interior
    • G06T2207/30252Vehicle exterior; Vicinity of vehicle

Definitions

  • the present disclosure relates to an electronic device for a vehicle and a method of operating an electronic device for a vehicle.
  • a vehicle is an apparatus that carries a passenger in the direction intended by the passenger.
  • a car is the main example of such a vehicle.
  • An autonomous vehicle is a vehicle that is capable of traveling autonomously without driving operation by a driver.
  • Such an autonomous vehicle is provided with a plurality of sensors for detecting objects outside the vehicle. Examples of such sensors include a camera, a lidar, a radar, an ultrasonic sensor, and the like.
  • the present disclosure has been made in view of the above problems, and it is an object of the present disclosure to provide an electronic device for a vehicle that performs low-level sensor fusion.
  • an electronic device for a vehicle including a processor receiving first image data from a first camera, receiving second image data from a second camera, receiving first sensing data from a first lidar, generating a depth image based on the first image data and the second image data, and performing fusion of the first sensing data for each of divided regions in the depth image.
  • an electronic device for a vehicle including a first sensing device mounted in a vehicle, the first sensing device including a first camera generating first image data and a first lidar generating first sensing data, a second sensing device mounted in the vehicle so as to be spaced apart from the first sensing device, the second sensing device including a second camera generating second image data and a second lidar generating second sensing data, and at least one processor generating a depth image based on the first image data and the second image data and performing fusion of the first sensing data and the second sensing data for each of divided regions in the depth image.
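The patent does not specify the fusion rule applied within each divided region, so the following is only a minimal numpy sketch of the idea: split the camera-derived depth image into grid cells, and in each cell blend the stereo depth with lidar returns that project into that cell. The grid size, the 0.3/0.7 weights, and the (u, v, range) point layout are illustrative assumptions, not taken from the patent.

```python
import numpy as np

def fuse_depth_with_lidar(depth_image, lidar_points, grid=(4, 4)):
    """Per-region low-level fusion of a stereo depth image with lidar data.

    depth_image : (H, W) array of depths from the stereo pair, in metres.
    lidar_points: (N, 3) array of (u, v, range) - pixel coordinates of each
                  lidar return after projection into the image, plus its range.
    grid        : number of divided regions (rows, cols) - assumed, not from
                  the patent.
    """
    h, w = depth_image.shape
    fused = depth_image.copy()
    cell_h, cell_w = h // grid[0], w // grid[1]
    for row in range(grid[0]):
        for col in range(grid[1]):
            ys = slice(row * cell_h, (row + 1) * cell_h)
            xs = slice(col * cell_w, (col + 1) * cell_w)
            # select lidar returns whose projected pixel falls in this cell
            in_cell = lidar_points[
                (lidar_points[:, 1] >= ys.start) & (lidar_points[:, 1] < ys.stop) &
                (lidar_points[:, 0] >= xs.start) & (lidar_points[:, 0] < xs.stop)
            ]
            if len(in_cell):
                # illustrative rule: weight the lidar mean range more heavily,
                # since lidar range is typically more accurate than stereo depth
                lidar_mean = in_cell[:, 2].mean()
                fused[ys, xs] = 0.3 * fused[ys, xs] + 0.7 * lidar_mean
    return fused
```

Regions with no lidar returns keep the stereo depth unchanged, which matches the per-region character of the fusion described above: each divided region is fused independently of its neighbours.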
  • FIG. 1 is a view illustrating the external appearance of a vehicle according to an embodiment of the present disclosure.
  • FIG. 2 is a control block diagram of the vehicle according to the embodiment of the present disclosure.
  • FIG. 3 is a control block diagram of an electronic device for a vehicle according to an embodiment of the present disclosure.
  • FIG. 4 is a view for explaining the electronic device for a vehicle according to the embodiment of the present disclosure.
  • FIG. 5 is a view for explaining a first sensing device and a second sensing device according to the embodiment of the present disclosure.
  • FIG. 6 is a view for explaining a lidar according to the embodiment of the present disclosure.
  • FIG. 7 is a view for explaining a camera according to the embodiment of the present disclosure.
  • FIG. 8 is a view for explaining a depth image according to the embodiment of the present disclosure.
  • FIG. 9 is a view for explaining a depth image, which is divided into a plurality of regions, according to the embodiment of the present disclosure.
  • FIG. 10 is a view for explaining a table in which distance values for respective colors are arranged according to the embodiment of the present disclosure.
  • FIG. 11 is a view for explaining the operation of performing sensor fusion using a SLAM algorithm according to the embodiment of the present disclosure.
  • FIG. 12 is a view for explaining the operation of performing sensor fusion using V2X according to the embodiment of the present disclosure.
  • FIGS. 13 and 14 are views for explaining the operation of performing high-level fusion and low-level fusion according to the embodiment of the present disclosure.
  • FIG. 1 is a view illustrating a vehicle according to an embodiment of the present disclosure.
  • a vehicle 10 is defined as a transportation means that travels on a road or on rails.
  • the vehicle 10 conceptually encompasses cars, trains, and motorcycles.
  • the vehicle 10 may be any of an internal combustion vehicle equipped with an engine as a power source, a hybrid vehicle equipped with an engine and an electric motor as power sources, an electric vehicle equipped with an electric motor as a power source, and the like.
  • the vehicle 10 may be a shared vehicle.
  • the vehicle 10 may be an autonomous vehicle.
  • the vehicle 10 may include an electronic device 100 .
  • the electronic device 100 may be a device that generates information about objects outside the vehicle.
  • the information about objects may include information about the presence or absence of an object, information about the location of an object, information about the distance between the vehicle 10 and an object, and information about the relative speed of the vehicle 10 with respect to an object.
  • an object may include at least one of a lane, another vehicle, a pedestrian, a two-wheeled vehicle, a traffic signal, a light, a road, a structure, a speed bump, a geographic feature, or an animal.
  • FIG. 2 is a control block diagram of the vehicle according to the embodiment of the present disclosure.
  • the vehicle 10 may include a vehicular electronic device 100 , a user interface device 200 , a communication device 220 , a driving operation device 230 , a main ECU 240 , a vehicle-driving device 250 , a traveling system 260 , a sensing unit 270 , and a location-data-generating device 280 .
  • the vehicular electronic device 100 may detect objects outside the vehicle 10 .
  • the vehicular electronic device 100 may include at least one sensor capable of detecting objects outside the vehicle 10 .
  • the vehicular electronic device 100 may include at least one of a camera, a radar, a lidar, an ultrasonic sensor, or an infrared sensor.
  • the vehicular electronic device 100 may provide data on an object, which is generated based on a sensing signal generated by a sensor, to at least one electronic device included in the vehicle.
  • the vehicular electronic device 100 may be referred to as an object detection device.
  • the user interface device 200 is a device used to enable the vehicle 10 to communicate with a user.
  • the user interface device 200 may receive user input and may provide information generated by the vehicle 10 to the user.
  • the vehicle 10 may implement a User Interface (UI) or a User Experience (UX) through the user interface device 200 .
  • the communication device 220 may exchange signals with devices located outside the vehicle 10 .
  • the communication device 220 may exchange signals with at least one of infrastructure (e.g. a server or a broadcasting station) or other vehicles.
  • the communication device 220 may include at least one of a transmission antenna, a reception antenna, a Radio-Frequency (RF) circuit capable of implementing various communication protocols, or an RF device.
  • the driving operation device 230 is a device that receives user input for driving the vehicle. In the manual mode, the vehicle 10 may travel based on a signal provided by the driving operation device 230 .
  • the driving operation device 230 may include a steering input device (e.g. a steering wheel), an acceleration input device (e.g. an accelerator pedal), and a brake input device (e.g. a brake pedal).
  • the main ECU 240 may control the overall operation of at least one electronic device provided in the vehicle 10 .
  • the driving control device 250 is a device that electrically controls various vehicle-driving devices provided in the vehicle 10 .
  • the driving control device 250 may include a powertrain driving controller, a chassis driving controller, a door/window driving controller, a safety device driving controller, a lamp driving controller, and an air-conditioner driving controller.
  • the powertrain driving controller may include a power source driving controller and a transmission driving controller.
  • the chassis driving controller may include a steering driving controller, a brake driving controller, and a suspension driving controller.
  • the safety device driving controller may include a seat belt driving controller for controlling the seat belt.
  • the vehicle driving control device 250 may be referred to as a control electronic control unit (a control ECU).
  • the traveling system 260 may generate a signal for controlling the movement of the vehicle 10 or outputting information to the user based on the data on an object received from the vehicular electronic device 100 .
  • the traveling system 260 may provide the generated signal to at least one of the user interface device 200 , the main ECU 240 , or the vehicle-driving device 250 .
  • the traveling system 260 may conceptually include an Advanced Driver Assistance System (ADAS).
  • the ADAS 260 may implement at least one of Adaptive Cruise Control (ACC), Autonomous Emergency Braking (AEB), Forward Collision Warning (FCW), Lane Keeping Assist (LKA), Lane Change Assist (LCA), Target Following Assist (TFA), Blind Spot Detection (BSD), High Beam Assist (HBA), Auto Parking System (APS), PD collision warning system, Traffic Sign Recognition (TSR), Traffic Sign Assist (TSA), Night Vision (NV), Driver Status Monitoring (DSM), or Traffic Jam Assist (TJA).
  • the traveling system 260 may include an autonomous-driving electronic control unit (an autonomous-driving ECU).
  • the autonomous-driving ECU may set an autonomous-driving route based on data received from at least one of the other electronic devices provided in the vehicle 10 .
  • the autonomous-driving ECU may set an autonomous-driving route based on data received from at least one of the user interface device 200 , the vehicular electronic device 100 , the communication device 220 , the sensing unit 270 , or the location-data-generating device 280 .
  • the autonomous-driving ECU may generate a control signal so that the vehicle 10 travels along the autonomous-driving route.
  • the control signal generated by the autonomous-driving ECU may be provided to at least one of the main ECU 240 or the vehicle-driving device 250 .
  • the sensing unit 270 may sense the state of the vehicle.
  • the sensing unit 270 may include at least one of an inertial measurement unit (IMU) sensor, a collision sensor, a wheel sensor, a speed sensor, an inclination sensor, a weight detection sensor, a heading sensor, a position module, a vehicle forward/reverse movement sensor, a battery sensor, a fuel sensor, a tire sensor, a steering sensor for detecting rotation of the steering wheel, a vehicle internal temperature sensor, a vehicle internal humidity sensor, an ultrasonic sensor, an illuminance sensor, an accelerator pedal position sensor, or a brake pedal position sensor.
  • the inertial measurement unit (IMU) sensor may include at least one of an acceleration sensor, a gyro sensor, or a magnetic sensor.
  • the sensing unit 270 may generate data on the state of the vehicle based on the signal generated by at least one sensor.
  • the sensing unit 270 may acquire sensing signals of vehicle attitude information, vehicle motion information, vehicle yaw information, vehicle roll information, vehicle pitch information, vehicle collision information, vehicle heading information, vehicle angle information, vehicle speed information, vehicle acceleration information, vehicle inclination information, vehicle forward/reverse movement information, battery information, fuel information, tire information, vehicle lamp information, vehicle internal temperature information, vehicle internal humidity information, a steering wheel rotation angle, vehicle external illuminance, the pressure applied to the accelerator pedal, the pressure applied to the brake pedal, and so on.
  • the sensing unit 270 may further include an accelerator pedal sensor, a pressure sensor, an engine speed sensor, an air flow sensor (AFS), an air temperature sensor (ATS), a water temperature sensor (WTS), a throttle position sensor (TPS), a top dead center (TDC) sensor, a crank angle sensor (CAS), and so on.
  • the sensing unit 270 may generate vehicle state information based on the sensing data.
  • the vehicle state information may be generated based on data detected by various sensors included in the vehicle.
  • the vehicle state information may include vehicle attitude information, vehicle speed information, vehicle inclination information, vehicle weight information, vehicle heading information, vehicle battery information, vehicle fuel information, vehicle tire air pressure information, vehicle steering information, vehicle internal temperature information, vehicle internal humidity information, pedal position information, vehicle engine temperature information, and so on.
  • the sensing unit may include a tension sensor.
  • the tension sensor may generate a sensing signal based on the tension state of the seat belt.
  • the location-data-generating device 280 may generate data on the location of the vehicle 10 .
  • the location-data-generating device 280 may include at least one of a global positioning system (GPS) or a differential global positioning system (DGPS).
  • the location-data-generating device 280 may generate data on the location of the vehicle 10 based on the signal generated by at least one of the GPS or the DGPS.
  • the location-data-generating device 280 may correct the location data based on at least one of the inertial measurement unit (IMU) of the sensing unit 270 or the camera of the vehicular electronic device 100 .
  • the location-data-generating device 280 may be referred to as a location positioning device.
  • the location-data-generating device 280 may be referred to as a global navigation satellite system (GNSS).
  • the vehicle 10 may include an internal communication system 50 .
  • the electronic devices included in the vehicle 10 may exchange signals via the internal communication system 50 .
  • the signals may include data.
  • the internal communication system 50 may use at least one communication protocol (e.g. CAN, LIN, FlexRay, MOST, and Ethernet).
  • FIG. 3 is a control block diagram of the electronic device according to the embodiment of the present disclosure.
  • the electronic device 100 may include a memory 140 , a processor 170 , an interface unit 180 , and a power supply unit 190 .
  • the memory 140 is electrically connected to the processor 170 .
  • the memory 140 may store default data for a unit, control data for controlling the operation of the unit, and input and output data.
  • the memory 140 may store data processed by the processor 170 .
  • the memory 140 may be implemented as at least one hardware device selected from among read only memory (ROM), random access memory (RAM), erasable and programmable ROM (EPROM), a flash drive, or a hard drive.
  • the memory 140 may store various data for the overall operation of the electronic device 100 , such as programs for processing or control in the processor 170 .
  • the memory 140 may be integrated with the processor 170 . In some embodiments, the memory 140 may be configured as a lower-level component of the processor 170 .
  • the interface unit 180 may exchange signals with at least one electronic device provided in the vehicle 10 in a wired or wireless manner.
  • the interface unit 180 may exchange signals with at least one of the vehicular electronic device 100 , the communication device 220 , the driving operation device 230 , the main ECU 240 , the vehicle-driving device 250 , the ADAS 260 , the sensing unit 270 , or the location-data-generating device 280 in a wired or wireless manner.
  • the interface unit 180 may be configured as at least one of a communication module, a terminal, a pin, a cable, a port, a circuit, an element, or a device.
  • the interface unit 180 may receive location data of the vehicle 10 from the location-data-generating device 280 .
  • the interface unit 180 may receive travel speed data from the sensing unit 270 .
  • the interface unit 180 may receive data on objects around the vehicle from the vehicular electronic device 100 .
  • the power supply unit 190 may supply power to the electronic device 100 .
  • the power supply unit 190 may receive power from a power source (e.g. a battery) included in the vehicle 10 , and may supply the power to each unit of the electronic device 100 .
  • the power supply unit 190 may be operated in response to a control signal from the main ECU 240 .
  • the power supply unit 190 may be implemented as a switched-mode power supply (SMPS).
  • the processor 170 may be electrically connected to the memory 140 , the interface unit 180 , and the power supply unit 190 , and may exchange signals with the same.
  • the processor 170 may be implemented using at least one of application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), processors, controllers, micro-controllers, microprocessors, or electrical units for performing other functions.
  • the processor 170 may be driven by the power supplied from the power supply unit 190 .
  • the processor 170 may receive data, process data, generate a signal, and provide a signal while receiving the power from the power supply unit 190 .
  • the processor 170 may receive information from other electronic devices in the vehicle 10 through the interface unit 180 .
  • the processor 170 may provide a control signal to other electronic devices in the vehicle 10 through the interface unit 180 .
  • the processor 170 may fuse sensing data, received from various types of sensors, at a low level.
  • the processor 170 may fuse the image data received from the camera and the sensing data received from the lidar at a low level.
  • the processor 170 may generate a depth image based on the image data photographed by stereo cameras (a first camera and a second camera).
  • the processor 170 may receive first image data from the first camera.
  • the first camera may be mounted in the vehicle 10 and may capture an image of the surroundings of the vehicle 10 .
  • the processor 170 may receive second image data from the second camera.
  • the second camera may be mounted in the vehicle 10 at a position different from that of the first camera and may capture an image of the surroundings of the vehicle 10 .
  • the second camera may be oriented in the same direction as the first camera.
  • both the first camera and the second camera may be oriented in the forward direction of the vehicle 10 .
  • both the first camera and the second camera may be oriented in the backward direction of the vehicle 10 .
  • both the first camera and the second camera may be oriented in the leftward direction of the vehicle 10 .
  • both the first camera and the second camera may be oriented in the rightward direction of the vehicle 10 .
  • the processor 170 may receive first sensing data from a first lidar.
  • the first lidar may be mounted in the vehicle 10 , may emit a laser signal toward the outside of the vehicle 10 , and may receive a laser signal reflected by an object.
  • the first sensing data may be data that is generated based on an emitted laser signal and a received laser signal.
  • the processor 170 may generate a depth image based on the first image data and the second image data.
  • the processor 170 may detect a disparity based on the first image data and the second image data.
  • the processor 170 may generate a depth image based on information about the disparity.
  • the depth image may be configured as a red-green-blue (RGB) image.
  • the depth image may be understood as a depth map.
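The disparity-to-depth relation underlying the steps above can be sketched as follows. This is a minimal sketch using the standard stereo relation Z = f·B/d; the focal length and baseline values are illustrative assumptions, since the disclosure does not specify camera parameters:

```python
import numpy as np

def depth_from_disparity(disparity, focal_px, baseline_m):
    """Convert a disparity map (pixels) into a depth map (meters)
    using the standard stereo relation Z = f * B / d."""
    depth = np.full(disparity.shape, np.inf)
    valid = disparity > 0                  # zero disparity -> infinitely far
    depth[valid] = focal_px * baseline_m / disparity[valid]
    return depth

# hypothetical calibration: 800 px focal length, 0.5 m baseline
d = np.array([[8.0, 4.0], [0.0, 2.0]])
z = depth_from_disparity(d, focal_px=800.0, baseline_m=0.5)
```

Larger disparities map to nearer objects; pixels with no disparity are marked infinitely far rather than given a spurious distance.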
  • the processor 170 may generate a depth image based on image data photographed by a mono camera.
  • the processor 170 may receive first image data photographed at a first time point from the first camera.
  • the processor 170 may receive second image data photographed at a second time point, which is different from the first time point, from the first camera.
  • the location of the first camera at the second time point may be different from the location of the first camera at the first time point.
  • the processor 170 may generate a depth image based on the first image data and the second image data received from the first camera.
  • the processor 170 may detect an object based on the depth image. For example, the processor 170 may detect, based on the depth image, at least one of a lane, another vehicle, a pedestrian, a 2-wheeled vehicle, a traffic signal, a light, a road, a structure, a speed bump, a geographic feature, or an animal.
  • the processor 170 may fuse first sensing data received from the first lidar into each region divided in the depth image.
  • the processor 170 may apply the first sensing data to the depth image. By applying the first sensing data to the depth image, a more accurate value of the distance to the object may be acquired than when using only the depth image.
  • the processor 170 may acquire RGB-level data for each region of the depth image.
  • the processor 170 may refer to a table stored in the memory 140 .
  • the table may be a table in which distance values for RGB levels are arranged.
  • the processor 170 may acquire a distance value for each region of the depth image based on the table.
  • the region may be composed of one or a plurality of pixels.
  • the table may be acquired through experimentation and may be stored in advance in the memory 140 .
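The table-based lookup described above can be sketched as follows. The table entries and region geometry here are hypothetical, since the disclosure states only that the table maps RGB levels to distance values and is acquired through experimentation and stored in advance in the memory 140:

```python
import numpy as np

# hypothetical pre-stored table mapping an RGB level to a distance in meters
DISTANCE_TABLE = {
    (255, 0, 0): 5.0,    # near range rendered red
    (0, 255, 0): 20.0,   # mid range rendered green
    (0, 0, 255): 60.0,   # far range rendered blue
}

def region_distance(depth_image, top, left, size):
    """Average the RGB level over one divided region and look up its distance."""
    region = depth_image[top:top + size, left:left + size]
    level = tuple(int(c) for c in region.reshape(-1, 3).mean(axis=0).round())
    return DISTANCE_TABLE.get(level)  # None if the level is not tabulated

img = np.zeros((4, 4, 3), dtype=np.uint8)
img[:2, :2] = (255, 0, 0)             # a 2x2 region at the full-red level
```

A region may span one or several pixels; averaging before the lookup keeps the query a single table access per region.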
  • the processor 170 may divide the depth image into a plurality of regions, each of which has a first area.
  • the processor 170 may acquire RGB-level data for each divided region.
  • the processor 170 may divide the depth image such that each of the plurality of regions has a size in which two or three beam points of the first lidar are formed. The two or three beam points of the first lidar may match the first area.
  • the processor 170 may acquire a distance value corresponding to the RGB level for each of the divided regions from the table.
  • the processor 170 may divide the depth image into a plurality of first regions, each having a first area, and a plurality of second regions, each having a second area, which is larger than the first area.
  • the processor 170 may acquire RGB-level data for each of the divided regions.
  • the processor 170 may acquire a distance value corresponding to the RGB level for each of the divided regions from the table.
  • the processor 170 may set a region of interest in the depth image.
  • the region of interest may be defined as a region in which the probability that an object is located is high.
  • the processor 170 may divide the region of interest into the plurality of first regions.
  • the processor 170 may set a region, other than the region of interest, as the plurality of second regions in the depth image.
  • the region other than the region of interest may be the upper region of the image, which corresponds to the sky, or the lower region of the image, which corresponds to the ground.
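The variable-size region division described above can be sketched as follows. The tile sizes and the row band chosen as the region of interest are illustrative assumptions, not values from the disclosure:

```python
def divide_depth_image(height, width, roi_rows, small=8, large=16):
    """Return (top, left, size) tiles: small tiles inside the region of
    interest (rows where an object is likely), large tiles in the sky and
    ground bands above and below it."""
    roi_top, roi_bottom = roi_rows
    bands = [(0, roi_top, large),
             (roi_top, roi_bottom, small),
             (roi_bottom, height, large)]
    tiles = []
    for band_top, band_bottom, size in bands:
        for top in range(band_top, band_bottom, size):
            for left in range(0, width, size):
                tiles.append((top, left, size))
    return tiles

# 64x64 depth image with a region of interest spanning rows 32-48
tiles = divide_depth_image(64, 64, roi_rows=(32, 48))
```

Using coarser tiles outside the region of interest reduces the number of table lookups, which matches the stated goal of reducing the amount of calculation.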
  • the processor 170 may fuse the depth image and the first sensing data using a simultaneous localization and mapping (SLAM) algorithm.
  • the processor 170 may receive first image data photographed at a first time point from the first camera.
  • the processor 170 may receive second image data photographed at the first time point from the second camera.
  • the processor 170 may receive first sensing data acquired at the first time point from the first lidar.
  • the processor 170 may generate a depth image of the first time point based on the first image data photographed at the first time point and the second image data photographed at the first time point.
  • the processor 170 may fuse the first sensing data acquired at the first time point for each of the divided regions in the depth image captured at the first time point.
  • the processor 170 may receive first image data photographed at a second time point from the first camera.
  • the processor 170 may receive second image data photographed at the second time point from the second camera.
  • the processor 170 may generate a depth image of the second time point based on the first image data photographed at the second time point and the second image data photographed at the second time point.
  • the processor 170 may receive first sensing data acquired at the second time point from the first lidar.
  • the processor 170 may fuse the first sensing data acquired at the second time point for each of the divided regions in the depth image of the first time point.
  • the processor 170 may fuse the first sensing data acquired at the second time point for each of the divided regions in the depth image of the second time point.
  • the processor 170 may acquire a moving distance value of the vehicle 10 from the first time point to the second time point.
  • the processor 170 may acquire a moving distance value from the first time point to the second time point based on wheel data received from the wheel sensor of the vehicle 10 .
  • the processor 170 may acquire a moving distance value from the first time point to the second time point based on location data generated by the location-data-generating device 280 .
  • the processor 170 may acquire a moving distance value from the first time point to the second time point based on a sensing value of the object detection sensor (e.g. a camera, a radar, a lidar, an ultrasonic sensor, an infrared sensor, etc.).
  • the processor 170 may apply the moving distance value to the depth image captured at the first time point, and may fuse the first sensing data acquired at the second time point for the depth image.
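The motion-compensated fusion described above can be sketched as follows. The pixel-keyed distance representation and the simple forward-motion model (subtracting the moving distance from each stored distance) are assumptions made for illustration:

```python
def compensate_depth(prev_distances, moving_distance_m):
    """Shift the distance values of the first-time-point depth image by the
    distance the vehicle has moved toward the scene, so that lidar data
    acquired at the second time point can be fused into it."""
    return {pixel: d - moving_distance_m for pixel, d in prev_distances.items()}

def fuse_lidar(distances, lidar_points):
    """Overwrite compensated estimates with fresh lidar measurements where
    a beam point falls on the pixel."""
    fused = dict(distances)
    fused.update(lidar_points)
    return fused

prev = {(10, 20): 42.0, (11, 20): 40.5}        # pixel -> distance (m)
compensated = compensate_depth(prev, moving_distance_m=1.5)
fused = fuse_lidar(compensated, {(10, 20): 40.2})
```

Pixels hit by a new beam point take the measured value; the remaining pixels keep the motion-compensated estimate until a later measurement arrives.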
  • the processor 170 may receive location data of another vehicle through the communication device 220 .
  • the processor 170 may detect an object corresponding to the other vehicle from the depth image.
  • the processor 170 may correct the value of the distance between the vehicle 10 and the other vehicle based on the location data.
  • the processor 170 may receive second sensing data from the second lidar.
  • the second lidar may be mounted in the vehicle 10 at a position different from that of the first lidar, may emit a laser signal toward the outside of the vehicle 10 , and may receive a laser signal reflected by an object.
  • the second lidar may be oriented in the same direction as the first lidar.
  • both the first lidar and the second lidar may be oriented in the forward direction of the vehicle 10 .
  • both the first lidar and the second lidar may be oriented in the backward direction of the vehicle 10 .
  • both the first lidar and the second lidar may be oriented in the leftward direction of the vehicle 10 .
  • both the first lidar and the second lidar may be oriented in the rightward direction of the vehicle 10 .
  • the processor 170 may further fuse the second sensing data for each of the divided regions in the depth image.
  • the electronic device 100 may include at least one printed circuit board (PCB).
  • the memory 140 , the interface unit 180 , the power supply unit 190 , and the processor 170 may be electrically connected to the printed circuit board.
  • FIG. 4 is a view for explaining the electronic device for a vehicle according to the embodiment of the present disclosure.
  • the vehicular electronic device 100 may include a first sensing device 301 , a second sensing device 302 , a memory 140 , a processor 170 , and an interface unit 180 . Although not shown in FIG. 4 , the vehicular electronic device 100 may further include a power supply unit.
  • the first sensing device 301 may be mounted in the vehicle 10 .
  • the first sensing device 301 may be mounted in the vehicle 10 so as to enable adjustment of the posture thereof.
  • the first sensing device 301 may enable adjustment of the posture thereof in roll, pitch, and yaw directions.
  • the first sensing device 301 may include a first camera 311 for generating first image data and a first lidar 321 for generating first sensing data.
  • the first sensing device 301 may be implemented in a modular form in which the first camera 311 and the first lidar 321 are coupled while being disposed side by side. The description of the first camera and the first lidar made with reference to FIG. 3 may be applied to the first camera 311 and the first lidar 321 .
  • Each of the first camera 311 and the first lidar 321 may enable adjustment of the posture thereof in roll, pitch, and yaw directions.
  • the first sensing device 301 may be mounted in the vehicle 10 at the same height as the second sensing device 302 with respect to the ground.
  • the first camera 311 may be mounted in the vehicle 10 such that a line connecting a first principal point of a first image acquired by the first camera 311 and a second principal point of a second image acquired by the second camera 312 is parallel to a horizontal line.
  • the second sensing device 302 may be mounted in the vehicle 10 so as to be spaced apart from the first sensing device 301 .
  • the second sensing device 302 may be mounted in the vehicle 10 so as to enable adjustment of the posture thereof.
  • the second sensing device 302 may enable adjustment of the posture thereof in roll, pitch, and yaw directions.
  • the second sensing device 302 may include a second camera 312 for generating second image data and a second lidar 322 for generating second sensing data.
  • the second sensing device 302 may be implemented in a modular form in which the second camera 312 and the second lidar 322 are coupled while being disposed side by side. The description of the second camera and the second lidar made with reference to FIG. 3 may be applied to the second camera 312 and the second lidar 322 .
  • Each of the second camera 312 and the second lidar 322 may enable adjustment of the posture thereof in roll, pitch, and yaw directions.
  • the second camera 312 may be mounted in the vehicle 10 such that a line connecting a second principal point of a second image acquired by the second camera 312 and a first principal point of a first image acquired by the first camera 311 is parallel to a horizontal line.
  • the second sensing device 302 may be mounted in the vehicle 10 at the same height as the first sensing device 301 with respect to the ground.
  • the description made with reference to FIG. 3 may be applied to the memory 140 .
  • the description made with reference to FIG. 3 may be applied to the interface unit 180 .
  • the description made with reference to FIG. 3 may be applied to the power supply unit.
  • the description made with reference to FIG. 3 may be applied to the processor 170 .
  • the processor 170 may generate a depth image based on first image data and second image data.
  • the processor 170 may fuse the first sensing data and the second sensing data for each of the divided regions in the depth image.
  • the processor 170 may receive first image data from the first camera 311 (S 410 ).
  • the processor 170 may receive second image data from the second camera 312 (S 415 ).
  • the processor 170 may receive first sensing data from the first lidar 321 (S 420 ).
  • the processor 170 may receive second sensing data from the second lidar 322 (S 425 ).
  • the processor 170 may generate a depth image based on the first image data and the second image data (S 430 ).
  • the processor 170 may fuse the first sensing data for each of the divided regions in the depth image (S 435 ).
  • the fusing step S 435 may include acquiring, by at least one processor, red-green-blue (RGB)-level data for each region of the depth image, and acquiring, by the at least one processor, a distance value for each region of the depth image based on the table in which distance values for RGB levels are arranged.
  • the fusing step S 435 may include dividing, by the at least one processor, the depth image into a plurality of regions, each having a first area, to acquire RGB-level data for each of the divided regions, and acquiring, by the at least one processor, a distance value corresponding to the RGB level for each of the divided regions from the table.
  • the fusing step S 435 may include dividing, by the at least one processor, the depth image such that each of the plurality of regions has a size in which two or three beam points of the first lidar are formed.
  • the fusing step S 435 may include dividing, by the at least one processor, the depth image into a plurality of first regions, each having a first area, and a plurality of second regions, each having a second area, which is larger than the first area, to acquire RGB-level data for each of the divided regions, and acquiring, by the at least one processor, a distance value corresponding to the RGB level for each of the divided regions from the table.
  • the fusing step S 435 may include setting, by the at least one processor, a region of interest in the depth image to divide the region of interest into a plurality of first regions, and dividing, by the at least one processor, a region, other than the region of interest, into a plurality of second regions in the depth image.
  • the fusing step S 435 may include fusing, by the at least one processor, the depth image and the first sensing data using a simultaneous localization and mapping (SLAM) algorithm.
  • the fusing step S 435 may include receiving, by the at least one processor, first image data photographed at a first time point from the first camera, receiving, by the at least one processor, second image data photographed at the first time point from the second camera, receiving, by the at least one processor, first sensing data acquired at the first time point from the first lidar, generating, by the at least one processor, a depth image of the first time point based on the first image data photographed at the first time point and the second image data photographed at the first time point, and fusing, by the at least one processor, the first sensing data acquired at the first time point for each of the divided regions in the depth image captured at the first time point.
  • the fusing step S 435 may further include receiving, by the at least one processor, first image data photographed at a second time point from the first camera, receiving, by the at least one processor, second image data photographed at the second time point from the second camera, and generating, by the at least one processor, a depth image of the second time point based on the first image data photographed at the second time point and the second image data photographed at the second time point.
  • the fusing step (S 435 ) may further include receiving, by the at least one processor, first sensing data acquired at the second time point from the first lidar, fusing, by the at least one processor, the first sensing data acquired at the second time point for each of the divided regions in the depth image of the first time point, and fusing, by the at least one processor, the first sensing data acquired at the second time point for each of the divided regions in the depth image of the second time point.
  • the fusing step S 435 may further include acquiring, by the at least one processor, a moving distance value of the vehicle from the first time point to the second time point, and applying, by the at least one processor, the moving distance value to the depth image captured at the first time point to fuse the first sensing data acquired at the second time point into the depth image.
  • the fusing step S 435 may include receiving, by the at least one processor, location data of another vehicle through the communication device, and detecting, by the at least one processor, an object corresponding to the other vehicle from the depth image to correct the value of the distance between the vehicle and the other vehicle based on the location data.
  • the fusing step S 435 may further include receiving, by the at least one processor, second sensing data from the second lidar, and fusing, by the at least one processor, the second sensing data for each of the divided regions in the depth image.
  • FIG. 5 is a view for explaining the first sensing device and the second sensing device according to the embodiment of the present disclosure.
  • a conventional object detection device determines the type of an object through deep learning on a depth image generated from RGB images.
  • an object detection device using only images has lower accuracy with respect to the distance to an object than a device using another sensor, such as a lidar.
  • the sensing devices 301 and 302 according to the present disclosure may generate an RGB-D map by mapping distance information from the lidar to RGB-pixel information, and may perform deep learning using the RGB-D map information, thereby accurately detecting information about the distance to the object.
  • the processor 170 may further perform generation of an RGB-D map and deep learning based on the RGB-D map.
  • the above-described method of detecting an object by fusing raw data of distance information into an image may be described as low-level sensor fusion.
  • a general vehicle may include a low-priced lidar.
  • a lidar that is manufactured at low cost may have a relatively small number of layers.
  • each of the first lidar 321 and the second lidar 322 may have four layers. Since the layers of such a lidar are not disposed densely, distance information with respect to pixels of an image may not be accurate.
  • the electronic device for a vehicle may implement precise RGB-D by fusing sensing data of the lidar into a depth image.
  • the first sensing device 301 may include a first camera 311 and a first lidar 321 .
  • the second sensing device 302 may include a second camera 312 and a second lidar 322 .
  • the first sensing device 301 and the second sensing device 302 may be mounted in the vehicle 10 .
  • the first and second sensing devices 301 and 302 may be mounted in at least one of a bumper, a portion of the inner side of a windshield (the side oriented toward a cabin), a headlamp, a side mirror, a radiator grill, or a roof.
  • when the first sensing device 301 and the second sensing device 302 are mounted on the inner side of the windshield, this is advantageous in that a separate device for removing foreign substances is not necessary, owing to the wiper.
  • when the first sensing device 301 and the second sensing device 302 are mounted in the roof, this is advantageous in terms of heat dissipation.
  • when the first sensing device 301 and the second sensing device 302 are mounted in the roof, this is advantageous in terms of an increase in sensing distance.
  • when the first sensing device 301 and the second sensing device 302 are mounted in the headlamp or the radiator grill, they are not exposed outside and thus may provide an aesthetic improvement.
  • when the first sensing device 301 and the second sensing device 302 are mounted in the side mirror, this is advantageous in terms of linkage with other sensors mounted in the side mirror.
  • the first sensing device 301 may be mounted at the same height as the second sensing device 302 . Since the first sensing device 301 and the second sensing device 302 are mounted at the same height, a depth image may be generated. Since the first sensing device 301 and the second sensing device 302 are mounted at the same height, sensing data acquired from the first lidar 321 and the second lidar 322 may be matched to the depth image.
  • FIG. 6 is a view for explaining the lidar according to the embodiment of the present disclosure.
  • each of the first lidar 321 and the second lidar 322 may have N channel layers.
  • first sensing data of the first lidar 321 and second sensing data of the second lidar 322 may be used.
  • the lidars may operate like a lidar having 2N channel layers. That is, the first lidar 321 and the second lidar 322 may operate as a lidar capable of realizing 2N-point vertical detection.
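The effect of combining the two N-layer lidars can be sketched as follows. The beam elevation angles are hypothetical; the point is that two sparse layer sets mounted at offset elevations interleave into one denser vertical pattern:

```python
def merge_layers(first_angles, second_angles):
    """Interleave the vertical beam angles (degrees) of two N-layer lidars
    mounted at offset elevations so they act like one 2N-layer lidar."""
    return sorted(first_angles + second_angles)

# hypothetical 4-layer lidars with a half-step vertical offset
first = [-3.0, -1.0, 1.0, 3.0]
second = [-2.0, 0.0, 2.0, 4.0]
merged = merge_layers(first, second)   # 8 vertical detection points
```

With N = 4, the merged pattern realizes 2N = 8 vertical detection points, doubling the vertical density without a denser (more expensive) lidar.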
  • FIG. 7 is a view for explaining the camera according to the embodiment of the present disclosure.
  • the first camera 311 may be disposed parallel to the second camera 312 .
  • a first image sensor of the first camera 311 may be disposed parallel to a second image sensor of the second camera 312 .
  • the first camera 311 and the second camera 312 may be mounted in the vehicle 10 such that a line connecting a first principal point of a first image acquired by the first camera 311 and a second principal point of a second image acquired by the second camera 312 is parallel to a horizontal line in each of the first image and the second image.
  • the first camera 311 and the second camera 312 may be mounted in the vehicle 10 such that a line connecting a first principal point of a first image acquired by the first camera 311 and a second principal point of a second image acquired by the second camera 312 is parallel to the ground.
  • when the distance between the first camera 311 and the second camera 312 increases, the sensing distance increases, which is advantageous in terms of long-distance sensing.
  • when the distance between the first camera 311 and the second camera 312 decreases, the sensing distance decreases, which is advantageous in terms of near-field sensing.
  • the operation of the vehicular electronic device will be described with reference to FIGS. 8 to 14 .
  • the operation to be described below may be performed by the processor of the vehicular electronic device.
  • FIG. 8 is a view for explaining a depth image according to the embodiment of the present disclosure.
  • reference numeral 810 denotes an image captured by the first camera 311 or the second camera 312 .
  • Reference numeral 820 denotes a depth image generated using a first image of the first camera 311 and a second image of the second camera 312 . It is possible to verify a relative location difference for each pixel in the depth image, but an error may occur in the distance between the vehicle 10 and an object 811 . This error may be compensated for using at least one of first sensing data of the first lidar 321 or second sensing data of the second lidar 322 . By fusing the sensing data of the lidar into the depth image, it is possible to more accurately detect an object using detection and classification of the object, distance information of the lidar, and reflection intensity information.
  • the result generated by fusing the sensing data of the lidar into the depth image data may be referred to as RGB-D data.
  • RGB is a value corresponding to red, green, and blue of an image.
  • D may include distance information and reflection intensity information based on the sensing data of the lidar.
  • FIG. 9 is a view for explaining a depth image, which is divided into a plurality of regions, according to the embodiment of the present disclosure.
  • the processor 170 may correct distance information using a lookup table in order to create an RGB-D map.
  • in the depth image, color represents distance. If the RGB colors are the same within a similar region range in the depth image, it is possible to perform correction using the sensing data (distance information) of the lidar regardless of time.
  • the processor 170 may divide the depth image into a plurality of regions (M*N) having a first area, as indicated by reference numeral 910 .
  • the processor 170 may divide the depth image into regions such that two or three points of the lidar are formed in one region 911 of the divided regions.
  • the processor 170 may divide the depth image into regions 921 , 922 and 923 having different sizes, as indicated by reference numeral 920 .
  • the processor 170 may divide the depth image into a plurality of first regions 921 , each having a first area, a plurality of second regions 922 , each having a second area, which is larger than the first area, and a plurality of third regions 923 , each having a third area, which is larger than the second area.
  • the amount of calculation may be reduced by differentiating the sizes of the divided areas depending on the region of interest.
  • the RGB value of a pixel corresponding to a point cloud of the lidar (the RGB value in the depth image represents a distance value) may be obtained by selecting one region from the divided regions in the depth image, thereby creating a lookup table that stores the sensing data (a distance value) of the lidar corresponding to each RGB value.
  • An RGB-D map may be created by adding the distance information of the lidar obtained with respect to each pixel to the RGB image data of the camera. RGB-D may be defined as information in which distance information is included in image information. In addition to distance information, intensity or other additional information of the lidar may be assigned to D.
  • the lookup table may be created before the vehicle 10 is released from a factory, and the vehicle 10 may update the lookup table according to the settings of the vehicle while traveling.
  • FIG. 10 is a view for explaining a table in which distance values for respective colors are arranged according to the embodiment of the present disclosure.
  • when the lidar correction lookup table for the depth image is completed, accurate distance information may be acquired from the camera alone, without the help of the lidar.
  • the RGB information of the colors of the depth image can be expressed as (0 to 255, 0 to 255, 0 to 255).
  • since each of R, G and B has 256 levels, the number of combinations is 256×256×256, and thus a very large lookup table would be required, which may cause a problem in the storage and processing of data.
  • to reduce the size, the lookup table may be created in a (0 to 15, 0 to 15, 0 to 15) form (4 bits per channel), as indicated by reference numeral 1010 .
  • as indicated by reference numeral 1020 , if the depth image is formed as a single-color image using only one of red, green and blue, only 256 distance values are obtained, and they can be expressed simply using a 256-entry lookup table.
  • if the size of the lookup table is estimated according to the resolution of the lidar and the RGB bit depth of the depth image is set accordingly, it is possible to generate a lookup table having an appropriate size.
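The size trade-off described above can be sketched as follows: quantizing each 8-bit channel to 4 bits shrinks the table from 256³ entries to 16³ = 4,096. The function names are illustrative, not from the disclosure:

```python
def quantize_rgb(rgb, bits=4):
    """Quantize an 8-bit-per-channel RGB level to `bits` bits per channel
    by dropping the low-order bits of each channel."""
    shift = 8 - bits
    return tuple(c >> shift for c in rgb)

def table_entries(bits):
    """Number of lookup-table entries at the given per-channel bit depth."""
    return (1 << bits) ** 3

q = quantize_rgb((255, 128, 7))    # -> (15, 8, 0)
```

The bit depth can be chosen so that the number of table entries roughly matches the distance resolution the lidar can actually deliver.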
  • FIG. 11 is a view for explaining the operation of performing sensor fusion using a SLAM algorithm according to the embodiment of the present disclosure.
  • the processor 170 may measure the moving distance of the vehicle 10 using a SLAM and may estimate the location to which the pixel of the image has moved using the measured distance. It is possible to create a map of the surroundings of the vehicle by applying a SLAM algorithm of the lidar and to accurately estimate the distance that the vehicle 10 has recently moved using the feature of the map. If using the wheel sensor of the vehicle 10 and the SLAM algorithm of the lidar, it is possible to estimate the location of the vehicle 10 using only the wheel sensor, even when there is no feature of the lidar. When there is a feature of the lidar, it is possible to calculate the moving distance of the vehicle 10 using the lidar and to correct the moving distance of the vehicle 10 detected by the wheel sensor using the calculated moving distance.
  • lidar distance information may be present in some pixels, but may not be present in other pixels.
  • as the vehicle 10 moves, the pixel in which the lidar distance information is contained moves to another position in the image. If the distance that the vehicle 10 has moved is accurately verified, it is possible to verify the position to which the pixel has moved and to input distance information to the corresponding pixel.
  • the distance information that is corrected in the (t+1) th frame, which is next to the t th frame, according to the movement of the vehicle 10 is a value obtained by subtracting the moving distance information of the vehicle 10 from the distance information of the previous pixel. If the distance information is updated over time with respect to all pixels containing the distance information of the lidar through the above process, the distance information of the lidar in many pixels is updated, thereby generating distance information with respect to the RGB pixel and consequently creating an RGB-D map.
  • if the ratio of RGB-only pixels to RGB-D pixels in the image is 50:50, it is possible to produce RGB-D information for almost every RGB pixel by increasing the size of the RGB pixel regions by two to three times.
  • FIG. 12 is a view for explaining the operation of performing sensor fusion using V2X according to the embodiment of the present disclosure.
  • the processor 170 may perform correction using data received through the communication device 220 .
  • the communication device 220 may use a V2X or 5G communication scheme. Since the location of the preceding vehicle can be verified through V2X or 5G communication, the distance value of the depth image may be corrected by matching the location of the preceding vehicle, received through communication, to the depth image.
  • the preceding vehicle may be recognized through the images of the cameras 311 and 312 , and the outline thereof may be extracted from the depth image.
  • RGB-D information may be generated using distance information received through the communication device 220 .
  • the method described with reference to FIG. 12 may be applied in terms of safety when it is impossible to update the lookup table due to failure of the lidar or adhesion of foreign substances.
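The V2X-based correction above can be sketched as a blend between the depth-image estimate and the distance derived from the received location. The `trust` weight is an assumed tuning knob, not something specified in the disclosure, and all positions are hypothetical:

```python
import math

def correct_depth_with_v2x(estimated_dist_m, ego_pos, remote_pos, trust=0.8):
    """Blend a depth-image distance estimate with a V2X-derived distance.

    ego_pos / remote_pos: (x, y) positions in a shared metric frame, e.g.
    taken from messages received through the communication device 220.
    trust: how strongly the V2X distance overrides the depth-image estimate.
    """
    # Distance to the preceding vehicle implied by the received location.
    v2x_dist = math.hypot(remote_pos[0] - ego_pos[0], remote_pos[1] - ego_pos[1])
    return (1.0 - trust) * estimated_dist_m + trust * v2x_dist

corrected = correct_depth_with_v2x(48.0, ego_pos=(0.0, 0.0), remote_pos=(50.0, 0.0))
```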
  • FIGS. 13 and 14 are views for explaining the operation of performing high-level fusion and low-level fusion according to the embodiment of the present disclosure.
  • the vehicular electronic device 100 may output two results having redundancy in terms of failure safety.
  • the vehicular electronic device 100 may perform high-level fusion and low-level fusion.
  • the vehicular electronic device 100 may perform fusion using location information of an object detected by the respective sensors (the radar, the lidar, and the camera). Such fusion may be understood as high-level fusion.
  • the vehicular electronic device 100 may perform fusion in the stage of low data (RGB image and distance information) of the sensors (the radar, the lidar, and the camera). Such fusion may be understood as low-level fusion. Thereafter, the vehicular electronic device 100 may detect an object through deep learning.
  • the processor 170 may output the high-level fusion result value and the low-level fusion result value through algorithm sampling without performing synchronization setting thereof (hereinafter referred to as “output 1”).
  • the result value of the high-level sensor fusion may be output every 40 msec.
  • the result value of the low-level sensor fusion may be output every 25 msec.
  • the processor 170 may receive and determine the result value of the high-level sensor fusion and the result value of the low-level sensor fusion in order to use a result value that is more suitable for control or to predict a system error when the two result values are different.
  • the processor 170 may output the high-level fusion result value and the low-level fusion result value through synchronization setting thereof (hereinafter referred to as “output 2”).
  • output 2 the result value of the high-level sensor fusion and the result value of the low-level sensor fusion may be output every 30 msec.
  • the method of output 1 imposes a large computational load, but has the advantage of enabling a relatively safe system to be built.
  • the method of output 2 imposes a small load and is capable of detecting objects using result values of various algorithms.
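The timing difference between output 1 (unsynchronized) and output 2 (synchronized) can be sketched as follows. The periods match the examples given above, while the helper function and the time horizon are illustrative:

```python
def schedule(period_ms, horizon_ms):
    """Timestamps (ms) at which a fusion result becomes available."""
    return list(range(period_ms, horizon_ms + 1, period_ms))

# Output 1: each fusion path runs at its own rate, without synchronization.
high_level = schedule(40, 200)    # high-level fusion result every 40 msec
low_level = schedule(25, 200)     # low-level fusion result every 25 msec

# Output 2: both paths are synchronized to a common period.
synchronized = schedule(30, 200)  # both results every 30 msec

# With output 1, moments at which both asynchronous results arrive together
# are rare, so the consumer must compare results across differing timestamps.
coincident = sorted(set(high_level) & set(low_level))
```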
  • the aforementioned present disclosure may be implemented as computer-readable code stored on a computer-readable recording medium.
  • the computer-readable recording medium may be any type of recording device in which data is stored in a computer-readable manner. Examples of the computer-readable recording medium include a Hard Disk Drive (HDD), a Solid-State Disk (SSD), a Silicon Disk Drive (SDD), Read-Only Memory (ROM), Random-Access Memory (RAM), CD-ROM, magnetic tapes, floppy disks, optical data storage devices, carrier waves (e.g. transmission via the Internet), etc.
  • the computer may include a processor and a controller.

Abstract

Disclosed is an electronic device for a vehicle, including a processor receiving first image data from a first camera, receiving second image data from a second camera, receiving first sensing data from a first lidar, generating a depth image based on the first image data and the second image data, and fusing the first sensing data for each of divided regions in the depth image.

Description

    TECHNICAL FIELD
  • The present disclosure relates to an electronic device for a vehicle and a method of operating an electronic device for a vehicle.
  • BACKGROUND ART
  • A vehicle is an apparatus that carries a passenger in the direction intended by the passenger. A car is the main example of such a vehicle. An autonomous vehicle is a vehicle that is capable of traveling autonomously without driving operation by a driver. Such an autonomous vehicle is provided with a plurality of sensors for detecting objects outside the vehicle. Examples of such sensors include a camera, a lidar, a radar, an ultrasonic sensor, and the like.
  • Most sensors for detecting objects outside a vehicle are expensive, and these expensive sensors are not suitable for use in general autonomous vehicles. General autonomous vehicles require sensors that are capable of being applied thereto at low cost while providing performance similar to that of expensive sensors.
  • DISCLOSURE Technical Problem
  • Therefore, the present disclosure has been made in view of the above problems, and it is an object of the present disclosure to provide an electronic device for a vehicle that performs low-level sensor fusion.
  • It is another object of the present disclosure to provide an electronic device for a vehicle that includes a camera and a lidar and performs low-level sensor fusion to combine data generated by the camera with data generated by the lidar.
  • However, the objects to be accomplished by the disclosure are not limited to the above-mentioned objects, and other objects not mentioned herein will be clearly understood by those skilled in the art from the following description.
  • Technical Solution
  • In accordance with an aspect of the present disclosure, the above and other objects can be accomplished by the provision of an electronic device for a vehicle, including a processor receiving first image data from a first camera, receiving second image data from a second camera, receiving first sensing data from a first lidar, generating a depth image based on the first image data and the second image data, and performing fusion of the first sensing data for each of divided regions in the depth image.
  • In accordance with another aspect of the present disclosure, there is provided an electronic device for a vehicle, including a first sensing device mounted in a vehicle, the first sensing device including a first camera generating first image data and a first lidar generating first sensing data, a second sensing device mounted in the vehicle so as to be spaced apart from the first sensing device, the second sensing device including a second camera generating second image data and a second lidar generating second sensing data, and at least one processor generating a depth image based on the first image data and the second image data and performing fusion of the first sensing data and the second sensing data for each of divided regions in the depth image.
  • Details of other embodiments are included in the detailed description and the accompanying drawings.
  • Advantageous Effects
  • According to the present disclosure, there are one or more effects as follows.
  • It is possible to provide a sensor that detects a distance with enhanced accuracy using a low-priced camera and lidar.
  • However, the effects achievable through the disclosure are not limited to the above-mentioned effects, and other effects not mentioned herein will be clearly understood by those skilled in the art from the appended claims.
  • DESCRIPTION OF DRAWINGS
  • FIG. 1 is a view illustrating the external appearance of a vehicle according to an embodiment of the present disclosure.
  • FIG. 2 is a control block diagram of the vehicle according to the embodiment of the present disclosure.
  • FIG. 3 is a control block diagram of an electronic device for a vehicle according to an embodiment of the present disclosure.
  • FIG. 4 is a view for explaining the electronic device for a vehicle according to the embodiment of the present disclosure.
  • FIG. 5 is a view for explaining a first sensing device and a second sensing device according to the embodiment of the present disclosure.
  • FIG. 6 is a view for explaining a lidar according to the embodiment of the present disclosure.
  • FIG. 7 is a view for explaining a camera according to the embodiment of the present disclosure.
  • FIG. 8 is a view for explaining a depth image according to the embodiment of the present disclosure.
  • FIG. 9 is a view for explaining a depth image, which is divided into a plurality of regions, according to the embodiment of the present disclosure.
  • FIG. 10 is a view for explaining a table in which distance values for respective colors are arranged according to the embodiment of the present disclosure.
  • FIG. 11 is a view for explaining the operation of performing sensor fusion using a SLAM algorithm according to the embodiment of the present disclosure.
  • FIG. 12 is a view for explaining the operation of performing sensor fusion using V2X according to the embodiment of the present disclosure.
  • FIGS. 13 and 14 are views for explaining the operation of performing high-level fusion and low-level fusion according to the embodiment of the present disclosure.
  • BEST MODE
  • Reference will now be made in detail to the preferred embodiments of the present disclosure, examples of which are illustrated in the accompanying drawings. Wherever possible, the same reference numbers will be used throughout the drawings to refer to the same or like parts. As used herein, the suffixes “module” and “unit” are added or interchangeably used to facilitate preparation of this specification and are not intended to suggest unique meanings or functions. In describing embodiments disclosed in this specification, a detailed description of relevant well-known technologies may not be given in order not to obscure the subject matter of the present disclosure. In addition, the accompanying drawings are merely intended to facilitate understanding of the embodiments disclosed in this specification and not to restrict the technical spirit of the present disclosure. In addition, the accompanying drawings should be understood as covering all equivalents or substitutions within the scope of the present disclosure.
  • Terms including ordinal numbers such as first, second, etc. may be used to explain various elements. However, it will be appreciated that the elements are not limited to such terms. These terms are merely used to distinguish one element from another.
  • It will be understood that when an element is referred to as being “connected” or “coupled” to another element, it can be directly connected or coupled to another element or intervening elements may be present. In contrast, when an element is referred to as being “directly connected” or “directly coupled” to another element, there are no intervening elements present.
  • The expression of singularity includes a plural meaning unless the singularity expression is explicitly different in context.
  • It will be further understood that terms such as “include” or “have”, when used in this specification, specify the presence of stated features, integers, steps, operations, elements, components, or combinations thereof, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, or combinations thereof.
  • FIG. 1 is a view illustrating a vehicle according to an embodiment of the present disclosure.
  • Referring to FIG. 1, a vehicle 10 according to an embodiment of the present disclosure is defined as a transportation means that travels on a road or on rails. The vehicle 10 conceptually encompasses cars, trains, and motorcycles. The vehicle 10 may be any of an internal combustion vehicle equipped with an engine as a power source, a hybrid vehicle equipped with an engine and an electric motor as power sources, an electric vehicle equipped with an electric motor as a power source, and the like. The vehicle 10 may be a shared vehicle. The vehicle 10 may be an autonomous vehicle.
  • The vehicle 10 may include an electronic device 100. The electronic device 100 may be a device that generates information about objects outside the vehicle. The information about objects may include information about the presence or absence of an object, information about the location of an object, information about the distance between the vehicle 10 and an object, and information about the relative speed of the vehicle 10 with respect to an object. An object may include at least one of a lane, another vehicle, a pedestrian, a 2-wheeled vehicle, a traffic signal, a light, a road, a structure, a speed bump, a geographic feature, or an animal.
  • FIG. 2 is a control block diagram of the vehicle according to the embodiment of the present disclosure.
  • Referring to FIG. 2, the vehicle 10 may include a vehicular electronic device 100, a user interface device 200, a communication device 220, a driving operation device 230, a main ECU 240, a vehicle-driving device 250, a traveling system 260, a sensing unit 270, and a location-data-generating device 280.
  • The vehicular electronic device 100 may detect objects outside the vehicle 10. The vehicular electronic device 100 may include at least one sensor capable of detecting objects outside the vehicle 10. The vehicular electronic device 100 may include at least one of a camera, a radar, a lidar, an ultrasonic sensor, or an infrared sensor. The vehicular electronic device 100 may provide data on an object, which is generated based on a sensing signal generated by a sensor, to at least one electronic device included in the vehicle. The vehicular electronic device 100 may be referred to as an object detection device.
  • The user interface device 200 is a device used to enable the vehicle 10 to communicate with a user. The user interface device 200 may receive user input and may provide information generated by the vehicle 10 to the user. The vehicle 10 may implement a User Interface (UI) or a User Experience (UX) through the user interface device 200.
  • The communication device 220 may exchange signals with devices located outside the vehicle 10. The communication device 220 may exchange signals with at least one of infrastructure (e.g. a server or a broadcasting station) or other vehicles. In order to realize communication, the communication device 220 may include at least one of a transmission antenna, a reception antenna, a Radio-Frequency (RF) circuit capable of implementing various communication protocols, or an RF device.
  • The driving operation device 230 is a device that receives user input for driving the vehicle. In the manual mode, the vehicle 10 may travel based on a signal provided by the driving operation device 230. The driving operation device 230 may include a steering input device (e.g. a steering wheel), an acceleration input device (e.g. an accelerator pedal), and a brake input device (e.g. a brake pedal).
  • The main ECU 240 may control the overall operation of at least one electronic device provided in the vehicle 10.
  • The driving control device 250 is a device that electrically controls various vehicle-driving devices provided in the vehicle 10. The driving control device 250 may include a powertrain driving controller, a chassis driving controller, a door/window driving controller, a safety device driving controller, a lamp driving controller, and an air-conditioner driving controller. The powertrain driving controller may include a power source driving controller and a transmission driving controller. The chassis driving controller may include a steering driving controller, a brake driving controller, and a suspension driving controller.
  • The safety device driving controller may include a seat belt driving controller for controlling the seat belt.
  • The vehicle driving control device 250 may be referred to as a control electronic control unit (a control ECU).
  • The traveling system 260 may generate a signal for controlling the movement of the vehicle 10 or outputting information to the user based on the data on an object received from the vehicular electronic device 100. The traveling system 260 may provide the generated signal to at least one of the user interface device 200, the main ECU 240, or the vehicle-driving device 250.
  • The traveling system 260 may conceptually include an Advanced Driver Assistance System (ADAS). The ADAS 260 may implement at least one of Adaptive Cruise Control (ACC), Autonomous Emergency Braking (AEB), Forward Collision Warning (FCW), Lane Keeping Assist (LKA), Lane Change Assist (LCA), Target Following Assist (TFA), Blind Spot Detection (BSD), High Beam Assist (HBA), Auto Parking System (APS), PD collision warning system, Traffic Sign Recognition (TSR), Traffic Sign Assist (TSA), Night Vision (NV), Driver Status Monitoring (DSM), or Traffic Jam Assist (TJA).
  • The traveling system 260 may include an autonomous-driving electronic control unit (an autonomous-driving ECU). The autonomous-driving ECU may set an autonomous-driving route based on data received from at least one of the other electronic devices provided in the vehicle 10. The autonomous-driving ECU may set an autonomous-driving route based on data received from at least one of the user interface device 200, the vehicular electronic device 100, the communication device 220, the sensing unit 270, or the location-data-generating device 280. The autonomous-driving ECU may generate a control signal so that the vehicle 10 travels along the autonomous-driving route. The control signal generated by the autonomous-driving ECU may be provided to at least one of the main ECU 240 or the vehicle-driving device 250.
  • The sensing unit 270 may sense the state of the vehicle. The sensing unit 270 may include at least one of an inertial measurement unit (IMU) sensor, a collision sensor, a wheel sensor, a speed sensor, an inclination sensor, a weight detection sensor, a heading sensor, a position module, a vehicle forward/reverse movement sensor, a battery sensor, a fuel sensor, a tire sensor, a steering sensor for detecting rotation of the steering wheel, a vehicle internal temperature sensor, a vehicle internal humidity sensor, an ultrasonic sensor, an illuminance sensor, an accelerator pedal position sensor, or a brake pedal position sensor. The inertial measurement unit (IMU) sensor may include at least one of an acceleration sensor, a gyro sensor, or a magnetic sensor.
  • The sensing unit 270 may generate data on the state of the vehicle based on the signal generated by at least one sensor. The sensing unit 270 may acquire sensing signals of vehicle attitude information, vehicle motion information, vehicle yaw information, vehicle roll information, vehicle pitch information, vehicle collision information, vehicle heading information, vehicle angle information, vehicle speed information, vehicle acceleration information, vehicle inclination information, vehicle forward/reverse movement information, battery information, fuel information, tire information, vehicle lamp information, vehicle internal temperature information, vehicle internal humidity information, a steering wheel rotation angle, vehicle external illuminance, the pressure applied to the accelerator pedal, the pressure applied to the brake pedal, and so on.
  • The sensing unit 270 may further include an accelerator pedal sensor, a pressure sensor, an engine speed sensor, an air flow sensor (AFS), an air temperature sensor (ATS), a water temperature sensor (WTS), a throttle position sensor (TPS), a top dead center (TDC) sensor, a crank angle sensor (CAS), and so on.
  • The sensing unit 270 may generate vehicle state information based on the sensing data. The vehicle state information may be generated based on data detected by various sensors included in the vehicle.
  • For example, the vehicle state information may include vehicle attitude information, vehicle speed information, vehicle inclination information, vehicle weight information, vehicle heading information, vehicle battery information, vehicle fuel information, vehicle tire air pressure information, vehicle steering information, vehicle internal temperature information, vehicle internal humidity information, pedal position information, vehicle engine temperature information, and so on.
  • The sensing unit may include a tension sensor. The tension sensor may generate a sensing signal based on the tension state of the seat belt.
  • The location-data-generating device 280 may generate data on the location of the vehicle 10. The location-data-generating device 280 may include at least one of a global positioning system (GPS) or a differential global positioning system (DGPS). The location-data-generating device 280 may generate data on the location of the vehicle 10 based on the signal generated by at least one of the GPS or the DGPS. In some embodiments, the location-data-generating device 280 may correct the location data based on at least one of the inertial measurement unit (IMU) of the sensing unit 270 or the camera of the vehicular electronic device 100.
  • The location-data-generating device 280 may be referred to as a location positioning device. The location-data-generating device 280 may be referred to as a global navigation satellite system (GNSS).
  • The vehicle 10 may include an internal communication system 50. The electronic devices included in the vehicle 10 may exchange signals via the internal communication system 50. The signals may include data. The internal communication system 50 may use at least one communication protocol (e.g. CAN, LIN, FlexRay, MOST, and Ethernet).
  • FIG. 3 is a control block diagram of the electronic device according to the embodiment of the present disclosure.
  • Referring to FIG. 3, the electronic device 100 may include a memory 140, a processor 170, an interface unit 180, and a power supply unit 190.
  • The memory 140 is electrically connected to the processor 170. The memory 140 may store default data for a unit, control data for controlling the operation of the unit, and input and output data. The memory 140 may store data processed by the processor 170. The memory 140 may be implemented as at least one hardware device selected from among read only memory (ROM), random access memory (RAM), erasable and programmable ROM (EPROM), a flash drive, or a hard drive. The memory 140 may store various data for the overall operation of the electronic device 100, such as programs for processing or control in the processor 170. The memory 140 may be integrated with the processor 170. In some embodiments, the memory 140 may be configured as a lower-level component of the processor 170.
  • The interface unit 180 may exchange signals with at least one electronic device provided in the vehicle 10 in a wired or wireless manner. The interface unit 180 may exchange signals with at least one of the vehicular electronic device 100, the communication device 220, the driving operation device 230, the main ECU 240, the vehicle-driving device 250, the ADAS 260, the sensing unit 270, or the location-data-generating device 280 in a wired or wireless manner. The interface unit 180 may be configured as at least one of a communication module, a terminal, a pin, a cable, a port, a circuit, an element, or a device.
  • The interface unit 180 may receive location data of the vehicle 10 from the location-data-generating device 280. The interface unit 180 may receive travel speed data from the sensing unit 270. The interface unit 180 may receive data on objects around the vehicle from the vehicular electronic device 100.
  • The power supply unit 190 may supply power to the electronic device 100. The power supply unit 190 may receive power from a power source (e.g. a battery) included in the vehicle 10, and may supply the power to each unit of the electronic device 100. The power supply unit 190 may be operated in response to a control signal from the main ECU 240. The power supply unit 190 may be implemented as a switched-mode power supply (SMPS).
  • The processor 170 may be electrically connected to the memory 140, the interface unit 180, and the power supply unit 190, and may exchange signals with the same. The processor 170 may be implemented using at least one of application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), processors, controllers, micro-controllers, microprocessors, or electrical units for performing other functions.
  • The processor 170 may be driven by the power supplied from the power supply unit 190. The processor 170 may receive data, process data, generate a signal, and provide a signal while receiving the power from the power supply unit 190.
  • The processor 170 may receive information from other electronic devices in the vehicle 10 through the interface unit 180. The processor 170 may provide a control signal to other electronic devices in the vehicle 10 through the interface unit 180.
  • The processor 170 may fuse sensing data, received from various types of sensors, at a low level. The processor 170 may fuse the image data received from the camera and the sensing data received from the lidar at a low level.
  • The processor 170 may generate a depth image based on the image data photographed by stereo cameras (a first camera and a second camera). The processor 170 may receive first image data from the first camera. The first camera may be mounted in the vehicle 10 and may capture an image of the surroundings of the vehicle 10. The processor 170 may receive second image data from the second camera. The second camera may be mounted in the vehicle 10 at a position different from that of the first camera and may capture an image of the surroundings of the vehicle 10. The second camera may be oriented in the same direction as the first camera. For example, both the first camera and the second camera may be oriented in the forward direction of the vehicle 10. For example, both the first camera and the second camera may be oriented in the backward direction of the vehicle 10. For example, both the first camera and the second camera may be oriented in the leftward direction of the vehicle 10. For example, both the first camera and the second camera may be oriented in the rightward direction of the vehicle 10.
  • The processor 170 may receive first sensing data from a first lidar. The first lidar may be mounted in the vehicle 10, may emit a laser signal toward the outside of the vehicle 10, and may receive a laser signal reflected by an object. The first sensing data may be data that is generated based on an emitted laser signal and a received laser signal.
  • The processor 170 may generate a depth image based on the first image data and the second image data. The processor 170 may detect a disparity based on the first image data and the second image data. The processor 170 may generate a depth image based on information about the disparity. The depth image may be configured as a red-green-blue (RGB) image. The depth image may be understood as a depth map.
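The disparity-to-depth relation underlying this step is the classic stereo formula, sketched here with hypothetical focal-length and baseline values (the disclosure does not specify calibration parameters):

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Classic stereo relation: depth Z = f * B / d."""
    if disparity_px <= 0:
        return float('inf')  # no measurable disparity: effectively at infinity
    return focal_px * baseline_m / disparity_px

# Hypothetical calibration: 1000 px focal length, 0.3 m camera baseline.
z = depth_from_disparity(disparity_px=20.0, focal_px=1000.0, baseline_m=0.3)
```

Smaller disparities map to larger depths, which is why distant regions of the depth image carry coarser distance information than nearby ones.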
  • In some embodiments, the processor 170 may generate a depth image based on image data photographed by a mono camera. The processor 170 may receive first image data photographed at a first time point from the first camera. The processor 170 may receive second image data photographed at a second time point, which is different from the first time point, from the first camera. The location of the first camera at the second time point may be different from the location of the first camera at the first time point. The processor 170 may generate a depth image based on the first image data and the second image data received from the first camera.
  • The processor 170 may detect an object based on the depth image. For example, the processor 170 may detect, based on the depth image, at least one of a lane, another vehicle, a pedestrian, a 2-wheeled vehicle, a traffic signal, a light, a road, a structure, a speed bump, a geographic feature, or an animal.
  • The processor 170 may fuse first sensing data received from the first lidar into each region divided in the depth image. The processor 170 may apply the first sensing data to the depth image. By applying the first sensing data to the depth image, a more accurate value of the distance to the object may be acquired than when using only the depth image.
  • The processor 170 may acquire RGB-level data for each region of the depth image. The processor 170 may refer to a table stored in the memory 140. The table may be a table in which distance values for RGB levels are arranged. The processor 170 may acquire a distance value for each region of the depth image based on the table. The region may be composed of one or a plurality of pixels. The table may be acquired through experimentation and may be stored in advance in the memory 140.
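A minimal sketch of such a lookup follows, assuming a hypothetical table of quantized RGB levels and experimentally measured distances of the kind that would be stored in the memory 140; the values are illustrative only:

```python
# Hypothetical table: quantized depth-image RGB level -> distance in metres,
# as would be acquired through experimentation and stored in advance.
LEVEL_TO_DISTANCE_M = {255: 2.0, 192: 5.0, 128: 12.0, 64: 30.0, 0: 80.0}

def distance_for_level(level):
    """Return the distance for the nearest tabulated RGB level."""
    nearest = min(LEVEL_TO_DISTANCE_M, key=lambda k: abs(k - level))
    return LEVEL_TO_DISTANCE_M[nearest]

d = distance_for_level(130)  # nearest tabulated level is 128
```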
  • The processor 170 may divide the depth image into a plurality of regions, each of which has a first area. The processor 170 may acquire RGB-level data for each divided region. The processor 170 may divide the depth image such that each of the plurality of regions has a size in which two or three beam points of the first lidar are formed. The two or three beam points of the first lidar may match the first area. The processor 170 may acquire a distance value corresponding to the RGB level for each of the divided regions from the table.
  • The processor 170 may divide the depth image into a plurality of first regions, each having a first area, and a plurality of second regions, each having a second area, which is larger than the first area. The processor 170 may acquire RGB-level data for each of the divided regions. The processor 170 may acquire a distance value corresponding to the RGB level for each of the divided regions from the table.
  • The processor 170 may set a region of interest in the depth image. The region of interest may be defined as a region in which the probability that an object is located is high. The processor 170 may divide the region of interest into the plurality of first regions. The processor 170 may set a region, other than the region of interest, as the plurality of second regions in the depth image. The region, other than the region of interest, may be the upper region of the image, which corresponds to the sky, or the lower region of the image, which corresponds to the ground.
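A minimal sketch of this division follows, with fine cells inside a row-band region of interest and coarse cells for the sky and ground regions. The cell sizes are illustrative, and the region of interest is assumed to be aligned to the coarse grid:

```python
def divide_regions(height, width, roi_rows, small=8, large=32):
    """Return (y, x, size) cells tiling the depth image.

    Rows inside roi_rows (top, bottom) get fine 'small' cells, since objects
    are likely there; the remaining rows (sky above, ground below) get
    coarse 'large' cells. roi_rows is assumed aligned to the coarse grid.
    """
    cells = []
    y = 0
    while y < height:
        size = small if roi_rows[0] <= y < roi_rows[1] else large
        for x in range(0, width, size):
            cells.append((y, x, size))
        y += size
    return cells

cells = divide_regions(height=64, width=64, roi_rows=(32, 48))
fine = [c for c in cells if c[2] == 8]
coarse = [c for c in cells if c[2] == 32]
```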
  • The processor 170 may fuse the depth image and the first sensing data using a simultaneous localization and mapping (SLAM) algorithm. The processor 170 may receive first image data photographed at a first time point from the first camera. The processor 170 may receive second image data photographed at the first time point from the second camera. The processor 170 may receive first sensing data acquired at the first time point from the first lidar. The processor 170 may generate a depth image of the first time point based on the first image data photographed at the first time point and the second image data photographed at the first time point. The processor 170 may fuse the first sensing data acquired at the first time point for each of the divided regions in the depth image captured at the first time point. The processor 170 may receive first image data photographed at a second time point from the first camera. The processor 170 may receive second image data photographed at the second time point from the second camera. The processor 170 may generate a depth image of the second time point based on the first image data photographed at the second time point and the second image data photographed at the second time point. The processor 170 may receive first sensing data acquired at the second time point from the first lidar. The processor 170 may fuse the first sensing data acquired at the second time point for each of the divided regions in the depth image of the first time point. The processor 170 may fuse the first sensing data acquired at the second time point for each of the divided regions in the depth image of the second time point. The processor 170 may acquire a moving distance value of the vehicle 10 from the first time point to the second time point. For example, the processor 170 may acquire a moving distance value from the first time point to the second time point based on wheel data received from the wheel sensor of the vehicle 10. 
For example, the processor 170 may acquire a moving distance value from the first time point to the second time point based on location data generated by the location-data-generating device 280. For example, the processor 170 may acquire a moving distance value from the first time point to the second time point based on a sensing value of the object detection sensor (e.g. a camera, a radar, a lidar, an ultrasonic sensor, an infrared sensor, etc.). The processor 170 may apply the moving distance value to the depth image captured at the first time point, and may fuse the first sensing data acquired at the second time point for the depth image.
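The fusion across two time points can be sketched as below: lidar distance samples acquired at the first time point are carried forward by subtracting the vehicle's moving distance, then merged with the fresh samples from the second time point. The dictionary representation and function names are illustrative assumptions:

```python
def propagate_distances(prev, moved):
    """Carry lidar distance samples from the first time point forward:
    the vehicle advanced `moved` metres toward the scene, so each
    stored distance shrinks by that amount. Samples that would fall
    behind the vehicle are dropped."""
    return {region: d - moved for region, d in prev.items() if d > moved}

def fuse(lidar_now, lidar_prev, moved):
    """Fuse the current first sensing data with motion-compensated data
    from the previous time point; fresh measurements take priority."""
    fused = propagate_distances(lidar_prev, moved)
    fused.update(lidar_now)   # current-time samples overwrite predictions
    return fused
```

For example, a region measured at 5 m in the first frame becomes 3 m after the vehicle moves 2 m, while a region re-measured in the second frame keeps its new value.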
  • The processor 170 may receive location data of another vehicle through the communication device 220. The processor 170 may detect an object corresponding to the other vehicle from the depth image. The processor 170 may correct the value of the distance between the vehicle 10 and the other vehicle based on the location data.
  • The processor 170 may receive second sensing data from the second lidar. The second lidar may be mounted in the vehicle 10 at a position different from that of the first lidar, may emit a laser signal toward the outside of the vehicle 10, and may receive a laser signal reflected by an object. The second lidar may be oriented in the same direction as the first lidar. For example, both the first lidar and the second lidar may be oriented in the forward direction of the vehicle 10. For example, both the first lidar and the second lidar may be oriented in the backward direction of the vehicle 10. For example, both the first lidar and the second lidar may be oriented in the leftward direction of the vehicle 10. For example, both the first lidar and the second lidar may be oriented in the rightward direction of the vehicle 10. The processor 170 may further fuse the second sensing data for each of the divided regions in the depth image. By using the first lidar and the second lidar together, it is possible to obtain the effect of expanding the channel of the lidar.
  • The electronic device 100 may include at least one printed circuit board (PCB). The memory 140, the interface unit 180, the power supply unit 190, and the processor 170 may be electrically connected to the printed circuit board.
  • FIG. 4 is a view for explaining the electronic device for a vehicle according to the embodiment of the present disclosure.
  • Referring to FIG. 4, the vehicular electronic device 100 may include a first sensing device 301, a second sensing device 302, a memory 140, a processor 170, and an interface unit 180. Although not shown in FIG. 4, the vehicular electronic device 100 may further include a power supply unit.
  • The first sensing device 301 may be mounted in the vehicle 10. The first sensing device 301 may be mounted in the vehicle 10 so as to enable adjustment of the posture thereof. The first sensing device 301 may enable adjustment of the posture thereof in roll, pitch, and yaw directions. The first sensing device 301 may include a first camera 311 for generating first image data and a first lidar 321 for generating first sensing data. The first sensing device 301 may be implemented in a modular form in which the first camera 311 and the first lidar 321 are coupled while being disposed side by side. The description of the first camera and the first lidar made with reference to FIG. 3 may be applied to the first camera 311 and the first lidar 321. Each of the first camera 311 and the first lidar 321 may enable adjustment of the posture thereof in roll, pitch, and yaw directions.
  • The first sensing device 301 may be mounted in the vehicle 10 at the same height as the second sensing device 302 with respect to the ground.
  • The first camera 311 may be mounted in the vehicle 10 such that a line connecting a first principal point of a first image acquired by the first camera 311 and a second principal point of a second image acquired by the second camera 312 is parallel to a horizontal line.
  • The second sensing device 302 may be mounted in the vehicle 10 so as to be spaced apart from the first sensing device 301. The second sensing device 302 may be mounted in the vehicle 10 so as to enable adjustment of the posture thereof. The second sensing device 302 may enable adjustment of the posture thereof in roll, pitch, and yaw directions. The second sensing device 302 may include a second camera 312 for generating second image data and a second lidar 322 for generating second sensing data. The second sensing device 302 may be implemented in a modular form in which the second camera 312 and the second lidar 322 are coupled while being disposed side by side. The description of the second camera and the second lidar made with reference to FIG. 3 may be applied to the second camera 312 and the second lidar 322. Each of the second camera 312 and the second lidar 322 may enable adjustment of the posture thereof in roll, pitch, and yaw directions.
  • The second camera 312 may be mounted in the vehicle 10 such that a line connecting a second principal point of a second image acquired by the second camera 312 and a first principal point of a first image acquired by the first camera 311 is parallel to a horizontal line.
  • The second sensing device 302 may be mounted in the vehicle 10 at the same height as the first sensing device 301 with respect to the ground.
  • The description made with reference to FIG. 3 may be applied to the memory 140. The description made with reference to FIG. 3 may be applied to the interface unit 180. The description made with reference to FIG. 3 may be applied to the power supply unit.
  • The description made with reference to FIG. 3 may be applied to the processor 170. The processor 170 may generate a depth image based on first image data and second image data. The processor 170 may fuse the first sensing data and the second sensing data for each of the divided regions in the depth image.
  • Hereinafter, a method of operating the vehicular electronic device will be described.
  • The processor 170 may receive first image data from the first camera 311 (S410). The processor 170 may receive second image data from the second camera 312 (S415). The processor 170 may receive first sensing data from the first lidar 321 (S420). The processor 170 may receive second sensing data from the second lidar 322 (S425).
  • The processor 170 may generate a depth image based on the first image data and the second image data (S430). The processor 170 may fuse the first sensing data for each of the divided regions in the depth image (S435).
  • The fusing step S435 may include acquiring, by at least one processor, red-green-blue (RGB)-level data for each region of the depth image, and acquiring, by the at least one processor, a distance value for each region of the depth image based on the table in which distance values for RGB levels are arranged.
  • The fusing step S435 may include dividing, by the at least one processor, the depth image into a plurality of regions, each having a first area, to acquire RGB-level data for each of the divided regions, and acquiring, by the at least one processor, a distance value corresponding to the RGB level for each of the divided regions from the table.
  • The fusing step S435 may include dividing, by the at least one processor, the depth image such that each of the plurality of regions has a size in which two or three beam points of the first lidar are formed.
  • The fusing step S435 may include dividing, by the at least one processor, the depth image into a plurality of first regions, each having a first area, and a plurality of second regions, each having a second area, which is larger than the first area, to acquire RGB-level data for each of the divided regions, and acquiring, by the at least one processor, a distance value corresponding to the RGB level for each of the divided regions from the table.
  • The fusing step S435 may include setting, by the at least one processor, a region of interest in the depth image to divide the region of interest into a plurality of first regions, and dividing, by the at least one processor, a region, other than the region of interest, into a plurality of second regions in the depth image.
  • The fusing step S435 may include fusing, by the at least one processor, the depth image and the first sensing data using a simultaneous localization and mapping (SLAM) algorithm.
  • The fusing step S435 may include receiving, by the at least one processor, first image data photographed at a first time point from the first camera, receiving, by the at least one processor, second image data photographed at the first time point from the second camera, receiving, by the at least one processor, first sensing data acquired at the first time point from the first lidar, generating, by the at least one processor, a depth image of the first time point based on the first image data photographed at the first time point and the second image data photographed at the first time point, and fusing, by the at least one processor, the first sensing data acquired at the first time point for each of the divided regions in the depth image captured at the first time point. The fusing step S435 may further include receiving, by the at least one processor, first image data photographed at a second time point from the first camera, receiving, by the at least one processor, second image data photographed at the second time point from the second camera, and generating, by the at least one processor, a depth image of the second time point based on the first image data photographed at the second time point and the second image data photographed at the second time point. The fusing step (S435) may further include receiving, by the at least one processor, first sensing data acquired at the second time point from the first lidar, fusing, by the at least one processor, the first sensing data acquired at the second time point for each of the divided regions in the depth image of the first time point, and fusing, by the at least one processor, the first sensing data acquired at the second time point for each of the divided regions in the depth image of the second time point. 
The fusing step S435 may further include acquiring, by the at least one processor, a moving distance value of the vehicle from the first time point to the second time point, and applying, by the at least one processor, the moving distance value to the depth image captured at the first time point to fuse the first sensing data acquired at the second time point into the depth image.
  • The fusing step S435 may include receiving, by the at least one processor, location data of another vehicle through the communication device, and detecting, by the at least one processor, an object corresponding to the other vehicle from the depth image to correct the value of the distance between the vehicle and the other vehicle based on the location data.
  • The fusing step S435 may further include receiving, by the at least one processor, second sensing data from the second lidar, and fusing, by the at least one processor, the second sensing data for each of the divided regions in the depth image.
  • FIG. 5 is a view for explaining the first sensing device and the second sensing device according to the embodiment of the present disclosure.
  • A conventional object detection device determines the type of an object through deep learning performed on an RGB image or on a depth image derived from it. An object detection device that relies on images alone measures the distance to an object less accurately than one that uses a distance sensor. The sensing devices 301 and 302 according to the present disclosure may generate an RGB-D map by mapping distance information from the lidar to RGB-pixel information, and may perform deep learning using the RGB-D map information, thereby accurately detecting information about the distance to the object. To this end, in addition to the above-described operation, the processor 170 may further perform generation of an RGB-D map and deep learning based on the RGB-D map. The above-described method of detecting an object by fusing raw data of distance information into an image may be described as low-level sensor fusion.
  • Meanwhile, a general vehicle may include a low-priced lidar. A lidar that is manufactured at low cost may have a relatively small number of layers. For example, each of the first lidar 321 and the second lidar 322 may have four layers. Since the layers of such a lidar are not disposed densely, distance information with respect to pixels of an image may not be accurate.
  • The electronic device for a vehicle according to the present disclosure may implement precise RGB-D by fusing sensing data of the lidar into a depth image.
  • Referring to FIG. 5, the first sensing device 301 may include a first camera 311 and a first lidar 321. The second sensing device 302 may include a second camera 312 and a second lidar 322.
  • The first sensing device 301 and the second sensing device 302 may be mounted in the vehicle 10. For example, the first and second sensing devices 301 and 302 may be mounted in at least one of a bumper, a portion of the inner side of a windshield (the side oriented toward a cabin), a headlamp, a side mirror, a radiator grill, or a roof. When the first sensing device 301 and the second sensing device 302 are mounted in the inner side of the windshield, this is advantageous in that a separate device for removing foreign substances is not necessary because of the wiper. When the first sensing device 301 and the second sensing device 302 are mounted in the roof, this is advantageous in terms of heat dissipation. In addition, when the first sensing device 301 and the second sensing device 302 are mounted in the roof, this is advantageous in terms of an increase in sensing distance. When the first sensing device 301 and the second sensing device 302 are mounted in the headlamp or the radiator grill, they are not exposed to the outside and thus may provide an aesthetic improvement. When the first sensing device 301 and the second sensing device 302 are mounted in the side mirror, this is advantageous in terms of linkage with other sensors mounted in the side mirror.
  • When the mounting distance between the first sensing device 301 and the second sensing device 302 is longer, the sensing distance thereof may increase.
  • The first sensing device 301 may be mounted at the same height as the second sensing device 302. Since the first sensing device 301 and the second sensing device 302 are mounted at the same height, a depth image may be generated. Since the first sensing device 301 and the second sensing device 302 are mounted at the same height, sensing data acquired from the first lidar 321 and the second lidar 322 may be matched to the depth image.
  • FIG. 6 is a view for explaining the lidar according to the embodiment of the present disclosure.
  • Referring to FIG. 6, each of the first lidar 321 and the second lidar 322 may have N channel layers. When the first sensing device 301 and the second sensing device 302 are mounted in the vehicle 10, first sensing data of the first lidar 321 and second sensing data of the second lidar 322 may be used. In this case, the lidars may operate like a lidar having 2N channel layers. That is, the first lidar 321 and the second lidar 322 may operate as a lidar capable of realizing 2N-point vertical detection.
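The 2N-channel effect described above amounts to interleaving the vertical layers of the two lidars, which are mounted so their elevation angles fall between one another. A minimal sketch, assuming each scan is given as a mapping from elevation angle to range samples (the representation is an assumption, not from the disclosure):

```python
def merge_layers(first, second):
    """Interleave the N channel layers of two lidars mounted at offset
    elevations so they behave like a single 2N-channel lidar capable of
    2N-point vertical detection. Inputs map elevation angle (degrees)
    to the list of range samples on that layer."""
    merged = dict(first)
    merged.update(second)
    # emit the layers ordered from lowest to highest elevation
    return [merged[angle] for angle in sorted(merged)]
```

With two 4-layer lidars whose elevations alternate, the merged result sweeps eight layers from bottom to top.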
  • FIG. 7 is a view for explaining the camera according to the embodiment of the present disclosure.
  • Referring to FIG. 7, the first camera 311 may be disposed parallel to the second camera 312. A first image sensor of the first camera 311 may be disposed parallel to a second image sensor of the second camera 312. The first camera 311 and the second camera 312 may be mounted in the vehicle 10 such that a line connecting a first principal point of a first image acquired by the first camera 311 and a second principal point of a second image acquired by the second camera 312 is parallel to a horizontal line in each of the first image and the second image. Alternatively, the first camera 311 and the second camera 312 may be mounted in the vehicle 10 such that a line connecting a first principal point of a first image acquired by the first camera 311 and a second principal point of a second image acquired by the second camera 312 is parallel to the ground.
  • When the distance between the first camera 311 and the second camera 312 is longer, a sensing distance increases, which is advantageous in terms of long-distance sensing. When the distance between the first camera 311 and the second camera 312 is shorter, a sensing distance decreases, which is advantageous in terms of near field sensing.
  • The operation of the vehicular electronic device will be described with reference to FIGS. 8 to 14. The operation to be described below may be performed by the processor of the vehicular electronic device.
  • FIG. 8 is a view for explaining a depth image according to the embodiment of the present disclosure.
  • Referring to FIG. 8, reference numeral 810 denotes an image captured by the first camera 311 or the second camera 312. Reference numeral 820 denotes a depth image generated using a first image of the first camera 311 and a second image of the second camera 312. It is possible to verify a relative location difference for each pixel in the depth image, but an error may occur in the distance between the vehicle 10 and an object 811. This error may be compensated for using at least one of first sensing data of the first lidar 321 or second sensing data of the second lidar 322. By fusing the sensing data of the lidar into the depth image, it is possible to more accurately detect an object using detection and classification of the object, distance information of the lidar, and reflection intensity information.
  • The result generated by fusing the sensing data of the lidar into the depth image data may be referred to as RGB-D data. “RGB” is a value corresponding to red, green, and blue of an image. “D” may include distance information and reflection intensity information based on the sensing data of the lidar.
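The RGB-D element just defined can be written down as a small data structure; the field names are illustrative, and the disclosure only fixes that "D" carries the lidar's distance and reflection-intensity information alongside the camera's RGB values:

```python
from dataclasses import dataclass

@dataclass
class RGBDPixel:
    """One element of the fused RGB-D map: camera colour plus the
    lidar-derived channel 'D' (distance and reflection intensity)."""
    r: int
    g: int
    b: int
    distance_m: float   # distance information from the lidar sensing data
    intensity: float    # reflection intensity from the lidar sensing data
```

A fused map is then simply a grid of such elements, one per pixel or per divided region.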
  • FIG. 9 is a view for explaining a depth image, which is divided into a plurality of regions, according to the embodiment of the present disclosure.
  • Referring to FIG. 9, the processor 170 may correct distance information using a lookup table in order to create an RGB-D map. In the depth image, color represents distance. If the pixels within a similar region range of the depth image have the same RGB color, they can be corrected using the sensing data (distance information) of the lidar regardless of time.
  • The processor 170 may divide the depth image into a plurality of regions (M*N) having a first area, as indicated by reference numeral 910. The processor 170 may divide the depth image into regions such that two or three points of the lidar are formed in one region 911 of the divided regions.
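The rule that each divided region should capture two or three lidar beam points suggests deriving the cell size from the lidar's angular resolution. A sketch under the assumption that the camera and lidar share the same horizontal field of view; the function and parameter names are illustrative:

```python
def region_size_px(image_width_px, horizontal_fov_deg,
                   lidar_step_deg, points_per_region=2.5):
    """Choose a cell width so that roughly two or three lidar beam
    points land in each divided region of the depth image."""
    # pixels separating two adjacent lidar beam points on the image
    px_per_beam = image_width_px * lidar_step_deg / horizontal_fov_deg
    return max(1, round(px_per_beam * points_per_region))
```

For a 1280-pixel-wide image over a 64-degree field of view and a lidar with 0.2-degree horizontal steps, each beam point is about 4 pixels apart, giving roughly 10-pixel cells.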
  • The processor 170 may divide the depth image into regions 921, 922 and 923 having different sizes, as indicated by reference numeral 920. The processor 170 may divide the depth image into a plurality of first regions 921, each having a first area, a plurality of second regions 922, each having a second area, which is larger than the first area, and a plurality of third regions 923, each having a third area, which is larger than the second area. The amount of calculation may be reduced by differentiating the sizes of the divided areas depending on the region of interest.
  • If A is the sampling period, then at t=0 the processor may select one region from the divided regions in the depth image and obtain the RGB value of each pixel that corresponds to a point cloud of the lidar (in the depth image, the RGB value encodes a distance value), thereby creating a lookup table that stores the sensing data (a distance value) of the lidar corresponding to each RGB value.
  • The distance information of the lidar corresponding to the RGB value may be updated by repeating the process when t=A sec and when t=2A sec. By repeating this process, it is possible to create a lookup table that has distance information of the lidar corresponding to the color of the depth image in one region of the depth image. If this process is performed for all regions, it is possible to obtain lidar correction information corresponding to the distance information of the depth image in all regions. An RGB-D map may be created by adding the distance information of the lidar obtained with respect to each pixel to the RGB image data of the camera. RGB-D may be defined as information in which distance information is included in image information. In addition to distance information, intensity or other additional information of the lidar may be assigned to D.
  • The lookup table may be created before the vehicle 10 is released from a factory, and the vehicle 10 may update the lookup table according to the settings of the vehicle while traveling.
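The lookup-table creation at t = 0, A sec, 2A sec, ... can be sketched as a repeated update of RGB-to-distance entries. The exponential-moving-average smoothing below is an assumption for illustration; the disclosure only states that entries are created and then updated at each sampling period:

```python
def update_lut(lut, depth_rgb, lidar_dist, alpha=0.2):
    """Accumulate a lookup table mapping a depth-image RGB level to a
    corrected lidar distance. Called once per sampling period; repeated
    samples refine earlier entries (here via a moving average)."""
    for rgb, dist in zip(depth_rgb, lidar_dist):
        if rgb in lut:
            lut[rgb] = (1 - alpha) * lut[rgb] + alpha * dist
        else:
            lut[rgb] = float(dist)
    return lut
```

Run once per region at each sampling instant; after enough passes every RGB level seen in the region carries a lidar-corrected distance, and running the procedure for all regions yields correction information for the whole image.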
  • FIG. 10 is a view for explaining a table in which distance values for respective colors are arranged according to the embodiment of the present disclosure.
  • Referring to FIG. 10, when the lidar correction lookup table for the depth image is completed, the location information of the camera may be corrected accurately without the help of the lidar. When the RGB information of the colors of the depth image can be expressed as (0 to 255, 0 to 255, 0 to 255), the number of R, G and B combinations is 256×256×256 (16,777,216), and thus an extremely large lookup table would be required, which may cause a problem in the storage and processing of data.
  • It is more efficient, in terms of memory usage, to reduce the R, G and B bit depth of the depth image when creating the lookup table. If the depth image is quantized to, for example, 4 bits per channel, the lookup table may be created in a (0 to 15, 0 to 15, 0 to 15) form, as indicated by reference numeral 1010. Alternatively, as indicated by reference numeral 1020, if a single-color image is formed among red, green and blue, only 256 distance values need to be distinguished, and the values can be expressed simply using a 256-entry lookup table.
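The bit-depth reduction above is a simple right shift per channel; at 4 bits the table index space shrinks from 256³ (16,777,216) entries to 16³ (4,096). A minimal sketch (the function name is illustrative):

```python
def quantize_rgb(r, g, b, bits=4):
    """Shrink 8-bit RGB channels to `bits` bits each so the lookup
    table needs only (2**bits)**3 entries instead of 256**3."""
    shift = 8 - bits
    return (r >> shift, g >> shift, b >> shift)
```

The quantized triple is then used as the lookup-table key in place of the full-resolution RGB value.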
  • If the size of the lookup table is estimated according to the resolution of the lidar and the RGB bit of the depth image is set according thereto, it is possible to generate a lookup table having an appropriate size.
  • FIG. 11 is a view for explaining the operation of performing sensor fusion using a SLAM algorithm according to the embodiment of the present disclosure.
  • Referring to FIG. 11, in order to form an RGB-D map, the processor 170 may measure the moving distance of the vehicle 10 using SLAM and may estimate the location to which each pixel of the image has moved using the measured distance. It is possible to create a map of the surroundings of the vehicle by applying a SLAM algorithm to the lidar data and to accurately estimate the distance that the vehicle 10 has recently moved using the features of the map. By combining the wheel sensor of the vehicle 10 with the SLAM algorithm of the lidar, the location of the vehicle 10 can be estimated from the wheel sensor alone even when the lidar detects no features. When a feature of the lidar is available, it is possible to calculate the moving distance of the vehicle 10 using the lidar and to correct the moving distance of the vehicle 10 detected by the wheel sensor using the calculated value.
  • Because the point cloud information of the lidar is not dense, lidar distance information may be present in some pixels, but may not be present in other pixels. When the vehicle 10 moves forwards, the pixel in which the lidar distance information is contained moves to another position in the image. If the distance that the vehicle 10 has moved is accurately verified, it is possible to verify the position to which the pixel has moved and to input distance information to the corresponding pixel.
  • Referring to FIG. 11, when lidar information and the corresponding RGB information are present in the t-th frame, the distance information corrected in the (t+1)-th frame according to the movement of the vehicle 10 is obtained by subtracting the moving distance of the vehicle 10 from the distance information of the previous pixel. If the distance information of all pixels containing lidar distance information is updated over time through the above process, distance information accumulates in many pixels, thereby generating distance information for the RGB pixels and consequently creating an RGB-D map.
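The per-pixel update just described can be sketched with a pinhole-camera assumption: as the vehicle advances, a pixel carrying lidar distance slides radially away from the principal point while its stored distance shrinks by the moving distance. The radial-scaling model and the names below are assumptions for illustration; the disclosure itself specifies only the distance subtraction:

```python
def move_pixel(x, y, cx, cy, dist, moved):
    """Predict where a pixel carrying lidar distance `dist` appears in
    the next frame after the vehicle advances `moved` metres. Under a
    pinhole model, image coordinates scale by dist / (dist - moved)
    about the principal point (cx, cy)."""
    new_dist = dist - moved          # corrected distance in frame t+1
    scale = dist / new_dist          # radial expansion of the image point
    return cx + (x - cx) * scale, cy + (y - cy) * scale, new_dist
```

For example, a pixel 10 px right of the principal point at 20 m moves to 20 px right of it once the vehicle has advanced 10 m.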
  • If the ratio of pixels carrying only RGB to pixels carrying RGB-D in the image is 50:50, it is possible to provide RGB-D information for almost every RGB pixel by enlarging each RGB pixel two to three times.
  • FIG. 12 is a view for explaining the operation of performing sensor fusion using V2X according to the embodiment of the present disclosure.
  • Referring to FIG. 12, in order to form an RGB-D map, the processor 170 may perform correction using data received through the communication device 220. In this case, the communication device 220 may use a V2X or 5G communication scheme. Since the location of the preceding vehicle can be verified through V2X or 5G communication, the distance value of the depth image may be corrected by matching the location of the preceding vehicle, received through communication, to the depth image. The other vehicle may be recognized through the images of the cameras 311 and 312, and its outline may be extracted from the depth image. RGB-D information may then be generated using the distance information received through the communication device 220. The method described with reference to FIG. 12 may be applied in terms of safety when it is impossible to update the lookup table due to failure of the lidar or adhesion of foreign substances.
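The V2X-based correction can be sketched as comparing the stereo-estimated distance of the detected vehicle against the distance implied by its communicated location, and substituting the communicated value when the two disagree. The tolerance, the flat 2-D positions, and the function name are illustrative assumptions:

```python
import math

def correct_with_v2x(depth_dist, ego_pos, other_pos, tol=0.5):
    """Correct the depth-image distance of an object detected as
    another vehicle using its V2X/5G-reported location. If the stereo
    estimate deviates from the communicated distance by more than
    `tol` metres, the communicated value is used instead."""
    v2x_dist = math.dist(ego_pos, other_pos)
    return v2x_dist if abs(depth_dist - v2x_dist) > tol else depth_dist
```

A preceding vehicle reported 10 m ahead but estimated at 12 m from the depth image would thus be corrected to 10 m, while a 0.2 m disagreement would be left alone.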
  • FIGS. 13 and 14 are views for explaining the operation of performing high-level fusion and low-level fusion according to the embodiment of the present disclosure.
  • Referring to FIG. 13, the vehicular electronic device 100 may output two results having redundancy in terms of failure safety. In detail, the vehicular electronic device 100 may perform high-level fusion and low-level fusion.
  • The vehicular electronic device 100 may perform fusion using location information of an object detected by the respective sensors (the radar, the lidar, and the camera). Such fusion may be understood as high-level fusion. The vehicular electronic device 100 may perform fusion in the stage of low data (RGB image and distance information) of the sensors (the radar, the lidar, and the camera). Such fusion may be understood as low-level fusion. Thereafter, the vehicular electronic device 100 may detect an object through deep learning.
  • Referring to FIG. 14, the processor 170 may output the high-level fusion result value and the low-level fusion result value through algorithm sampling without performing synchronization setting thereof (hereinafter referred to as “output 1”). For example, the result value of the high-level sensor fusion may be output every 40 msec. For example, the result value of the low-level sensor fusion may be output every 25 msec. The processor 170 may receive and determine the result value of the high-level sensor fusion and the result value of the low-level sensor fusion in order to use a result value that is more suitable for control or to predict a system error when the two result values are different.
  • The processor 170 may output the high-level fusion result value and the low-level fusion result value through synchronization setting thereof (hereinafter referred to as “output 2”). For example, the result value of the high-level sensor fusion and the result value of the low-level sensor fusion may be output every 30 msec.
  • The method of output 1 imposes a large load but has the advantage that a relatively safe system can be built. The method of output 2 imposes a small load and is capable of detecting objects using the result values of various algorithms.
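The two output modes can be sketched as a tick schedule: without synchronization ("output 1") the high-level and low-level results are emitted on independent 40 ms and 25 ms periods, while with synchronization ("output 2") both share one period. The periods match the examples above; the scheduler itself and its names are an illustrative assumption:

```python
def output_ticks(horizon_ms, high_ms=40, low_ms=25, sync_ms=None):
    """Return (high_level_ticks, low_level_ticks) in milliseconds up to
    `horizon_ms`. With sync_ms set, both result streams are emitted on
    the same synchronized period ("output 2"); otherwise they tick
    independently ("output 1")."""
    if sync_ms is not None:
        ticks = list(range(sync_ms, horizon_ms + 1, sync_ms))
        return ticks, ticks
    high = list(range(high_ms, horizon_ms + 1, high_ms))
    low = list(range(low_ms, horizon_ms + 1, low_ms))
    return high, low
```

In output 1 the processor can cross-check the two streams whenever both have produced a recent value, which is what enables the error prediction described above.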
  • The aforementioned present disclosure may be implemented as computer-readable code stored on a computer-readable recording medium. The computer-readable recording medium may be any type of recording device in which data is stored in a computer-readable manner. Examples of the computer-readable recording medium include a Hard Disk Drive (HDD), a Solid-State Disk (SSD), a Silicon Disk Drive (SDD), Read-Only Memory (ROM), Random-Access Memory (RAM), CD-ROM, magnetic tapes, floppy disks, optical data storage devices, carrier waves (e.g. transmission via the Internet), etc. In addition, the computer may include a processor and a controller. The above embodiments are therefore to be construed in all aspects as illustrative and not restrictive. It is intended that the present disclosure cover the modifications and variations of this disclosure provided they come within the scope of the appended claims and their equivalents.
  • DESCRIPTION OF REFERENCE NUMERALS
      • 10: vehicle
      • 100: vehicular electronic device

Claims (20)

1. An electronic device for a vehicle, comprising:
a processor configured to:
receive first image data from a first camera,
receive second image data from the first camera or a second camera,
receive first sensing data from a first lidar,
generate a depth image based on the first image data and the second image data, and
perform fusion of the first sensing data for each of divided regions in the depth image.
2. The electronic device of claim 1, wherein the processor is configured to:
acquire red-green-blue (RGB)-level data for each region of the depth image, and
acquire a distance value for each region of the depth image based on a table in which distance values for RGB-levels are arranged.
3. The electronic device of claim 2, wherein the processor is configured to:
divide the depth image into a plurality of regions, each of the regions having a first area,
acquire RGB-level data for each of divided regions, and
acquire a distance value corresponding to an RGB-level for each of the divided regions in the table.
4. The electronic device of claim 3, wherein the processor is configured to divide the depth image such that each of the plurality of regions has a size in which two or three beam points of the first lidar are formed.
5. The electronic device of claim 2, wherein the processor is configured to:
divide the depth image into a plurality of first regions, each of the first regions having a first area, and a plurality of second regions, each of the second regions having a second area, which is larger than the first area,
acquire RGB-level data for each of divided regions, and
acquire a distance value corresponding to an RGB level for each of the divided regions from the table.
6. The electronic device of claim 5, wherein the processor is configured to:
set a region of interest in the depth image, divide the region of interest into the plurality of first regions, and
set a region except the region of interest as the plurality of second regions in the depth image.
7. The electronic device of claim 1, wherein the processor is configured to perform fusion of the depth image and the first sensing data using a simultaneous localization and mapping (SLAM) algorithm.
8. The electronic device of claim 7, wherein the processor is configured to:
receive first image data photographed at a first time point from the first camera,
receive second image data photographed at the first time point from the second camera,
receive first sensing data acquired at the first time point from the first lidar,
generate a depth image of the first time point based on the first image data photographed at the first time point and the second image data photographed at the first time point, and
perform fusion of the first sensing data acquired at the first time point for each of the divided regions in the depth image of the first time point.
9. The electronic device of claim 8, wherein the processor is configured to:
receive first image data photographed at a second time point from the first camera,
receive second image data photographed at the second time point from the second camera, and
generate a depth image of the second time point based on the first image data photographed at the second time point and the second image data photographed at the second time point.
10. The electronic device of claim 9, wherein the processor is configured to:
receive first sensing data acquired at the second time point from the first lidar,
perform fusion of the first sensing data acquired at the second time point for each of the divided regions in the depth image of the first time point, and
perform fusion of the first sensing data acquired at the second time point for each of the divided regions in the depth image of the second time point.
11. The electronic device of claim 10, wherein the processor is configured to:
acquire a moving distance value of a vehicle from the first time point to the second time point,
apply the moving distance value to the depth image of the first time point, and
perform fusion of the first sensing data acquired at the second time point into the depth image.
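Claims 8-11 fuse lidar ranges acquired at a later time point into both the current depth image and the motion-compensated earlier one. A toy sketch, assuming a purely longitudinal ego-motion model and simple averaging as the fusion rule (neither is prescribed by the claims):

```python
def motion_compensate(prev_regions, moved):
    """Apply the vehicle's moving distance between the two time points to the
    depth image of the first time point (claim 11): each stored distance to a
    static object ahead shrinks by the distance travelled toward it. This is
    an illustrative longitudinal-only approximation."""
    return {k: max(d - moved, 0.0) for k, d in prev_regions.items()}

def fuse(depth_regions, lidar_regions):
    """Fuse per-region lidar distances into per-region depth distances; where
    both sensors report a region, take the mean as an assumed fusion rule."""
    fused = dict(depth_regions)
    for k, d in lidar_regions.items():
        fused[k] = (fused[k] + d) / 2 if k in fused else d
    return fused
```

A region seen at 10.0 m at the first time point becomes 8.0 m after 2.0 m of travel, and averaging with an 8.4 m lidar return yields 8.2 m.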
12. The electronic device of claim 1, wherein the processor is configured to:
receive location data of another vehicle through a communicator,
detect an object corresponding to the other vehicle from the depth image, and
correct a value of a distance between the vehicle and the other vehicle based on the location data.
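Claim 12 corrects the stereo-derived distance to another vehicle using its location report received over the communicator. A sketch assuming planar coordinates and a fixed blending weight (the claim does not prescribe a correction formula; `trust` is an illustrative parameter):

```python
import math

def corrected_distance(ego_pos, other_pos, depth_estimate, trust=0.5):
    """Correct the depth-image distance to another vehicle using its reported
    V2V location (claim 12). The reported range is the planar distance between
    the two positions; blending weight `trust` is an assumption."""
    reported = math.hypot(other_pos[0] - ego_pos[0], other_pos[1] - ego_pos[1])
    return trust * reported + (1 - trust) * depth_estimate
```

With the other vehicle reported at (3, 4) relative to the ego vehicle (5.0 m away) and a 6.0 m stereo estimate, the corrected value is 5.5 m.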
13. The electronic device of claim 1, wherein the processor is configured to receive second sensing data from a second lidar,
and further perform fusion of the second sensing data for each of the divided regions in the depth image.
14. An electronic device for a vehicle, comprising:
a first sensing device mounted in a vehicle, the first sensing device comprising a first camera generating first image data and a first lidar generating first sensing data;
a second sensing device mounted in the vehicle and spaced apart from the first sensing device, the second sensing device comprising a second camera generating second image data and a second lidar generating second sensing data; and
at least one processor configured to generate a depth image based on the first image data and the second image data and perform fusion of the first sensing data and the second sensing data for each of divided regions in the depth image.
15. The electronic device of claim 14, wherein the second sensing device is mounted in the vehicle at the same height as the first sensing device with respect to the ground.
16. The electronic device of claim 15, wherein the first camera is mounted in the vehicle such that a line connecting a first principal point of a first image acquired by the first camera and a second principal point of a second image acquired by the second camera is parallel to a horizontal line.
17. A method of operating an electronic device for a vehicle, the method comprising:
receiving, by at least one processor, first image data from a first camera;
receiving, by the at least one processor, second image data from a second camera;
receiving, by the at least one processor, first sensing data from a first lidar;
generating, by the at least one processor, a depth image based on the first image data and the second image data; and
fusing, by the at least one processor, the first sensing data for each of the divided regions in the depth image.
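The depth-image generation step of claim 17 rests on the standard rectified-stereo relation Z = f * B / d, where f is the focal length in pixels, B the camera baseline, and d the disparity. A minimal sketch with example rig parameters (the patent does not specify the stereo algorithm):

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Depth of a point from stereo disparity: Z = f * B / d (pinhole model).
    focal_px and baseline_m are camera-rig parameters; values are examples."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px
```

For example, a 50-pixel disparity with a 1000-pixel focal length and a 0.5 m baseline gives a depth of 10 m.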
18. The method of claim 17, wherein the fusing comprises:
acquiring, by the at least one processor, red-green-blue (RGB) data for each region of the depth image; and
acquiring, by the at least one processor, a distance value for each region of the depth image based on a table in which distance values for colors are arranged.
19. The method of claim 17, wherein the fusing comprises fusing, by the at least one processor, the depth image and the first sensing data using a simultaneous localization and mapping (SLAM) algorithm.
20. The method of claim 17, wherein the fusing comprises:
receiving location data of another vehicle through a communicator;
detecting an object corresponding to the other vehicle from the depth image; and
correcting a value of a distance to the object based on the location data.
US16/603,049 2019-05-31 2019-05-31 Electronic device for vehicle and method of operating electronic device for vehicle Active 2040-02-24 US11507789B2 (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/KR2019/006623 WO2020241954A1 (en) 2019-05-31 2019-05-31 Vehicular electronic device and operation method of vehicular electronic device

Publications (2)

Publication Number Publication Date
US20210406618A1 true US20210406618A1 (en) 2021-12-30
US11507789B2 US11507789B2 (en) 2022-11-22

Family

ID=68067789

Family Applications (1)

Application Number Title Priority Date Filing Date
US16/603,049 Active 2040-02-24 US11507789B2 (en) 2019-05-31 2019-05-31 Electronic device for vehicle and method of operating electronic device for vehicle

Country Status (3)

Country Link
US (1) US11507789B2 (en)
KR (1) KR20190107283A (en)
WO (1) WO2020241954A1 (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20210293963A1 (en) * 2015-04-01 2021-09-23 VayaVision Sensing, Ltd. Apparatus for acquiring 3-dimensional maps of a scene
US11416998B2 (en) * 2019-07-30 2022-08-16 Microsoft Technology Licensing, Llc Pixel classification to reduce depth-estimation error
US11904850B2 (en) * 2020-08-25 2024-02-20 Hyundai Mobis Co., Ltd. System for and method of recognizing road surface
US11940804B2 (en) 2019-12-17 2024-03-26 Motional Ad Llc Automated object annotation using fused camera/LiDAR data points

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP7290104B2 (en) * 2019-12-23 2023-06-13 株式会社デンソー SELF-LOCATION ESTIMATING DEVICE, METHOD AND PROGRAM
US20220113419A1 (en) * 2020-10-13 2022-04-14 Waymo, LLC LIDAR Based Stereo Camera Correction

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20080113982A (en) * 2007-06-26 2008-12-31 주식회사 뉴크론 Apparatus and method for providing 3d information of topography and feature on the earth
US20140267243A1 (en) * 2013-03-13 2014-09-18 Pelican Imaging Corporation Systems and Methods for Synthesizing Images from Image Data Captured by an Array Camera Using Restricted Depth of Field Depth Maps in which Depth Estimation Precision Varies
US20160295198A1 (en) * 2015-01-08 2016-10-06 David G. Grossman Depth Sensor
US20170243352A1 (en) * 2016-02-18 2017-08-24 Intel Corporation 3-dimensional scene analysis for augmented reality operations
CN110135485A (en) * 2019-05-05 2019-08-16 浙江大学 The object identification and localization method and system that monocular camera is merged with millimetre-wave radar
US10699430B2 (en) * 2018-10-09 2020-06-30 Industrial Technology Research Institute Depth estimation apparatus, autonomous vehicle using the same, and depth estimation method thereof

Family Cites Families (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5947051A (en) * 1997-06-04 1999-09-07 Geiger; Michael B. Underwater self-propelled surface adhering robotically operated vehicle
US10848731B2 (en) * 2012-02-24 2020-11-24 Matterport, Inc. Capturing and aligning panoramic image and depth data
US9020637B2 (en) * 2012-11-02 2015-04-28 Irobot Corporation Simultaneous localization and mapping for a mobile robot
KR101392357B1 (en) * 2012-12-18 2014-05-12 조선대학교산학협력단 System for detecting sign using 2d and 3d information
US20150077560A1 (en) * 2013-03-22 2015-03-19 GM Global Technology Operations LLC Front curb viewing system based upon dual cameras
US9037396B2 (en) * 2013-05-23 2015-05-19 Irobot Corporation Simultaneous localization and mapping for a mobile robot
US20160014395A1 (en) * 2014-07-10 2016-01-14 Arete Associates Data fusion processing to identify obscured objects
US9933264B2 (en) * 2015-04-06 2018-04-03 Hrl Laboratories, Llc System and method for achieving fast and reliable time-to-contact estimation using vision and range sensor data for autonomous navigation
KR101772178B1 (en) * 2015-12-04 2017-08-25 엘지전자 주식회사 Land mark detecting apparatus and land mark detection method for vehicle
US20170253330A1 (en) * 2016-03-04 2017-09-07 Michael Saigh Uav policing, enforcement and deployment system
KR20180072139A (en) * 2016-12-21 2018-06-29 현대자동차주식회사 Vehicle and method for controlling thereof
US10699438B2 (en) * 2017-07-06 2020-06-30 Siemens Healthcare Gmbh Mobile device localization in complex, three-dimensional scenes
US10447973B2 (en) * 2017-08-08 2019-10-15 Waymo Llc Rotating LIDAR with co-aligned imager
JP2020507137A (en) * 2017-12-11 2020-03-05 ベイジン ディディ インフィニティ テクノロジー アンド ディベロップメント カンパニー リミティッド System and method for identifying and positioning objects around a vehicle
DE102018203590A1 (en) * 2018-03-09 2019-09-12 Conti Temic Microelectronic Gmbh Surroundview system with adapted projection surface
US10798368B2 (en) * 2018-03-13 2020-10-06 Lyft, Inc. Exposure coordination for multiple cameras
CN108663677A (en) * 2018-03-29 2018-10-16 上海智瞳通科技有限公司 A kind of method that multisensor depth integration improves target detection capabilities
US20190349571A1 (en) * 2018-05-11 2019-11-14 Ford Global Technologies, Llc Distortion correction for vehicle surround view camera projections
US10846923B2 (en) * 2018-05-24 2020-11-24 Microsoft Technology Licensing, Llc Fusion of depth images into global volumes
US11100339B2 (en) * 2019-05-20 2021-08-24 Zoox, Inc. Closed lane detection


Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20210293963A1 (en) * 2015-04-01 2021-09-23 VayaVision Sensing, Ltd. Apparatus for acquiring 3-dimensional maps of a scene
US11604277B2 (en) * 2015-04-01 2023-03-14 Vayavision Sensing Ltd. Apparatus for acquiring 3-dimensional maps of a scene
US11725956B2 (en) 2015-04-01 2023-08-15 Vayavision Sensing Ltd. Apparatus for acquiring 3-dimensional maps of a scene
US11416998B2 (en) * 2019-07-30 2022-08-16 Microsoft Technology Licensing, Llc Pixel classification to reduce depth-estimation error
US11940804B2 (en) 2019-12-17 2024-03-26 Motional Ad Llc Automated object annotation using fused camera/LiDAR data points
US11904850B2 (en) * 2020-08-25 2024-02-20 Hyundai Mobis Co., Ltd. System for and method of recognizing road surface

Also Published As

Publication number Publication date
WO2020241954A1 (en) 2020-12-03
US11507789B2 (en) 2022-11-22
KR20190107283A (en) 2019-09-19

Similar Documents

Publication Publication Date Title
US11507789B2 (en) Electronic device for vehicle and method of operating electronic device for vehicle
KR102554643B1 (en) Multiple operating modes to expand dynamic range
US10479269B2 (en) Lighting apparatus for vehicle and vehicle having the same
US20210122364A1 (en) Vehicle collision avoidance apparatus and method
US10496892B2 (en) Apparatus for providing around view image, and vehicle
US11242068B2 (en) Vehicle display device and vehicle
EP3229458A1 (en) Driver assistance apparatus
JPWO2017122552A1 (en) Image processing apparatus and method, program, and image processing system
US20210362733A1 (en) Electronic device for vehicle and method of operating electronic device for vehicle
US11100675B2 (en) Information processing apparatus, information processing method, program, and moving body
US20240075866A1 (en) Information processing apparatus, information processing method, photographing apparatus, lighting apparatus, and mobile body
KR102470298B1 (en) A method of correcting cameras and device thereof
KR102077575B1 (en) Vehicle Driving Aids and Vehicles
WO2020031812A1 (en) Information processing device, information processing method, information processing program, and moving body
US20210327173A1 (en) Autonomous vehicle system and autonomous driving method for vehicle
US20210362727A1 (en) Shared vehicle management device and management method for shared vehicle
WO2019163315A1 (en) Information processing device, imaging device, and imaging system
US20210354634A1 (en) Electronic device for vehicle and method of operating electronic device for vehicle
CN116359943A (en) Time-of-flight camera using passive image sensor and existing light source
US11414097B2 (en) Apparatus for generating position data, autonomous vehicle and method for generating position data
KR102662730B1 (en) Driver assistance apparatus and method thereof
EP3875327B1 (en) Electronic device for vehicle, operating method of electronic device for vehicle
US20220120568A1 (en) Electronic device for vehicle, and method of operating electronic device for vehicle
US20220076580A1 (en) Electronic device for vehicles and operation method of electronic device for vehicles
KR102124998B1 (en) Method and apparatus for correcting a position of ADAS camera during driving

Legal Events

Date Code Title Description
FEPP Fee payment procedure

Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS

STCF Information on status: patent grant

Free format text: PATENTED CASE