WO2024115493A1 - Electronic device and method


Publication number
WO2024115493A1
Authority
WO
WIPO (PCT)
Application number
PCT/EP2023/083378
Other languages
French (fr)
Inventor
Anthony ANTOUN
Original Assignee
Sony Semiconductor Solutions Corporation
Sony Depthsensing Solutions Sa/Nv


Abstract

An electronic device comprising circuitry configured to detect a non-line of sight object based on a comparison of ToF information obtained from reflected light received by a ToF imaging sensor with model-based information obtained from a model-based prediction of light reflection.

Description

ELECTRONIC DEVICE AND METHOD

TECHNICAL FIELD

The present disclosure generally pertains to the field of imaging, and in particular to devices and methods for multi-modal image capturing.

TECHNICAL BACKGROUND

Autonomous or semi-autonomous vehicles are equipped with environment sensors that detect the vehicle's surroundings and whose data are evaluated in a control unit by means of suitable software. Traditional 2D cameras are complemented by other camera technologies such as stereo cameras, IR cameras, RADAR, LiDAR, and Time-of-Flight (ToF) cameras. On the basis of the information obtained by such sensors, a control unit can automatically trigger and execute braking, speed, distance, compensation and/or evasive action controls via appropriate actuators. The control unit can also warn or inform the driver of the vehicle about distances to objects around the vehicle, and the control unit can assist during parking or lane changing. Environment sensors typically provide their measurements in the form of point clouds. The point clouds provided by the sensors are used to obtain reliable information about possible objects in the vehicle's path or on a collision course with the vehicle. Such technologies have become increasingly important in recent years, as driver assistance systems and systems for autonomous driving rely on techniques that enable 3D spatial recognition. Although there exist techniques for driver assistance, it is generally desirable to improve these existing techniques.

SUMMARY

According to a first aspect, the disclosure provides an electronic device comprising circuitry configured to detect a non-line of sight object based on a comparison of ToF information obtained from reflected light received by a ToF imaging sensor with model-based information obtained from a model-based prediction of light reflection.

According to a further aspect, the disclosure provides a method comprising detecting a non-line of sight object based on a comparison of ToF information obtained from reflected light received by a ToF imaging sensor with model-based information obtained from a model-based prediction of light reflection.

According to a further aspect, the disclosure provides a computer program comprising instructions which are configured to, when executed on a processor, perform detecting a non-line of sight object based on a comparison of ToF information obtained from reflected light received by a ToF imaging sensor with model-based information obtained from a model-based prediction of light reflection.

Further aspects are set forth in the dependent claims, the following description, and the drawings.
BRIEF DESCRIPTION OF THE DRAWINGS

Embodiments are explained by way of example with respect to the accompanying drawings, in which:

Fig.1 schematically shows a vehicle with a driver assistance system and ToF camera for parking spot detection;
Fig.2 schematically shows an example of a configuration of a driver assistance system for parking spot detection;
Fig.3 schematically shows an example of a point cloud as produced by a ToF Datapath of a driver assistance system for parking spot detection within a parking garage;
Fig.4 shows an example of a 3D model of a detail of a scene as produced by 3D reconstruction;
Fig.5 schematically shows lines of sight of a driver assistance system for parking spot detection;
Fig.6 schematically shows a vehicle with a driver assistance system for parking spot detection based on a measurement of multi-path reflections in the case that a parking spot is empty;
Fig.7a schematically shows a first example of a derivation, by the driver assistance system for parking spot detection, of vehicle models of respective parked vehicles from a reconstructed 3D model;
Fig.7b schematically shows a second example of a derivation, by the driver assistance system for parking spot detection, of vehicle models of respective parked vehicles from a reconstructed 3D model;
Fig.8 schematically shows a process of determining, by means of raytracing, a travel time of an illumination light beam which is reflected between two vehicles, based on vehicle models as the objects in the scene;
Fig.9a schematically shows an example of predicted photon arrival times and a measured dToF histogram for the case that a parking spot is empty;
Fig.9b schematically shows an example of predicted phasors and a measured iToF phasor diagram for the case that a parking spot is empty;
Fig.10 schematically shows a vehicle with a driver assistance system and ToF camera for parking spot detection when the parking spot is not empty;
Fig.11 schematically shows a vehicle with a driver assistance system for parking spot detection based on a measurement of multi-path reflections when the parking spot is not empty;
Fig.12a schematically shows an example of predicted photon arrival times and a measured dToF histogram for the case that a parking spot is not empty;
Fig.12b schematically shows an example of predicted phasors and a measured iToF phasor diagram for the case that a parking spot is not empty;
Fig.13a schematically shows a flowchart of a process of determining a state of a parking spot in the case that a dToF camera is employed;
Fig.13b schematically shows a flowchart of a process of determining a state of a parking spot in the case that an iToF camera is employed;
Fig.14a schematically shows the operation principle of a SPAD photodiode;
Fig.14b schematically shows the basic operational principle of an indirect Time-of-Flight imaging system which can be used for depth sensing;
Fig.15 schematically provides an example of a process of determining a pixel value according to a "photon counting" approach;
Fig.16 shows an example of implementing a 3D reconstruction;
Fig.17 is a block diagram depicting an example of schematic configuration of a vehicle control system;
Fig.18 is a diagram of assistance in explaining an example of installation positions of an outside-vehicle information detecting section and an imaging section;
Fig.19 schematically shows a diagram for binary Bayesian hypothesis testing to classify a measurement result in the case of an iToF measurement.
DETAILED DESCRIPTION OF EMBODIMENTS

Before a detailed description of the embodiments under reference of Fig.1 to Fig.19, general explanations are made.

The embodiments described below in more detail disclose an electronic device comprising circuitry configured to detect a non-line of sight object based on a comparison of ToF information obtained from reflected light received by a ToF imaging sensor with model-based information obtained from a model-based prediction of light reflection.

Circuitry may include a processor, a memory (RAM, ROM, or the like), a storage, input means (mouse, keyboard, camera, etc.), output means (display (e.g., liquid crystal, (organic) light emitting diode, etc.), loudspeakers, etc.), a (wireless) interface, etc., as is generally known for electronic devices (computers, smartphones, vehicle control systems, etc.). Moreover, it may include sensors for sensing still image or video image data (image sensor, camera sensor, video sensor, ToF sensor, SPAD sensor, LiDAR sensor, etc.), for sensing a fingerprint, for sensing environmental parameters (e.g., radar, humidity, light, temperature), etc.

A non-line of sight object may for example be an object that is not directly visible in an image. For example, a non-line of sight object is not visible in primary rays. A non-line of sight object may for example be visible in reflections.

The circuitry may be configured to obtain the ToF information from a photon histogram captured by the ToF imaging sensor. For example, peaks in a photon histogram that relate to primary, secondary, and subsequent reflections may be identified. For example, the bins of a photon histogram may represent the time the photons have propagated from the illumination source to the scene and back to a ToF camera. Because the speed of light is constant in vacuum, the propagation time also represents the distance the light has propagated.

The circuitry may be configured to detect at least one of the presence, a position, and a velocity of the non-line of sight object. The circuitry may be configured to determine the state of a spot based on the detection of the non-line of sight object. The state of the spot may for example comprise information on whether or not the spot is empty. The spot may for example be a parking spot and the state of the spot may for example comprise information on whether or not the parking spot is empty.

In some embodiments, the ToF information comprises information on photon arrival times related to multi-path reflections.

The circuitry may be configured to obtain the model-based information by a raytracing process. For example, in 3D reconstruction, a field of normal vectors of the surfaces of the vehicles may be calculated. From the field of normal vectors, the orientation of the surfaces is known, which may serve as a basis for determining light reflections.

The model-based information may comprise predicted photon arrival times. The predicted photon arrival times may relate to at least one of secondary, tertiary, and higher order light reflections. The model-based information may comprise predictions on multi-path light reflections.

The circuitry may be configured to determine if the ToF information obtained from reflected light deviates from the model-based information. For example, model-based predicted positions of peaks of primary, secondary, and subsequent reflections in a photon histogram may be compared to the respective positions of peaks measured in a ToF histogram.
Here, the position in the photon histogram may correspond to an arrival time of photons. For example, if the comparison result is within a predefined threshold, it may be determined that a parking spot is empty. But if it is determined that the comparison result is outside the threshold, it may be determined that the parking spot is not empty, i.e., occupied or blocked.

The circuitry may be configured to determine if the position of a reflection indicated by the ToF information deviates from a model-based prediction. For a case in which a dToF camera is employed, the peaks in a histogram measured by the dToF camera are determined and compared to model predictions, which are based on predicted arrival times of the reflected light. For another case in which iToF is employed, the circuitry may be configured to determine if the measured iToF phasor is affected by scattering due to a longer reflection (e.g., empty parking space), and therefore deviates from a model-based prediction of the phasor, which is based on predicted arrival times of the reflected light.

The circuitry may be configured to perform the model-based prediction of light reflection based on a reconstructed 3D model of a captured scene. Relating depth information obtained from ToF measurements with a reconstructed model (i.e., a running 3D reconstruction) of a scene may comprise any processing performed on raw ToF measurements, such as processing raw measurements obtained from the sensor in a ToF Datapath. Relating depth information obtained from ToF measurements with a reconstructed model may also comprise transforming ToF measurements into a point cloud, registering the point cloud to the reconstructed model, and the like. The circuitry may be configured to reconstruct and/or update the model of the scene based on the depth information obtained from ToF measurements. The model of the scene may for example be updated based on point cloud information and/or registered point cloud information.

The circuitry may be configured to perform the model-based prediction of light reflection based on one or more vehicle models. For example, the vehicles within the field of view of a camera may be identified. This identifying of vehicles or other objects may use a point cloud or an RGB or grayscale image as input, 3D reconstruction techniques, and machine learning algorithms to classify areas of the images, parts of the point cloud, or parts of a reconstructed 3D model as vehicles. The modeled surfaces (and their orientation) of the vehicles comprised by the vehicle models may be used in raytracing-type processes to calculate the intersection points of the light beams in order to determine where the light beam is partly reflected and partly scattered. From the resulting light path, the distance that the light of the secondary and subsequent reflections traverses to reach those reflection points and to travel back to the ToF camera system may be calculated.

A vehicle model may model parts of a vehicle that are not visible to the ToF imaging sensor in imaging information obtained from primary reflections. For example, when a point cloud is captured from the position of a ToF camera system, there are shaded areas where objects hide other objects. Further, the opposite or turned-away side of objects can in general not be captured. According to the embodiments, surfaces of the vehicles which are not comprised by the point cloud as captured by the sensors are modeled.
The circuitry may be configured to determine a vehicle model based on a 3D model of a scene. A vehicle model may model parts of a vehicle that are not present in the 3D model of the scene. The circuitry may be configured to emit light, and to obtain the ToF information from at least one of secondary, tertiary, and higher order reflections of this emitted light. The circuitry may be configured to emit light and to model the light path of this emitted light in order to obtain the model-based prediction of light reflection.

The embodiments also disclose a method comprising determining a state of a parking spot based on a comparison of ToF information obtained from reflected light received by a ToF imaging sensor with model-based information obtained from a model-based prediction of light reflection. The embodiments also disclose a computer-implemented method and/or computer program comprising instructions which are configured to, when executed on a processor, perform the methods and processes described above. The embodiments also disclose a computer program product comprising instructions which are configured to, when executed on a processor, perform the methods and processes described above.

Fig.1 schematically shows a vehicle with a driver assistance system and ToF camera for parking spot detection. A vehicle 100 which comprises a driver assistance system for parking spot detection approaches three parked vehicles 101, 102 and 103. An empty parking spot is located in between the vehicles 102 and 103. A ToF camera 110 of the driver assistance system has a field of view 105 and the vehicles 101, 102 and 103 are within the field of view 105. A line of sight 111 which is tangent to the vehicle 102 demarcates a non-visible area 120 of the parking spot. The part 120 of the parking spot located above the line of sight 111 is not visible to the ToF camera system 110 and to the driver of the vehicle 100 due to vehicle 102. The ToF camera 110 can for example be an iToF (indirect Time of Flight) camera or a dToF (direct Time of Flight) camera. Because of the non-visible area 120, neither the ToF camera system 110 nor the driver of the vehicle 100 can determine via a direct line of sight whether the parking spot between vehicles 102 and 103 is empty or whether a vehicle or any other object is blocking the parking spot in the non-visible area 120. Thus, from the point of view of the ToF camera system 110 and the driver of the vehicle 100, the parking spot is potentially empty and will hereinafter be referred to as the potentially empty parking spot.

Driver assistance system for detecting a non-line-of-sight object

According to the embodiments described below, a driver assistance system is provided that detects non-line of sight (NLOS) objects based on a comparison of ToF information obtained from reflected light received by a ToF imaging sensor with model-based information obtained from a model-based prediction of light reflection. The embodiments described below relate, e.g., to the detection of non-line-of-sight objects with histogram analysis in an automotive context. For example, a driver assistance system may be configured to detect at least one of the presence, a position, and a velocity of an NLOS object. By detecting an NLOS object, the driver assistance system can for example determine the state of a spot, e.g., a parking spot.
For example, the state of the spot determined by the driver assistance system may comprise information on whether or not the spot is empty.

Fig.2 schematically shows an example of a configuration of a driver assistance system for parking spot detection. A scene 201 (e.g., the scenario with vehicles 101, 102, 103 described in Fig.1 above) is illuminated by a ToF camera 202 and the reflected light from the scene 201 is captured by the ToF camera 202. The ToF camera 202 can for example be an iToF (indirect Time of Flight) camera or a dToF (direct Time of Flight) camera. The ToF camera 202 comprises a ToF camera controller 202-1 which controls the operation of the illuminator and the sensor of the camera according to configuration modes which define configuration settings related to the operation of the imaging sensor and the illuminator (such as exposure time, or the like). The controller 202-1 provides the ToF measurements (e.g., SPAD histograms, iToF phase diagrams) to a ToF Datapath 202-2 which processes the ToF measurements into a ToF point cloud (defined e.g., in a 3D camera coordinate system). The ToF point cloud is a point representation of the ToF measurements which describes the current scene as viewed by the ToF camera. The ToF point cloud may for example be represented in a cartesian coordinate system of the ToF camera. This ToF point cloud obtained from the ToF Datapath is forwarded to a 3D reconstruction 204.

3D reconstruction 204 creates and maintains a three-dimensional (3D) model of the scene 201 based on technologies known to the skilled person, for example based on techniques described in more detail with regard to Fig.16 below. In particular, 3D reconstruction 204 comprises a pose estimation 204-1 which receives the ToF point cloud. The pose estimation 204-1 further receives auxiliary input from auxiliary sensors 203, and a current 3D model from a 3D model reconstruction 204-2. Based on the ToF point cloud, the auxiliary input, and the current 3D model, the pose estimation 204-1 applies algorithms to the measurements to determine the pose of the ToF camera (defined by e.g., position and orientation) in a global scene (“world”). Such algorithms may include, for example, the iterative closest point (ICP) method between point cloud information and the current 3D model, or a SLAM (Simultaneous Localization and Mapping) pipeline. Knowing the camera pose, the pose estimation 204-1 “registers” the ToF point cloud obtained from Datapath 202-2 to the global scene, thus producing a registered point cloud which represents the point cloud in the camera coordinate system as transformed into a global coordinate system (e.g., a “world” coordinate system) in which a model of the scene is defined. The registered point cloud obtained by the pose estimation 204-1 is forwarded to a 3D model reconstruction 204-2. The 3D model reconstruction 204-2 updates a 3D model of the scene based on the registered point cloud obtained from the pose estimation 204-1 and based on auxiliary input obtained from the auxiliary sensors 203. An exemplifying process of updating a 3D model is described in more detail with regard to Fig.16 below. The updated 3D model of the scene 201 may be stored in a 3D model memory (not shown in Fig.2) and is provided to NLOS object detection 205. The NLOS object detection 205 may for example determine the state of a parking spot based on the presence or non-presence of NLOS objects within the parking spot.
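By way of illustration only, the registration step performed by the pose estimation 204-1 can be sketched in Python as follows. The pose values, point coordinates and function names are hypothetical assumptions, not values from the disclosure; the sketch merely indicates how a camera-frame point cloud might be transformed into the world coordinate system once a pose (e.g., from ICP or a SLAM pipeline) is available.

```python
import numpy as np

def register_point_cloud(points_cam: np.ndarray,
                         rotation: np.ndarray,
                         translation: np.ndarray) -> np.ndarray:
    """Transform an Nx3 point cloud from the camera frame into the world
    frame using an estimated pose (rotation matrix R, translation t):
    p_world = R @ p_cam + t, applied to every point at once."""
    return points_cam @ rotation.T + translation

# Hypothetical example: three points seen by the ToF camera (metres)
points_cam = np.array([[0.0, 0.0, 2.0],
                       [0.5, 0.0, 2.5],
                       [0.0, 0.3, 3.0]])

# Hypothetical camera pose: 90 degree rotation about the vertical axis plus a translation
theta = np.pi / 2
rotation = np.array([[np.cos(theta), 0.0, np.sin(theta)],
                     [0.0,           1.0, 0.0],
                     [-np.sin(theta), 0.0, np.cos(theta)]])
translation = np.array([10.0, 0.0, 4.0])

registered = register_point_cloud(points_cam, rotation, translation)
print(registered)  # registered point cloud in the world coordinate system
```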
As described above, the pose estimation 204-1 and the 3D model reconstruction 204-2 obtain auxiliary input from auxiliary sensors 203. The auxiliary sensors 203 comprise a colour camera 203-1 which provides e.g., an RGB/LAB/YUV image of the scene 201, from which sparse or dense visual features can be extracted to perform conventional visual odometry, that is, determining the position and orientation of the current camera pose. The auxiliary sensors 203 may further comprise an event-based camera 203-2 providing e.g., high frame rate cues for visual odometry from events. The auxiliary sensors 203 may further comprise an inertial measurement unit (IMU) 203-3 which provides e.g., acceleration and orientation information that can be suitably integrated to provide pose estimates. These auxiliary sensors 203 gather information about the scene 201 in order to aid the 3D reconstruction 204 in producing and updating a 3D model of the scene (see Fig.3 and corresponding description).

The ToF camera 202 and the auxiliary sensors 203 described in Fig.2 above may for example be part of an outside-vehicle information detecting unit (7400 in Fig.17) including an imaging section (7410 in Fig.17). 3D reconstruction 204 and parking spot detection 205 may be implemented in one or more processors, e.g., processors (such as integrated control circuit 7600 in Fig.17) of a vehicle control system.

It should be noted that in the embodiment of Fig.2, parking spot detection is only mentioned as an exemplifying use of the detection of non-line of sight objects based on a comparison of ToF information obtained from reflected light received by a ToF imaging sensor with model-based information obtained from a model-based prediction of light reflection. In other embodiments, the detection of non-line of sight objects is used, for example, to observe incoming vehicles or objects in perpendicular streets where cars are not directly visible. Also, the absolute speed of objects can be tracked with secondary histogram peaks using, e.g., a Kalman filter, to determine, e.g., whether a car is on a collision course with the NLOS object. Another possible automotive application of the detection of non-line of sight objects is the detection of space to change lane. The technology disclosed in the embodiments may also be applied in a non-automotive context. For example, the technology may be used for identifying the inventory of a space such as a warehouse (with robots, drones), or in the context of shipping (where drones need to find a dedicated free spot).

ToF Datapath

A ToF Datapath (202-2 in Fig.2) is configured to receive camera raw data (ToF measurements) and to process this raw data further, e.g., into a ToF point cloud (defined e.g., in a 3D camera coordinate system). The ToF Datapath may for example include iToF related processing which generates, from iToF sensor data, depth frames used for 3D reconstruction. Still further, the ToF Datapath may comprise processing related to the generation, from dToF sensor data, of histograms used for NLOS detection. The ToF Datapath may also perform processing such as transforming a depth frame into a vertex map and normal vectors. The ToF Datapath may also comprise a sensor calibration block, which, by calibration, removes from the phases sources of systematic error such as temperature drift, cyclic error due to spectral aliasing on the return signal, and any error due to electrical non-uniformity of the pixel array.
Based on the phase value φ obtained from the measurements q1, …, qn at a pixel, the corresponding depth value d for the pixel is determined according to equation Eq.1 as follows:

d = (c · φ) / (4π · f_mod) (Eq.1)

with f_mod being the modulation frequency of the emitted signal and c being the speed of light. For each frame k, from the depth measurement d_k for each pixel a three-dimensional coordinate within the camera coordinate system is determined, which yields a ToF point cloud for the current frame k. Further, the ToF Datapath 202-2 may comprise filters that improve the signal quality and mitigate errors on the point cloud, such as ToF data denoising, removal of pixels incompatible with the viewpoint (e.g., “flying” pixels between foreground and background), and removal of multipath effects such as scene, lens, or sensor scattering.

3D Reconstruction

3D reconstruction 204 of Fig.2 receives ToF point clouds and produces a 3D model of the scene 201 while simultaneously tracking the ToF camera’s motion (i.e., the ToF camera’s current pose). This problem is also known to the skilled person as “Simultaneous Localization and Mapping”. Several methods exist to solve this, for example Extended Kalman Filter based SLAM, Parallel Tracking and Mapping, or the like. An overview of different SLAM methods is for example given in the paper C. Cadena et al., “Past, Present, and Future of Simultaneous Localization and Mapping: Towards the Robust-Perception Age,” IEEE Transactions on Robotics, vol.32, no.6, pp.1309–1332, 2016. Auxiliary sensor data (e.g., from the auxiliary sensors 203 of Fig.2) may optionally be used at several stages to improve the 3D model reconstruction. The main use may be the providing of additional data streams that can be used to refine or optimize the quality of the pose estimation (204-1 in Fig.2), by fusing diverse cues and complementary features in the sensor data. For example, the extraction of sparse features from RGB frames may be used to perform visual odometry by finding feature correspondences in consecutive frames. Therefore, sensor data may be used jointly to estimate a single pose in the pose estimation (for example an ICP method or a SLAM pipeline). The auxiliary sensor unit and the ToF system may operate in sensor fusion camera kits for a specified target use-case. 3D Model Reconstruction 204-2 provides the updated 3D model to parking spot detection 205 of a driver assistance system.

Fig.3 schematically shows an example of a point cloud as produced by a ToF Datapath of a driver assistance system for parking spot detection within a parking garage. The point cloud 301 depicts a parking garage with parking spots and pillars on the left and the right side as well as a central driving lane 305. The point cloud may for example be obtained by the ToF Datapath (see 202-2 in Fig.2) of a driver assistance system for parking spot detection as described in Fig.2 above. Two vehicles 302 and 304 as well as a pillar 303 between the two vehicles 302 and 304 are depicted. The distance in between the two vehicles 302 and 304 is large enough for a potentially empty parking spot. However, both the vehicle 304 and the pillar 303 obstruct the view into the potentially empty parking spot. No (direct) line of sight allows determining whether the parking spot is actually empty. Based on this point cloud shown in Fig.3, 3D model reconstruction (204-2 in Fig.2) can determine a 3D model of the scene (here, the parking garage) and provide this 3D model to a parking spot detection (205 in Fig.2).
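Returning to Eq.1 above, the per-pixel phase-to-depth conversion performed in the ToF Datapath can be illustrated with the following short Python sketch; the modulation frequency and the phase value are placeholders chosen only for illustration.

```python
import math

C = 299_792_458.0  # speed of light in m/s

def phase_to_depth(phi: float, f_mod: float) -> float:
    """Convert a measured iToF phase phi (radians) into a depth value
    according to Eq.1: d = c * phi / (4 * pi * f_mod)."""
    return C * phi / (4.0 * math.pi * f_mod)

# Hypothetical example: 20 MHz modulation frequency, phase of pi/2 radians
f_mod = 20e6
phi = math.pi / 2
print(phase_to_depth(phi, f_mod))  # approximately 1.87 m
```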
3D Model

Fig.4 shows an example of a 3D model of a detail of a scene as produced by 3D reconstruction such as described in Fig.2 above. The 3D model is implemented as a triangle mesh grid 401. This triangle mesh may be a local or global three-dimensional triangle mesh. In alternative embodiments a 3D model may also be described by a local or global voxel representation of a point cloud (uniform or octree); a local or global occupancy grid; a mathematical description of the scene in terms of planes, statistical distributions (e.g., Gaussian mixture models), or similar attributes extracted from the measured point cloud. In another embodiment a model may be characterized as a mathematical object that fulfills one or more of the following aspects: it is projectable to any arbitrary view, it can be queried for nearest neighbors (closest model points) with respect to any input 3D point, it computes distances with respect to any 3D point cloud, it estimates normals, and/or it can be resampled at arbitrary 3D coordinates. The model may for example be implemented as a triangle mesh grid (e.g., a local or global three-dimensional triangle mesh), a local or global voxel representation of a point cloud (uniform or octree), a local or global occupancy grid, a mathematical description of the scene in terms of planes, statistical distributions (e.g., Gaussian mixture models), or similar attributes extracted from the measured point cloud. The model is typically constructed progressively by fusing measurements from available data sources, e.g., including but not limited to depth information, color information, inertial measurement unit information, and event-based camera information. Based on the surface elements defined by the 3D model (e.g., the elements of the mesh grid), normal vectors of the surface elements can be calculated, which can be used for purposes of raytracing processes (see Fig.8 below), or the like, used in the parking spot detection described in the embodiments below in more detail. Classification techniques are known to the skilled person which structure a detected scene into different objects, e.g., cars, street markings, walls, etc. These classification techniques can be for example pattern matching and might be based on manual feature extraction such as a histogram of oriented gradients. Further techniques can use convolutional neural networks, deep learning in general, or a “You Only Look Once” classifier.

Multi-path interference (MPI)

It is known that the light reflected from the scene includes so-called multipath interference (MPI). Multipath Interference (MPI) is caused by multiple paths from the illumination source to the same pixel. MPI is typically one of the most significant error sources in ToF depth measurements. However, as explained with reference to the embodiments described below in more detail, information from multi-path interference can be used in a beneficial way to enhance the evaluation of a detected scene. In particular, the embodiments described below evaluate the information obtained from multi-path interference caused by light reflections on object surfaces.

Parking Spot Detection

Based on a 3D model provided by a 3D model reconstruction (e.g., 204-2 in Fig.2), a parking spot detection (205 in Fig.2) can determine relevant surfaces, parking orientations and distances between vehicles (e.g., 102, 103 in Fig.1).

Fig.5 schematically shows lines of sight of a driver assistance system for parking spot detection.
Depicted is the same parking spot detection situation as described in Fig.1. A ToF camera 110 of the driver assistance system has a field of view 105 and the vehicles 101, 102 and 103 are within the field of view 105. A line of sight 111a intersects with vehicle 101 within the field of view 105 of the ToF camera 110 and contributes to a determination of a surface model of vehicle 101. Further, a line of sight 111b intersects with vehicle 102 and contributes to a determination of a surface model of vehicle 102. Because line of sight 111b intersects with vehicle 102 and is blocked by vehicle 102, it does not reach the non-visible area 120 in the empty parking spot between vehicles 102 and 103. Further, lines of sight 111c and 111d intersect with vehicle 103 and contribute to a determination of a surface model of vehicle 103. Line of sight 111c hits vehicle 103 in a region of interest 115 (ROI) on vehicle 103. Such a path of secondary and subsequent reflections is further explained with reference to Fig.6.

Fig.6 schematically shows a vehicle with a driver assistance system for parking spot detection based on a measurement of multi-path reflections in the case that a parking spot is empty. Depicted is the parking spot detection situation of Fig.5 with an illumination and back reflection, back scattering light beam path indicated by arrows 106, 107 and 108. Within a ROI 115, illumination beam 106 hits the surface of vehicle 103 and a part of the illumination beam 106 is reflected according to the laws of reflection, wherein the incoming beam 106 and the reflected beam 107 have the same angle to the surface normal 112 of the vehicle surface in ROI 115. Part of the illumination beam 106 is scattered, at the surface of vehicle 103 in ROI 115, back to the ToF camera 110 (following the path of illumination beam 106), is captured by the sensor of ToF camera 110, and produces a peak (see 911 in Fig.9a) in the ToF histogram at a position that corresponds to the travelling distance of the light. The reflected beam 107 hits the surface of vehicle 102 and a part of the reflected beam 107 is reflected according to the laws of reflection to generate reflected beam 108. Part of the reflected beam 107 is scattered, at the surface of vehicle 102, back to the ToF camera 110 (following the path of reflected beam 107 and illumination beam 106), is captured by the sensor of ToF camera 110, and produces another peak (see 912 in Fig.9a) in the ToF histogram at a position that corresponds to the travelling distance of the light. The reflected beam 108 hits the surface of vehicle 103 and a part of the reflected beam 108 is scattered, at the surface of vehicle 103, back to the ToF camera 110 (following the path of reflected beam 108, reflected beam 107, and illumination beam 106), is captured by the sensor of ToF camera 110, and produces another peak (see 913 in Fig.9a) in the ToF histogram at a position that corresponds to the travelling distance of the light. As, in Fig.6, the non-visible area 120 corresponds to an empty parking spot, the light beams 106, 107 and 108 can pass through the non-visible area 120 unimpeded. If, on the other hand, the space between vehicles 102 and 103 were not an empty parking spot but occupied by another vehicle, the light beams 106, 107 and 108 could not pass through the non-visible area 120 unimpeded and would be blocked by the other vehicle. The path of the light beam along the arrows 106, 107 and 108 is both modeled and measured.
In modeling the light beam along the arrows 106, 107 and 108, the shape of parts of the vehicles 102 and 103 can be assumed by identifying, e.g., make and model, and by filling in gaps with predefined models that are oriented the same way the measured vehicles are oriented.

Fig.7a schematically shows a first example of a derivation, by the driver assistance system for parking spot detection, of vehicle models of respective parked vehicles from a reconstructed 3D model. Fig.7a depicts the surface model M1 of a first vehicle (102 in Fig.6) as determined by a 3D model reconstruction (204-2 in Fig.2). It further depicts the surface model M2 of a second vehicle (103 in Fig.6) as determined by 3D model reconstruction. The surface models M1 and M2 comprise the respective visible parts of the two vehicles in a scene captured by a ToF camera. From the models M1 and M2 of the visible surfaces of the vehicles, respective vehicle models VM1 and VM2 are created. The vehicle models VM1 and VM2 are models of the complete vehicle shapes, including the non-visible surfaces of the vehicles. That is, with the vehicle models VM1 and VM2 the surfaces F1 and F2 which face each other in between the two vehicle models VM1 and VM2 are known. Surface F1 may for example correspond to the surface of the first vehicle (102 in Fig.6) facing an empty parking spot (see Fig.6), and surface F2 may for example correspond to the surface of the second vehicle (103 in Fig.6) facing the empty parking spot. In the embodiment of Fig.7a, the vehicle models VM1 and VM2 are schematic box models which provide a simplified geometrical shape of a vehicle. The dimensions of the respective sides of the boxes correspond to the dimensions of the vehicles as determined from the surface models M1 and M2 of the vehicles. The vehicle models may also comprise information concerning the position and the orientation of the vehicles, e.g., in a global coordinate system.

Fig.7b schematically shows a second example of a derivation, by the driver assistance system for parking spot detection, of vehicle models of respective parked vehicles from a reconstructed 3D model. Depicted are the positions of wheels W1 and W2 of a first vehicle (102 in Fig.6) as determined by a 3D model reconstruction (204-2 in Fig.2). An axle line A1 from the center of wheel W1 to the center of wheel W2 is determined. From axle line A1, a surface F1 of a vehicle model VM1 is derived. The surface F1 is determined as the surface that is perpendicular to the axle line A1 and intersecting the outer surface of wheel W2. Further depicted are the positions of wheels W3 and W4 of a second vehicle (103 in Fig.6) as determined by 3D model reconstruction (204-2 in Fig.2). An axle line A2 from the center of wheel W3 to the center of wheel W4 is determined. From axle line A2, a surface F2 of a vehicle model VM2 is derived. The surface F2 is determined as the surface that is perpendicular to the axle line A2 and intersecting the outer surface of wheel W3. In this way, the surface F1 intersects with the wheel W2 on the side that is facing away from wheel W1 and towards the vehicle model VM2, and the surface F2 intersects with the wheel W3 on the side that is facing away from wheel W4 and towards the vehicle model VM1. Surface F1 may for example correspond to the surface of the first vehicle (102 in Fig.6) facing an empty parking spot (see Fig.6), and surface F2 may for example correspond to the surface of the second vehicle (103 in Fig.6) facing the empty parking spot.
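The geometric construction of Fig.7b can be illustrated with the following minimal Python sketch, which derives a face plane perpendicular to the axle line and touching the outer wheel surface. The wheel coordinates, the wheel radius and all function names are illustrative assumptions and are not taken from the disclosure.

```python
import numpy as np

def face_from_wheels(w_inner: np.ndarray, w_outer: np.ndarray,
                     wheel_radius: float):
    """Derive a vehicle-model face as a plane perpendicular to the axle
    line (w_inner -> w_outer) and intersecting the outer wheel surface.
    Returns the plane as (unit normal, point on the plane)."""
    axle = w_outer - w_inner
    normal = axle / np.linalg.norm(axle)      # face normal along the axle line
    point = w_outer + normal * wheel_radius   # point on the outer wheel surface
    return normal, point

# Hypothetical wheel centres (metres, world coordinates) of vehicle model VM1
w1 = np.array([0.0, 0.3, 5.0])   # wheel W1
w2 = np.array([2.6, 0.3, 5.0])   # wheel W2 (towards the parking spot)
normal_f1, point_f1 = face_from_wheels(w1, w2, wheel_radius=0.3)
print(normal_f1, point_f1)       # face F1 of vehicle model VM1
```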
Fig.8 schematically shows a process of determining, by means of raytracing, a travel time of an illumination light beam, which is reflected between two vehicles based on vehicle models as the objects in the scene. The process described in Fig.8 is a schematic description of ray tracing technology well-known to the skilled person and frequently used in 3D computer graphics. In this example, ray tracing is used as a technique for modeling light transport for use in a rendering algorithm which implements the driver assistance system for parking spot detection as described in the Figures above. In Fig.8, the vehicle models VM1 and VM2 as derived with reference to the process of Fig.7a or 7b are depicted. These vehicle models VM1 and VM2 are models of the complete vehicle shapes, including the non-visible surfaces of the vehicles. That is, with the vehicle models VM1 and VM2 the surfaces F1 and F2 which face each other in between the two vehicle models VM1 and VM2 are known. Surface F1 corresponds to the surface of a first vehicle (102 in Fig.6) facing an empty parking spot (see Fig.6), and surface F2 corresponds to the surface of a second vehicle (103 in Fig.6) facing the empty parking spot. Point PToF is the position of the ToF camera (see 110 in Fig.6), e.g., in a global coordinate system. Point PROI is the position of a ROI (see 115 in Fig.6), e.g., in the global coordinate system. Starting from the position PToF of an illuminator in a ToF camera, the illumination light emitted by the ToF camera travels along light path l1 (“primary ray”) to the position PROI where the illumination beam hits surface F2. The direction of the light path l1 may for example be defined by a respective pixel of the ToF sensor which corresponds to a region of interest ROI. The illumination light that hits surface F2 at PROI generates (by means of diffuse reflection and specular reflection) reflected light. Part of this reflected light returns to the ToF camera along light path l1 and produces events in a pixel of the image sensor at an arrival time that depends on the travelling time (as determined by the travelling distance along light path l1, and the speed of light). Another part of the illumination light that hits surface F2 at point PROI is mirror reflected at point PROI and continues along light path l2 (“secondary beam”). According to the well-known principles of optics, the direction of light path l2 depends on the surface normal N1 of surface F2 at point PROI and the incident angle α of light path l1 on surface F2. Light path l2 is traced back to the position P1 where it hits surface F1. The illumination light that hits surface F1 at P1 generates (by means of diffuse reflection and specular reflection) reflected light. Part of this reflected light returns to the ToF camera along light paths l2 and l1 and produces events in the pixel of the image sensor at an arrival time that depends on the travelling time (as determined by the travelling distance along light paths l1 and l2, and the speed of light). Another part of the illumination light that hits surface F1 at point P1 is mirror reflected at point P1 and continues along light path l3 (“tertiary beam”). The direction of light path l3 depends on the surface normal N2 of surface F1 at point P1 and the incident angle α of light path l2 on surface F1. Light path l3 is traced back to the position P2 where it hits surface F2. The illumination light that hits surface F2 at P2 generates (by means of diffuse reflection and specular reflection) reflected light. 
Part of this reflected light returns to the ToF camera along light paths l3, l2 and l1 and produces events in the pixel of the image sensor at an arrival time that depends on the travelling time (as determined by the travelling distance along light paths l1, l2, and l3, and the speed of light). At this stage, the recursive raytracing process ends as a predefined maximum number of reflections has been reached. By means of this raytracing technique, multiple arrival times of light are predicted that appear as peaks in a dToF photon histogram obtained from a pixel of the dToF camera, in a case where a dToF camera is employed. As in the further processing of the histograms (see Fig.9a and corresponding description) only the arrival times of photons are evaluated, the amount of light collected by the pixel is not essential, so that the raytracing algorithm does not necessarily need to consider details such as a determination of the intensity of light according to principles of specular reflection (e.g., Phong shading) and diffuse reflection (e.g., according to Lambertian shading). In general, in raytracing, each pixel of the sensor may define a respective primary ray, and each primary ray of the image may be traced and used for detecting non-line-of-sight objects. To reduce the computational efforts, a region of interest (ROI) may be selected (see also Figs.5 and 6 above) and the raytracing process and photon histogram analysis may be limited to those primary rays that relate to the ROI (e.g., hit the ROI). A region of interest, ROI, may for example be selected on the basis of the reflectivity and normal vector of the pixels of car 103 that are in the FOV. In other embodiments, the region that delimits the car body (i.e., potential ROI candidate pixels) can be identified using the classifiers mentioned above to label every pixel. For example, those pixels that are classified as tires may be disregarded, but pixels that relate to the car chassis may be included into the ROI. The choice of ROI has the constraint that a non-specular point must be chosen for both iToF and dToF measurements. For example, in the parking spot detection shown in Fig.5, the reflectivity in a ROI should preferably be such that it is maximal for the car (meaning that the rays will bounce on a highly specular material, with very high intensity, as opposed to the tires for instance), and the normal vector of the selected pixels shall be such that the secondary reflections intersect car 102 or the object in view (e.g., a pillar such as in Fig.3 or another obstacle like car 104 in Fig.10). In this manner, when determining a ROI, those rays are selected that are more likely to contain strong secondary or subsequent peaks, therefore leading to a good signal to noise ratio, e.g., a good histogram signature in dToF or a high scattering phasor in iToF. An example of such an area could be a 10 x 10 pixel region on the back door of car 103 in Fig.5. To the contrary, reflections that will not intersect car 102 (e.g., primary rays that are reflected from the backside of car 103 back to the street) or are otherwise not useable (e.g., primary rays that hit a black tire) may be abandoned. In the case in which an iToF camera is employed as the ToF camera, the multiple predicted arrival times of light can be used to calculate and thus predict multiple phasors of the iToF measurement.
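The raytracing-based prediction of photon arrival times described with reference to Fig.8 could, for example, be sketched as follows in Python. The plane geometry, the camera position and the maximum number of reflections are illustrative assumptions, not values from the disclosure. The resulting round-trip times play the role of the predictions 901, 902 and 903 in Fig.9a and, as discussed next, can also serve as input to the iToF phasor prediction.

```python
import numpy as np

C = 299_792_458.0  # speed of light in m/s

def intersect_plane(origin, direction, plane_point, plane_normal):
    """Return the intersection point of a ray with a plane (assumes they intersect)."""
    t = np.dot(plane_point - origin, plane_normal) / np.dot(direction, plane_normal)
    return origin + t * direction

def reflect(direction, normal):
    """Mirror-reflect a direction vector about a unit surface normal."""
    return direction - 2.0 * np.dot(direction, normal) * normal

def predict_arrival_times(p_tof, primary_dir, faces, max_bounces=3):
    """Trace a primary ray from the camera position p_tof and return the predicted
    round-trip arrival times of the primary, secondary, ... reflections.
    `faces` is the bounce sequence of (plane_point, plane_normal) pairs,
    e.g. [F2, F1, F2] as in Fig.8."""
    times, origin, direction, path = [], np.asarray(p_tof, float), primary_dir, 0.0
    for bounce in range(max_bounces):
        point, normal = faces[bounce]
        hit = intersect_plane(origin, direction, point, normal)
        path += np.linalg.norm(hit - origin)   # one-way path length so far
        times.append(2.0 * path / C)           # light returns along the same path
        direction = reflect(direction, normal) # continue with the mirrored ray
        origin = hit
    return times

# Hypothetical geometry loosely following Fig.8: faces F2 and F1 face each
# other across a 2.5 m wide (empty) parking spot.
F2 = (np.array([0.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0]))   # side of vehicle 103
F1 = (np.array([0.0, 2.5, 0.0]), np.array([0.0, -1.0, 0.0]))  # side of vehicle 102
p_tof = np.array([0.0, 2.0, -5.0])                            # ToF camera position
primary_dir = np.array([0.0, -2.0, 5.0])                      # towards the ROI on F2

print(predict_arrival_times(p_tof, primary_dir, [F2, F1, F2]))  # times in seconds
```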
A phase can be determined between a modulated light signal that is sent out by the iToF camera to illuminate the vehicles (102 and 103 in Fig.5) and the reflected light with the same modulation. Using the arrival times or phase differences of the reflections and the speed of light, the distances can be predicted. The modulated reflected light that returns to the iToF camera along light path (l1 in Fig.8) has a phase difference to the emitted light that depends on the travelling time (as determined by the travelling distance along light path l1, and the speed of light). From this phase difference a phasor (921 in Fig.9b) in the phasor diagram can be calculated. Further, the modulated reflected light that returns to the iToF camera along light paths l1 and l2 has a phase difference to the emitted light that depends on the travelling time (as determined by the travelling distance along light paths l1 and l2, and the speed of light). From this phase difference a phasor (922 in Fig.9b) in the phasor diagram can be calculated. Therefore, the above prediction of the arrival times of the reflected light can be used to predict measurement results of both an iToF camera and a dToF camera if the spot (parking spot) is empty.

Fig.9a schematically shows an example of predicted photon arrival times and a measured dToF histogram for the case that a parking spot is empty. The dToF histogram 900 depicts the predicted times of arrival (dashed lines) and the measured ToF signal (solid lines) over time. This dToF histogram 900 is measured in a case in which the ToF camera is a direct ToF camera. The ToF histogram 900 depicts a prediction 901 for the arrival time of photons that correspond to a primary reflection (traveled twice the light path l1 in Fig.8) as predicted by the raytracing process of Fig.8. The ToF histogram 900 further depicts a prediction 902 for the arrival time of photons that correspond to a secondary reflection (traveled twice the light paths l1 and l2 in Fig.8) as predicted by the raytracing process of Fig.8. The ToF histogram 900 further depicts a prediction 903 for the arrival time of photons that correspond to a tertiary reflection (traveled twice the light paths l1, l2 and l3 in Fig.8) as predicted by the raytracing process of Fig.8. Also depicted in Fig.9a are the captured peaks of a primary reflection 911, a secondary reflection 912 and a tertiary reflection 913 as captured by the ToF camera (110 in Fig.6) in the situation depicted in Fig.6. In the example of Fig.9a, the measured peaks 911, 912, and 913 are located at the respective predicted times of arrival 901, 902, and 903. The fact that, in the example of Fig.9a, the measured peaks 911, 912, and 913 are located at the respective predicted times of arrival 901, 902, and 903 is based on the aspect that, in the situation of Fig.6 to which the histogram of Fig.9a refers, the parking spot is empty so that the light beams can freely travel between the surfaces of the vehicles (see 102 and 103 in Fig.6) – which is also the assumption made in the raytracing algorithm described in Fig.8.

Fig.9b schematically shows an example of predicted phasors and a measured iToF phasor diagram for the case that a parking spot is empty. Depicted is a phasor diagram for the situation in Fig.6 and Fig.8, measured in a case in which the ToF camera is an indirect ToF camera.
The iToF phasor diagram 920 depicts the predicted phases (dashed lines) derived from the predicted times of arrival (Fig.8) and a measured phasor (solid arrow) derived from the phase of the iToF signal (measured as in Fig.6). Since the iToF camera can only measure one phase, the measured phase Φ is the phase of a phasor 914 derived by vector addition of the phasors 911 and 912 of the reflected signals of a primary and secondary reflection. The phase 921, as represented by the dashed line, can be predicted from the time delay between the modulated photon signal that is received and the modulated photon signal when it is sent out by the iToF camera. The phase 921 corresponds to a primary reflection (traveled twice the light path l1 in Fig.8) as predicted by the raytracing process of Fig.8. The time delay between sending the photons and the arrival time of the photons, as well as the frequency of the modulated photon signal sent out by the iToF camera, can be converted into the phase 921 which is represented by the dashed line. The dashed line of phase 921 represents the orientation of a phasor if the primary reflection (traveled twice the light path l1 in Fig.8) could be measured independently from the secondary reflection (traveled twice the light paths l1 and l2 in Fig.8). Further, the phase 922, as represented by the dashed line, can be predicted from the time delay between the modulated photon signal that is received and the modulated photon signal when it is sent out by the iToF camera. The phase 922 corresponds to a secondary reflection (traveled twice the light paths l1 and l2 in Fig.8) as predicted by the raytracing process of Fig.8. The phase 922 can be determined in the same way as phase 921. The dashed line of phase 922 represents the orientation of a phasor if the secondary reflection (traveled twice the light paths l1 and l2 in Fig.8) could be measured independently from the primary reflection (traveled twice the light path l1 in Fig.8).

Also depicted in Fig.9b is the measured phasor 914 which can be derived by a vector addition of the phasors 911 and 912 of the primary and secondary reflections in the situation depicted in Fig.6, if these could be measured separately. Since the phasors 911 and 912 cannot be measured separately, they are depicted as dashed arrows and only used to explain the derivation of the measured phasor 914. Further, the phases of 911 and 914 in Fig.9b can be deduced (predicted) from the faces F1 and F2 in Fig.7a. To calculate the position of F1 and F2, one might detect parts that are minimally affected by multipath (e.g., wheels), calculate their position in space and deduce the position of the faces on the basis of, e.g., the car model. The phasor 915 corresponds to the phasor 912, but its starting point is placed at the endpoint of the phasor 911. The starting point of phasor 911 and the end point of phasor 915 correspond to the starting and end point of the measured phasor 914, signifying the vector addition of the phasors 911 and 912. The phase Φ of the measured phasor 914 has to be assigned either to a measured signal of a primary and secondary reflection as depicted in Fig.6 and Fig.8, when the parking spot is empty, or to a primary reflection overlaid with a reflection on an object in the parking spot which shortens the light path of the secondary reflection (see Fig.12b). In the example of Fig.9b, the phasors 911 and 912 are located at the respective predicted phasors 921 and 922.
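A minimal Python sketch, under assumed values, of how predicted phasors such as 921 and 922 could be derived from predicted round-trip travel times and combined by vector addition into a single expected measurement phasor (cf. phasor 914). The modulation frequency, the amplitudes and the round-trip times (reused from the hypothetical raytracing sketch above) are illustrative placeholders and are not specified by the disclosure.

```python
import cmath
import math

def phasor_from_travel_time(travel_time: float, f_mod: float,
                            amplitude: float) -> complex:
    """Represent one reflection as a complex phasor whose phase is the
    delay of the modulated signal: phi = 2 * pi * f_mod * travel_time."""
    phi = 2.0 * math.pi * f_mod * travel_time
    return amplitude * cmath.exp(1j * phi)

# Hypothetical inputs: round-trip times predicted by raytracing (seconds)
f_mod = 20e6
t_primary, t_secondary = 35.9e-9, 80.8e-9
# Amplitudes are placeholders; later reflections are weaker (inverse square law)
p_primary = phasor_from_travel_time(t_primary, f_mod, amplitude=1.0)
p_secondary = phasor_from_travel_time(t_secondary, f_mod, amplitude=0.3)

# The camera measures only the sum of the contributions (cf. phasor 914)
p_measured_expected = p_primary + p_secondary
print(cmath.phase(p_primary), cmath.phase(p_secondary), cmath.phase(p_measured_expected))
```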
The fact that, in the example of Fig.9b, the phasors 911 and 912 are located at the respective predicted phasors 921 and 922 is based on the aspect that, in the situation of Fig.6 to which the phasor diagram of Fig.9b refers, the parking spot is empty so that the light beams can freely travel between the surfaces of the vehicles (see 102 and 103 in Fig.6) – which is also the assumption made in the raytracing algorithm described in Fig.8. Herein, being located at the respective predicted phasors 921 and 922 refers to having the same phase as the respective predicted phasors 921 and 922. In the example of Fig.9b, the measured phasor 914 deviates from the predicted phase 921 of the primary reflection, so it can be assumed that the light beam can traverse the light path (107 in Fig.6 and l2 in Fig.8) freely and that the parking spot is empty. If the light beam could not traverse the light path (107 in Fig.6 and l2 in Fig.8) freely and the parking spot were not empty, the measured phasor 914 would deviate less from the predicted phase 921, as will be discussed with reference to Fig.12b. This deviation can be classified using thresholds or a hypothesis test, e.g., as described with reference to Fig.19. For this method of measurement and evaluation of the light beams, the scattering due to multi-path is more important when the parking spot is free. In the case of an occupied parking spot, the intensity, which corresponds to the norm of phasor 912, is expected to be higher (inverse square law of the traversed distance of the light beam), but the phase difference between phasor 911 and phasor 912 is minimal, because the distance 107 in Fig.11 is shorter than the distance 107 in Fig.6. The choice of the ROI should be the same as for the case in which a dToF camera is employed (Fig.9a), with the constraint that a non-specular point must be chosen. The choice of the ROI is of high importance, to detect the signature of a hidden object inside the iToF phasor diagram of the pixels that are most affected by multipath. In fact, 912 is generated not only by l2, but also by l3 and all subsequent reflections. However, due to the inverse square law of the traversed distance of the light beam, the contribution of later reflections becomes smaller and more negligible. In contrast, a parking spot detection situation where the potentially empty spot is actually not empty is explained with reference to Fig.10.

Fig.10 schematically shows a vehicle with a driver assistance system and ToF camera for parking spot detection when the parking spot is not empty. Depicted is the parking spot detection situation of Fig.1. However, the potentially empty parking spot between the vehicles 102 and 103 is blocked by a vehicle 104 parked further to the front. Neither the driver of vehicle 100 nor the ToF camera 110 has a direct line of sight to the vehicle 104. This is signified by the fact that the line of sight 111, which is tangential to vehicle 102, does not intersect with vehicle 104. Thus, the vehicle 102 blocks the view of every object in the non-visible area 120 above the line of sight 111. This could lead the driver to assume that the parking spot might be empty, when in fact it is taken by another parked car. Hence, the driver assistance system for parking spot detection needs to determine that the parking spot is not empty. The measurement situation for determining whether the parking spot is empty is depicted in Fig.11.
Fig.11 schematically shows a vehicle with a driver assistance system for parking spot detection based on a measurement of multi-path reflections when the parking spot is not empty. The scene of Fig.11 corresponds to that of Fig.6. However, other than in Fig.6, where a parking spot between vehicles 102 and 103 is empty, a vehicle 104 is parked in the parking spot between vehicles 102 and 103. An illumination light beam 106 (primary beam) is reflected at the surface of vehicle 103 in the ROI 115 and generates reflected light beam 107 (secondary beam). The light beam 107 hits vehicle 104 (instead of the vehicle 102 in Fig.6). The light beam 107 is reflected at vehicle 104 and a diffusely reflected part travels back to the ToF camera via the light paths of beams 107 and 106. The distance the light beam diffusely reflected at vehicle 104 travels is shorter than the predicted light paths (see l1 and l2 in Fig.8).

Fig.12a schematically shows an example of predicted photon arrival times and a measured dToF histogram for the case that a parking spot is not empty. This dToF histogram 1200 is measured in a case in which the ToF camera is a direct ToF camera. The prediction of the times of arrival in Fig.12a is based on the assumption that the parking spot is empty and the light beams (light paths l2 and l3 in Fig.8) can freely traverse the empty parking spot. Consequently, the predicted peaks are the same as in Fig.9a. Therefore, the ToF histogram 1200 depicts a prediction 901 for the arrival time of photons that correspond to a primary reflection (traveled twice the light path l1 in Fig.8) as predicted by the raytracing process of Fig.8. The ToF histogram 1200 further depicts a prediction 902 for the arrival time of photons that correspond to a secondary reflection (traveled twice the light paths l1 and l2 in Fig.8) as predicted by the raytracing process of Fig.8. The ToF histogram 1200 further depicts a prediction 903 for the arrival time of photons that correspond to a tertiary reflection (traveled twice the light paths l1, l2 and l3 in Fig.8) as predicted by the raytracing process of Fig.8. Also depicted in Fig.12a are the captured peaks of a primary reflection 911 and a secondary reflection 912 as captured by the ToF camera (110 in Fig.11) in the situation depicted in Fig.11. The peak of the primary reflection 911 matches the prediction 901 for the arrival time of photons. However, the captured peak of the secondary reflection 912 is measured at an earlier time than the prediction 902 for the arrival time of photons. This is due to the fact that the distance the light beam 107 diffusely reflected at vehicle 104 travels is shorter than the predicted light path (see l1 and l2 in Fig.8). Therefore, it is determined that the parking spot is not empty. This could also be determined by comparing a captured peak of a tertiary reflection (not shown) to the prediction 903 for the arrival time of photons that correspond to a tertiary reflection, or by means of further reflections if those are predicted and measurable. Further, for the determination of whether the measured peaks are captured at the corresponding predicted times, threshold values may be employed. These threshold values may define how much earlier than the corresponding predicted time a measured peak can be captured to still be accepted as a match for the predicted time. These threshold values can also define a symmetric or asymmetric interval (in both directions) around the corresponding predicted time.
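The threshold-based comparison of measured and predicted peak positions described above might, for example, look like the following sketch. The tolerance values, the example times and the helper name are assumptions made only for illustration; the "inconclusive" branch for late peaks is likewise an illustrative choice.

```python
def classify_parking_spot(measured_peak_times, predicted_times,
                          early_tolerance=2e-9, late_tolerance=1e-9):
    """Compare measured dToF peak arrival times (seconds) against the
    model-based predictions for an empty spot. A measured peak arriving
    clearly earlier than its prediction indicates that the corresponding
    light path was shortened by an object, i.e. the spot is not empty."""
    for measured, predicted in zip(measured_peak_times, predicted_times):
        if measured < predicted - early_tolerance:
            return "occupied"      # reflection came back too early
        if measured > predicted + late_tolerance:
            return "inconclusive"  # does not match the modelled path either
    return "empty"

# Hypothetical values: secondary peak arrives roughly 10 ns earlier than predicted
predicted = [35.9e-9, 80.8e-9, 125.7e-9]
measured = [35.9e-9, 70.5e-9]
print(classify_parking_spot(measured, predicted))  # -> "occupied"
```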
In some embodiments, the performing of the process described above with regard to Fig.12a may be limited to those pixels/histograms that relate to a region of interest identified in the scene according to the principles described with regard to Fig.8 above. That is, only those histograms are evaluated which are of interest for finding NLOS objects related to the task (e.g., determining if a parking spot is empty, or not). In this way, only those histograms may be evaluated which are more likely to contain strong secondary or subsequent peaks. Fig.12b schematically shows an example of predicted phasors and a measured iToF phasor diagram for the case that a parking spot is not empty. Since the iToF camera can only measure one phase, the measured phase Φ is the phase of a phasor 1214 derived by vector addition of the phasors 1211 and 1212 of the reflected signals of a primary and secondary reflection. Depicted is a phasor diagram 1220 for the situation in Fig.11 measured in a case in which the ToF camera is an indirect ToF camera. The prediction of the times of arrival and phases 921 and 922 in Fig.12b is based on the assumption that the parking spot is empty and that the light beams (light path l2 in Fig.8) can freely traverse the empty parking spot. Consequently, the predicted phases are the same as in Fig.9b. Also depicted in Fig.12b is the measured phasor 1214, which can be derived by a vector addition of the phasors 1211 and 1212 of the primary and secondary reflections in the situation depicted in Fig.11 if these could be measured separately. The measured phasor 1214 deviates only a little from the predicted phase 921 (l1). This is due to the fact that the distance the light beam 107 diffusely reflected at vehicle 104 travels is shorter than the predicted light path (see l1 and l2 in Fig.8). Consequently, the time delay along the light path (106 and 107 in Fig.11) is not as great as it would be in the case where the light beam can traverse the light path (l2 in Fig.8). With a smaller time delay, a smaller phase is measured. This is also signified by the phasor 1212 not aligning with the orientation of the predicted phase 922, which is predicted under the assumption that the parking spot is empty. The phasor 1215 corresponds to the phasor 1212, but its starting point is placed at the endpoint of the phasor 1211. The starting point of phasor 1211 and the end point of phasor 1215 correspond to the starting and end point of the measured phasor 1214, signifying the vector addition of phasors 1211 and 1212. The phase Φ of the measured phasor 1214 has to be assigned either to a measured signal of a primary and secondary reflection, if the parking spot is empty as explained with reference to Fig.9b, or to a primary reflection overlaid with a reflection on an object in the parking spot which shortens the light path of the secondary reflection. In the example of Fig.12b, the measured phasor 1214 deviates only by a small amount from the predicted phase 921 of the primary reflection; thus it can be assumed that the light beam cannot traverse the light path (107 in Fig.6 and l2 in Fig.8) freely and that the parking spot is not empty. This deviation can be classified using thresholds or a hypothesis test, e.g., as described with reference to Fig.19.
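The following sketch illustrates, under simplified assumptions, how a measured iToF phase could be compared against phases predicted for the empty and occupied cases; the modulation frequency, path lengths, amplitudes and the decision threshold are arbitrary placeholders, and a full implementation could use the hypothesis test of Fig.19 instead of this nearest-hypothesis rule.

```python
import cmath
import math

# Minimal sketch of the phasor comparison described above. All numerical values are illustrative.

C = 299_792_458.0          # speed of light in m/s
F_MOD = 20e6               # assumed iToF modulation frequency in Hz

def phase_for_round_trip(path_m):
    """Phase delay for light traveling the given one-way path twice."""
    return (2.0 * math.pi * F_MOD * (2.0 * path_m / C)) % (2.0 * math.pi)

def predicted_phasor(path_m, amplitude):
    return amplitude * cmath.exp(1j * phase_for_round_trip(path_m))

# Predicted phasors under the empty-spot assumption (paths l1 and l1+l2 from the raytracing step).
l1, l2 = 3.0, 2.5                               # metres, illustrative
p_primary = predicted_phasor(l1, 1.0)           # corresponds to phasor 921
p_secondary = predicted_phasor(l1 + l2, 0.3)    # corresponds to phasor 922
phase_empty = cmath.phase(p_primary + p_secondary)   # expected combined phase if the spot is empty
phase_occupied = cmath.phase(p_primary)              # combined phase collapses towards the primary phase

def classify(measured_phase, margin=0.05):
    """Assign the measured phase to the closer hypothesis (simple nearest-hypothesis rule)."""
    d_empty = abs(math.remainder(measured_phase - phase_empty, 2.0 * math.pi))
    d_occupied = abs(math.remainder(measured_phase - phase_occupied, 2.0 * math.pi))
    if abs(d_empty - d_occupied) < margin:
        return "undecided"
    return "empty" if d_empty < d_occupied else "occupied"

print(classify(phase_occupied + 0.01))   # small deviation from the primary phase -> "occupied"
```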
In some embodiments, the performing of the process described above with regard to Fig.12b may be limited to those pixels/phase diagrams of the iToF camera that relate to a region of interest identified in the scene according to the principles described with regard to Fig.8 above. That is, only those phase diagrams are evaluated which are of interest for finding NLOS objects related to the task (e.g., determining if a parking spot is empty, or not). In this way, only those phase diagrams may be evaluated which are more likely to contain strong contributions from secondary or subsequent reflections. Fig.13a schematically shows a flowchart of a process of determining the state of a parking spot in the case that a dToF camera is employed. At S1, vehicle models are determined from a 3D model of a scene. This determining of vehicle models may for example be achieved by means of the processes as described with regard to Figs. 7a and 7b. At S2, based on the vehicle models, photon arrival times of multi-path reflections are predicted. This prediction of photon arrival times of multi-path reflections may for example be achieved by means of the processes as described with regard to Fig.8. At S3, positions of peaks in a photon histogram that relate to multi-path reflections are identified. This identifying of the positions of peaks in a photon histogram may for example be achieved by means of the processes as described with regard to Fig.9a. At S4, the state of a parking spot (e.g., if the parking spot is occupied or empty) is determined based on the predicted times of arrival and the positions of the peaks in the photon histogram. This determining of the state of the parking spot may for example be achieved by means of the processes as described with regard to Fig.12a. It should be noted that, as described with regard to Fig.2 above, the obtaining of a point cloud of a scene at S1 may be based on ToF measurements (e.g., based on depth images, photon histograms, or the like). Alternatively, or in addition, the obtaining of a point cloud may be based on information from auxiliary sensors such as RGB images or monochrome images, or the like. Fig.13b schematically shows a flowchart of a process of determining the state of a parking spot in the case that an iToF camera is employed. At S1, vehicle models are determined from a 3D model of a scene. This determining of vehicle models may for example be achieved by means of the processes as described with regard to Figs. 7a and 7b. At S2, based on the vehicle models, photon arrival times of multi-path reflections are predicted. This prediction of photon arrival times of multi-path reflections may for example be achieved by means of the processes as described with regard to Fig.8. At S3, phasors in a phase diagram that relate to multi-path reflections are identified. This identifying of the phases of phasors may for example be achieved by means of the processes as described with regard to Fig.9b. At S4, the state of a parking spot (e.g., if the parking spot is occupied or empty) is determined based on the predicted phases and the phases of the phasors in the phase diagram. This determining of the state of the parking spot may for example be achieved by means of the processes as described with regard to Fig.12b. It should be noted that, as described with regard to Fig.2 above, the obtaining of a point cloud of a scene at S1 may be based on ToF measurements (e.g., based on depth images, photon histograms, or the like). Alternatively, or in addition, the obtaining of a point cloud may be based on information from auxiliary sensors such as RGB images or monochrome images, or the like.
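For illustration only, the following sketch shows how steps S1 to S4 of Fig.13a could be orchestrated in software; the iToF variant of Fig.13b would be analogous with phases instead of arrival times. All functions passed in are hypothetical placeholders for the processing described above and are not part of the disclosure.

```python
# Illustrative orchestration of steps S1-S4 for the dToF variant (Fig.13a). Every callable
# passed as an argument is a hypothetical stand-in for the corresponding processing step.

def detect_parking_spot_state_dtof(scene_model_3d, photon_histograms, roi_pixels,
                                   build_vehicle_models, predict_arrival_times,
                                   find_histogram_peaks, compare_peaks_to_prediction):
    # S1: complete the 3D scene model with vehicle models for the non-visible parts
    vehicle_models = build_vehicle_models(scene_model_3d)

    # S2: raytrace the multi-path reflections and predict photon arrival times per ROI pixel
    predicted_times = predict_arrival_times(scene_model_3d, vehicle_models, roi_pixels)

    # S3: locate the measured peaks in the photon histograms of the ROI pixels only
    measured_peaks = {px: find_histogram_peaks(photon_histograms[px]) for px in roi_pixels}

    # S4: compare measured peak positions with the empty-spot prediction
    return compare_peaks_to_prediction(predicted_times, measured_peaks)
```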
Implementation Examples

Below, some exemplifying implementation aspects of the disclosure are described.

Photon Counting ToF systems

A Time-of-Flight (ToF) camera is a range imaging camera system that determines the distance of objects by measuring the time of flight of a light signal between the camera and the object for each point of the image. Generally, a ToF camera has an illumination unit (e.g., an LED, a VCSEL (Vertical-Cavity Surface-Emitting Laser), or an EEL (Edge Emitting Laser)) that illuminates a scene with modulated light. A pixel array in the ToF camera collects the light reflected from the scene. The indirect ToF (iToF) principle measures the phase shift between the emitted signal and the received signal, which provides information on the travelling time of the light, and hence information on distance. The direct ToF (dToF) method uses a photon counting (PC) pulse technique in which the pulse width of the laser pulse generated by the lidar system can be changed. By reducing the pulse width, reflections are more easily distinguishable and a higher resolution is achieved. Photon counting ToF systems such as dToF systems record a photon histogram as described with regard to Figs.9 and 12 above. Current systems are considered to use single photon avalanche diodes (SPADs) as detectors. The bins of the histogram represent the measured distance, i.e., the distance of the object determined by the ToF principle. Each bin is assigned the number of detections ("counts") in the respective bin (i.e., for the corresponding spatial distance). The structure of the distribution (i.e., the "photon histogram" obtained in this way) depends on the real distance of the detected object, on the object itself, as well as on the angle of the object. Further dependencies of the histogram are scattered rays or echoes, which are not directly in the line of sight and might add noise to the signal of the measurement. PC-ToF uses the number of photons that fall into two or more consecutive bins to get sub-bin resolution during depth calculation. In contrast to dToF, PC-ToF systems use a relatively small number of bins. The duration of one bin and the light pulse duration are the same and could be larger than in dToF. A photon counting ToF system (PC-ToF) illuminates the scene with pulses of laser light, and the photons of reflected light that are captured by the ToF sensor are evaluated in a photon histogram. The recording period during which a histogram is captured is segmented into recording time slots of typically equal length. Photons that arrive during the same recording time slot (across multiple recording periods) are attributed to a respective "bin" of the photon histogram. The photon counts from multiple such recording periods are typically aggregated into a single photon histogram. There can be a variable delay between successive recording periods. Fig.14a schematically shows the operation principle of a SPAD photodiode. A SPAD is an avalanche photodiode (APD) that works in a voltage range V beyond the negative breakdown voltage VBD. In this range (indicated as SPAD in Fig.14a), an electron-hole pair produced by a single photon triggers an avalanche effect, as indicated by arrow 1205. The avalanche effect results in a macroscopic current I flowing through the diode. A SPAD photodiode can be used in a dToF sensor.
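As an illustration of the photon-histogram accumulation described above, the following sketch aggregates photon arrival timestamps from several recording periods into equal-width bins; the bin width, number of bins and example timestamps are arbitrary assumptions and not parameters of the disclosure.

```python
import numpy as np

# Minimal sketch of photon-histogram accumulation. Bin width, number of bins and the
# timestamp source are illustrative placeholders.

N_BINS = 128
BIN_WIDTH_NS = 1.0   # recording time slots of equal length

def accumulate_histogram(photon_timestamps_per_period):
    """Aggregate photon arrival timestamps (ns, relative to each period's laser pulse)
    from multiple recording periods into a single photon histogram."""
    histogram = np.zeros(N_BINS, dtype=np.int64)
    for timestamps in photon_timestamps_per_period:
        bins = (np.asarray(timestamps) // BIN_WIDTH_NS).astype(int)
        bins = bins[(bins >= 0) & (bins < N_BINS)]
        np.add.at(histogram, bins, 1)        # photons in the same slot go to the same bin
    return histogram

# Example: three recording periods with a few photon arrivals each (arbitrary values).
periods = [[12.3, 12.7, 40.2], [12.5, 39.8], [13.1, 40.5, 90.0]]
hist = accumulate_histogram(periods)
print(int(hist[12]), int(hist[40]))   # counts near the first and second reflection peaks
```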
It is expected that SPAD-based image sensors will allow giga-pixel resolution, possibly using NAPD (Nano-multiplication-region Avalanche Photo Diode) or a variation of this concept such as described by Kang L. Wang et al. in "Towards Ultimate Single Photon Counting Imaging CMOS Applications", Workshop for Astronomy and Space Sciences, January 5 and 6, 2011, or by Xinyu Zheng et al. in "Modeling and Fabrication of a Nano-multiplication-region Avalanche Photodiode", January 2007, esto.nasa.gov. Usually, the purpose of such a high number of pixels is to achieve the so-called "digital film", where the spatial counting of photon arrivals within an area of the sensor delivers the final pixel value. If the photons are counted temporally, rather than spatially, then the high pixel count of the image sensor can be used to increase the capturing modalities. This means that multi-/hyper-spectral acquisition, polarization and even ToF capabilities could be integrated into the same single sensor. A SPAD pixel is a binary device: its output is either 0 (no photon arrived) or 1 (photon arrived). As a consequence, a SPAD pixel cannot measure a continuous intensity value. To achieve non-binary pixel values with SPAD sensors, "photon counting" may be applied, which relates to counting the photons hitting a pixel (or areas of the sensor) over time. Fig.15 schematically provides an example of a process of determining a pixel value according to a "photon counting" approach. At 1401, photons create electron-hole pairs on a SPAD pixel and trigger respective avalanche effects that result in macroscopic currents flowing through the diode, corresponding to the photon arrivals according to Poisson statistics. At 1402, the number of avalanches produced by the SPAD pixel within predetermined time intervals Δt is counted to determine respective numbers of arrivals for the time intervals Δt. At 1403, the average number of arrivals is determined based on the number of avalanches produced by the SPAD pixel within the predetermined time intervals Δt. At 1404, a pixel value is obtained from the average number of arrivals obtained at 1403. As an alternative to "photon counting", the previously proposed solutions adopt a "spatial counting" approach: the number of SPAD pixels with the status 0 and 1 is counted over a given area of the sensor and these values 0 and 1 are converted into an intensity for the considered area. This approach reduces the effective resolution, as multiple SPAD pixels are used to emulate a single traditional pixel. That is, SPAD sensors permit very high-resolution binary imaging, but this high resolution is sacrificed by spatial counting in order to emulate the continuous intensity of traditional pixels, thus limiting the benefits offered by SPAD pixels themselves. Fig.14b schematically shows the basic operational principle of an indirect Time-of-Flight imaging system which can be used for depth sensing. The iToF imaging system 1411 includes an iToF camera with an imaging sensor 1412 having a matrix of pixels and a processor (CPU) 1415. A scene 1417 is actively illuminated with amplitude-modulated infrared light LMS at a predetermined wavelength using an illumination device 1419, for instance with some light pulses of at least one predetermined modulation frequency DML generated by a timing generator 1416. The amplitude-modulated infrared light LMS is reflected from objects within the scene 1417.
A lens 1413 collects the reflected light RL and forms an image of the objects within the scene 1417 onto the imaging sensor 1412. In indirect Time-of-Flight (iToF) the CPU 1415 determines for each pixel a phase delay between the modulated signal DML and the reflected light RL. Based on these correlations, a so-called in-phase component value ("I value") and a so-called quadrature component value ("Q value") can be determined (see below for a detailed description) for each pixel. Fig.14b describes the principle of a Time-of-Flight imaging system using the example of an indirect Time-of-Flight imaging system. The embodiments described below are, however, not limited to the indirect Time-of-Flight principle. The depth sensing pixels may also be, for example, iToF pixels (CAPD, gated ToF, etc.) or dToF pixels (SPAD) or PC pixels (SPAD) or dynamic photodiodes (DPD), etc. That is, the ToF pixels may as well be implemented according to the dToF (direct ToF), iToF (indirect ToF), or PC (photon counting) principles.

KinectFusion

Fig.16 shows an example of implementing a 3D reconstruction. The example follows an approach proposed by R.A. Newcombe et al. in "KinectFusion: Real-time dense surface mapping and tracking", 2011 10th IEEE International Symposium on Mixed and Augmented Reality, 2011, pp. 127-136 (also referred to below as the "KinectFusion" approach). KinectFusion describes a technology in which a real-time stream of depth maps is received, and a real-time dense SLAM is performed, producing a consistent 3D scene model incrementally while simultaneously tracking the ToF camera's agile motion using all of the depth data in each frame. A surface measurement 1501 of the ToF data path receives a depth map $R_k(\mathbf{u})$ of the scene (201 in Fig.2) from the ToF camera for each pixel of the current frame $k$ to obtain a point cloud represented as a vertex map $\mathbf{V}_{c,k}$ and a normal map $\mathbf{N}_{c,k}$. The subscript "c" stands for camera coordinates. A pose estimation 1502 of the 3D reconstruction estimates a pose $\mathbf{T}_{g,k}$ of the sensor based on the point cloud $\mathbf{V}_{c,k}$, $\mathbf{N}_{c,k}$ and the model feedback
$\widehat{\mathbf{V}}_{g,k-1}(\mathbf{u}), \widehat{\mathbf{N}}_{g,k-1}(\mathbf{u})$.
The subscript "g" stands for global coordinates. A model reconstruction 1503 of the 3D reconstruction performs a surface reconstruction update based on the estimated pose $\mathbf{T}_{g,k}$ and the depth measurement $R_k(\mathbf{u})$ and provides an updated 3D model $S_k$ of the scene (201 in Fig.2). A surface prediction 1504 receives the updated model $S_k$ and determines a dense 3D model surface prediction of the scene (201 in Fig.2) viewed from the currently estimated pose $\mathbf{T}_{g,k}$, which yields a model estimated vertex map $\widehat{\mathbf{V}}_{c,k}(\mathbf{u})$ and a model estimated normal vector $\widehat{\mathbf{N}}_{c,k}(\mathbf{u})$ stated in the ToF camera coordinate system of the current frame $k$.

Surface Measurement

The surface measurement 1501 of the ToF data path receives a depth map $R_k(\mathbf{u})$ of the scene (201 in Fig.2) from the ToF camera for each pixel of the current frame $k$ to obtain a point cloud represented as a vertex map $\mathbf{V}_{c,k}$ and a normal map $\mathbf{N}_{c,k}$. Each pixel is characterized by its corresponding (2D) image domain coordinates $\mathbf{u} = (x, y)$, wherein the depth measurements $R_k(\mathbf{u})$ for each pixel $\mathbf{u}$ of the current frame $k$ combined yield the depth map $R_k$ for the current frame $k$. This yields a vertex map $\mathbf{V}_{c,k}(\mathbf{u})$ for each pixel $\mathbf{u}$ (i.e., a metric point measurement in the ToF sensor coordinate system of the current frame $k$) which is also referred to as the point cloud $\mathbf{V}_{c,k}$. To the depth measurement $R_k(\mathbf{u})$ a bilateral filter, or any other noise reduction filter known in the state of the art (anisotropic diffusion, non-local means, or the like), may be applied before transformation. Further, the measurement 1501 determines a normal vector $\mathbf{N}_{c,k}(\mathbf{u})$ for each pixel $\mathbf{u}$ in the ToF camera coordinate system. Using a camera calibration matrix $K$ - which comprises the intrinsic camera configuration parameters - each pixel $\mathbf{u}$ in the image domain coordinates with its according depth measurement $R_k(\mathbf{u})$ is transformed into a three-dimensional vertex point
$\mathbf{V}_{c,k}(\mathbf{u})$, with $\mathbf{V}_{c,k}(\mathbf{u}) \in \mathbb{R}^3$, within the ToF camera coordinate system corresponding to the current frame $k$:

$\mathbf{V}_{c,k}(\mathbf{u}) = R_k(\mathbf{u}) \, K^{-1} \, [\mathbf{u}^T, 1]^T$
This transformation is applied to each pixel $\mathbf{u}$ with its according depth measurement $R_k(\mathbf{u})$ of the current frame $k$, which yields a vertex map $\mathbf{V}_{c,k}(\mathbf{u})$ for each pixel $\mathbf{u}$ (i.e., a metric point measurement in the ToF sensor coordinate system of the current frame $k$) which is also referred to as the point cloud $\mathbf{V}_{c,k}$. Further, the measurement 1501 determines a normal vector $\mathbf{N}_{c,k}(\mathbf{u})$ for each pixel $\mathbf{u}$ in the ToF camera coordinate system.

Pose Estimation

The pose estimation 1502 of the 3D reconstruction receives the vertex map $\mathbf{V}_{c,k}(\mathbf{u})$ and the normal vector $\mathbf{N}_{c,k}(\mathbf{u})$ for each pixel $\mathbf{u}$ in the camera coordinate system corresponding to the current frame $k$, and a model estimation for the vertex map $\widehat{\mathbf{V}}_{g,k-1}(\mathbf{u})$ and a model estimation for the normal vector $\widehat{\mathbf{N}}_{g,k-1}(\mathbf{u})$ for each pixel $\mathbf{u}$ from the surface prediction 1504 (see below) based on the latest available model update of the previous frame $k-1$. In another embodiment the pose estimation may be based directly on the model $S_{k-1}$, from which all points and all normals may be obtained by resampling. Further, the pose estimation 1502 obtains an estimated pose
$\mathbf{T}_{g,k-1}$
for the last frame $k-1$ from a storage. In another embodiment more than one past pose may be used. For example, in a SLAM pipeline a separate (or "backend") thread is available that does online bundle adjustment and/or pose graph optimization in order to leverage all past poses. Then the pose estimation estimates a pose $\mathbf{T}_{g,k}$ for the current frame $k$. The pose of the ToF camera describes the position and the orientation of the ToF system, which is described by 6 degrees-of-freedom (6DOF), that is, three DOF for the position and three DOF for the orientation. The three positional DOF are forward/back, up/down, left/right and the three orientational DOF are yaw, pitch, and roll. The current pose of the ToF camera at frame $k$ can be represented by a rigid body transformation, which is defined by a pose matrix $\mathbf{T}_{g,k}$:
$\mathbf{T}_{g,k} = [\mathbf{R}_{g,k} \mid \mathbf{t}_{g,k}] \in SE(3)$
wherein $\mathbf{R}_{g,k} \in \mathbb{R}^{3 \times 3}$ is the matrix representing the rotation of the ToF camera and $\mathbf{t}_{g,k} \in \mathbb{R}^{3 \times 1}$ is the vector representing the translation of the ToF camera from the origin, both denoted in a global coordinate system. $SE(3)$ denotes the so-called special Euclidean group of dimension three. The pose estimation is performed based on the vertex map $\mathbf{V}_{c,k}(\mathbf{u})$ and the normal vector $\mathbf{N}_{c,k}(\mathbf{u})$ for each pixel $\mathbf{u}$ of the current frame $k$ and a model estimation for the vertex map $\widehat{\mathbf{V}}_{g,k-1}(\mathbf{u})$ and a model estimation for the normal vector $\widehat{\mathbf{N}}_{g,k-1}(\mathbf{u})$ for each pixel $\mathbf{u}$ based on the latest available model update of the previous frame $k-1$. In another embodiment the model $S_{k-1}$ is used directly, especially if it is a mesh model, for example by resampling the mesh. Still further, the pose estimation is based on the estimated pose
$\mathbf{T}_{g,k-1}$
for the last frame $k-1$. The pose estimation estimates the pose $\mathbf{T}_{g,k}$ for the current frame $k$ based on an iterative closest point (ICP) algorithm as explained in the above-cited "KinectFusion" paper. With the estimated pose $\mathbf{T}_{g,k}$ for the current frame $k$, the vertex map $\mathbf{V}_{c,k}(\mathbf{u})$ of the current frame $k$ can be transformed into the global coordinate system, which yields the global vertex map $\mathbf{V}_{g,k}(\mathbf{u})$:

$\mathbf{V}_{g,k}(\mathbf{u}) = \mathbf{R}_{g,k} \cdot \mathbf{V}_{c,k}(\mathbf{u}) + \mathbf{t}_{g,k}$ (Eq.4)

When this is performed for all pixels $\mathbf{u}$, it yields a registered point cloud $\mathbf{V}_{g,k}$. Accordingly, the normal vector $\mathbf{N}_{c,k}(\mathbf{u})$ for each pixel $\mathbf{u}$ of the current frame $k$ can be transformed into the global coordinate system:
$\mathbf{N}_{g,k}(\mathbf{u}) = \mathbf{R}_{g,k} \cdot \mathbf{N}_{c,k}(\mathbf{u})$
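A minimal numerical sketch of the surface measurement and registration steps above is given below: a depth map is back-projected into a vertex map with the inverse intrinsics, normals are estimated from neighbouring vertices, and both are transformed into global coordinates in the manner of Eq.4. The intrinsics, depth values and pose used here are arbitrary placeholders, not values from the disclosure.

```python
import numpy as np

def vertex_map(depth, K):
    """V_c,k(u) = R_k(u) * K^{-1} * [u, 1]^T for every pixel u."""
    h, w = depth.shape
    us, vs = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([us, vs, np.ones_like(us)], axis=-1).astype(np.float64)   # homogeneous pixels
    rays = pix @ np.linalg.inv(K).T                                          # K^{-1} [u, 1]^T
    return rays * depth[..., None]

def normal_map(V):
    """Normals from the cross product of neighbouring vertex differences (border left as zero)."""
    N = np.zeros_like(V)
    dx = V[1:-1, 2:] - V[1:-1, :-2]
    dy = V[2:, 1:-1] - V[:-2, 1:-1]
    n = np.cross(dx, dy)
    N[1:-1, 1:-1] = n / np.maximum(np.linalg.norm(n, axis=-1, keepdims=True), 1e-9)
    return N

def to_global(V, N, R_gk, t_gk):
    """Eq.4-style registration: V_g = R V_c + t, and N_g = R N_c for the normals."""
    return V @ R_gk.T + t_gk, N @ R_gk.T

K = np.array([[500.0, 0, 32], [0, 500.0, 24], [0, 0, 1]])    # assumed intrinsics
depth = np.full((48, 64), 2.0)                               # toy 2 m flat depth map
V = vertex_map(depth, K)
N = normal_map(V)
V_g, N_g = to_global(V, N, np.eye(3), np.array([0.0, 0.0, 0.5]))
```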
Model reconstruction (Surface reconstruction update)

The 3D model of the scene (201 in Fig.2) can be reconstructed for example based on volumetric truncated signed distance functions (TSDFs) or other models as described below. The TSDF-based volumetric surface representation represents the 3D scene (201 in Fig.2) within a volume $\mathcal{V}$ as a voxel grid in which the TSDF model stores for each voxel $\mathbf{p}$ the signed distance to the nearest surface. The volume $\mathcal{V}$ is represented by a grid of equally sized voxels which are characterized by their center $\mathbf{p} \in \mathbb{R}^3$. The voxel $\mathbf{p}$ (i.e., its center) is given in global coordinates. The value of the TSDF at a voxel $\mathbf{p}$ corresponds to the signed distance to the closest zero crossing (which is the surface interface of the scene 201 in Fig.2), taking on positive and increasing values moving from the visible surface of the scene (201 in Fig.2) into free space, and negative and decreasing values on the non-visible side of the scene 201, wherein the function is truncated when the distance from the surface surpasses a certain distance. The result of iteratively fusing (averaging) TSDFs of multiple 3D registered point clouds (of multiple frames) of the same scene 201 into a global 3D model yields a global TSDF model $S_k$ which contains a fusion of the frames $1, \ldots, k$ for the scene 201. The global TSDF model $S_k$ is described by two values for each voxel $\mathbf{p}$ within the volume $\mathcal{V}$, i.e. the actual TSDF function $F_k(\mathbf{p})$, which describes the distance to the nearest surface, and an uncertainty weight $W_k(\mathbf{p})$, which assesses the uncertainty of $F_k(\mathbf{p})$, that is $S_k := [F_k(\mathbf{p}), W_k(\mathbf{p})]$. The global TSDF model $S_k$ for the scene 201 is built iteratively: the depth map $R_k$ of the scene 201 with the corresponding pose estimation $\mathbf{T}_{g,k}$ of the current frame $k$ is integrated and fused into the previous global TSDF model $S_{k-1}$ of the scene 201, such that the global TSDF model $S_{k-1} := [F_{k-1}(\mathbf{p}), W_{k-1}(\mathbf{p})]$ is updated - and thereby improved - by the registered point cloud $\mathbf{V}_{g,k}$ of the current frame $k$. Therefore, the model reconstruction receives the depth map $R_k$ of the current frame $k$ and the current estimated pose $\mathbf{T}_{g,k}$ (which yields the registered point cloud $\mathbf{V}_{g,k}$ of the current frame $k$) and outputs an updated global TSDF model $S_k = [F_k(\mathbf{p}), W_k(\mathbf{p})]$. That means the updated global TSDF model $S_k = [F_k(\mathbf{p}), W_k(\mathbf{p})]$ is based on the previous global TSDF model $S_{k-1} = [F_{k-1}(\mathbf{p}), W_{k-1}(\mathbf{p})]$ and on the current registered point cloud $\mathbf{V}_{g,k}$. According to the above-cited "KinectFusion" paper this is determined as:
$F_k(\mathbf{p}) = \dfrac{W_{k-1}(\mathbf{p}) F_{k-1}(\mathbf{p}) + W_{R_k}(\mathbf{p}) F_{R_k}(\mathbf{p})}{W_{k-1}(\mathbf{p}) + W_{R_k}(\mathbf{p})}$

$W_k(\mathbf{p}) = W_{k-1}(\mathbf{p}) + W_{R_k}(\mathbf{p})$

$W_{R_k}(\mathbf{p}) \propto \cos(\theta) / R_k(\mathbf{u})$

wherein $F_{R_k}(\mathbf{p})$ and $W_{R_k}(\mathbf{p})$ are the TSDF value and the weight computed from the registered depth measurement $R_k$ of the current frame $k$, wherein the function $\mathbf{q} = \pi(\mathbf{p})$ performs perspective projection of $\mathbf{p} \in \mathbb{R}^3$ including de-homogenization to obtain $\mathbf{q} \in \mathbb{R}^2$, and where $\theta$ is the angle between the associated pixel ray direction and the surface normal measurement $\mathbf{N}_{c,k}(\mathbf{u})$.
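A minimal sketch of this running weighted-average fusion is shown below, assuming a dense voxel grid held in memory; the grid size, the weight cap and the per-frame inputs standing in for $F_{R_k}$ and $W_{R_k}$ are illustrative placeholders rather than parameters of the KinectFusion method.

```python
import numpy as np

class TSDFVolume:
    def __init__(self, shape=(64, 64, 64), max_weight=64.0):
        self.F = np.zeros(shape, dtype=np.float32)   # signed distance per voxel, F_k(p)
        self.W = np.zeros(shape, dtype=np.float32)   # uncertainty weight per voxel, W_k(p)
        self.max_weight = max_weight

    def integrate(self, F_Rk, W_Rk):
        """Fuse the TSDF of the current registered frame into the global model:
        F_k = (W_{k-1} F_{k-1} + W_Rk F_Rk) / (W_{k-1} + W_Rk),  W_k = W_{k-1} + W_Rk."""
        denom = self.W + W_Rk
        valid = denom > 0
        self.F[valid] = (self.W[valid] * self.F[valid] + W_Rk[valid] * F_Rk[valid]) / denom[valid]
        self.W = np.minimum(denom, self.max_weight)   # optional cap keeps the model adaptable

vol = TSDFVolume()
frame_F = np.random.uniform(-1.0, 1.0, vol.F.shape).astype(np.float32)   # stand-in for F_Rk
frame_W = np.ones_like(frame_F)                                          # stand-in for W_Rk
vol.integrate(frame_F, frame_W)
```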
TSDFs are for example also described in more detail in the KinectFusion paper cited above. Still further, the model reconstruction 1503 may receive a model feedback (for example a model feedback matrix, see below) which indicates for each pixel if it is reliable (overlap pixel in case that overlap is sufficient and in case that overlap is not sufficient), unreliable (non-overlap pixel in case that overlap is sufficient) or new (non-overlap pixel in case that overlap is not sufficient). The depth data of a reliable or new pixel may be used to improve the 3D model as described above (that means the model is created or updated with the corresponding depth measurement); the depth data of an unreliable pixel may be discarded or stored to a dedicated buffer that can be used or not.

Vehicle control system

When implementing a ToF camera system in a vehicle it is beneficial to include the ToF camera system in the standard vehicle control system. This enables a sharing of computer resources and hardware components between multiple systems. The technology according to an embodiment of the present disclosure is applicable to various products. The techniques of the embodiments may for example be used for driving assistance systems. For example, the technology according to an embodiment of the present disclosure may be implemented as a device included in a mobile body that is any of various kinds of automobiles, electric vehicles, hybrid electric vehicles, motorcycles, bicycles, personal mobility vehicles, airplanes, drones, ships, robots, construction machinery, agricultural machinery (tractors), and the like. FIG.17 is a block diagram depicting an example of schematic configuration of a vehicle control system 7000 as an example of a mobile body control system to which the technology according to embodiments of the present disclosure can be applied. The vehicle control system 7000 includes a plurality of electronic control units connected to each other via a communication network 7010. In the example depicted in FIG.17, the vehicle control system 7000 includes a driving system control unit 7100, a body system control unit 7200, a battery control unit 7300, an outside-vehicle information detecting unit 7400, an in-vehicle information detecting unit 7500, and an integrated control unit 7600. The communication network 7010 connecting the plurality of control units to each other may, for example, be a vehicle-mounted communication network compliant with an arbitrary standard such as controller area network (CAN), local interconnect network (LIN), local area network (LAN), FlexRay (registered trademark), or the like. Each of the control units includes: a microcomputer that performs arithmetic processing according to various kinds of programs; a storage section that stores the programs executed by the microcomputer, parameters used for various kinds of operations, or the like; and a driving circuit that drives various kinds of control target devices.
Each of the control units further includes: a network interface (I/F) for performing communication with other control units via the communication network 7010; and a communication I/F for performing communication with a device, a sensor, or the like within and without the vehicle by wire communication or radio communication. A functional configuration of the integrated control unit 7600 illustrated in FIG. 17 includes a microcomputer 7610, a general-purpose communication I/F 7620, a dedicated communication I/F 7630, a positioning section 7640, a beacon receiving section 7650, an in- vehicle device I/F 7660, a sound/image output section 7670, a vehicle-mounted network I/F 7680, and a storage section 7690. The other control units similarly include a microcomputer, a communication I/F, a storage section, and the like. The driving system control unit 7100 controls the operation of devices related to the driving system of the vehicle in accordance with various kinds of programs. For example, the driving system control unit 7100 functions as a control device for a driving force generating device for generating the driving force of the vehicle, such as an internal combustion engine, a driving motor, or the like, a driving force transmitting mechanism for transmitting the driving force to wheels, a steering mechanism for adjusting the steering angle of the vehicle, a braking device for generating the braking force of the vehicle, and the like. The driving system control unit 7100 may have a function as a control device of an antilock brake system (ABS), electronic stability control (ESC), or the like. The driving system control unit 7100 is connected with a vehicle state detecting section 7110. The vehicle state detecting section 7110, for example, includes at least one of a gyro sensor that detects the angular velocity of axial rotational movement of a vehicle body, an acceleration sensor that detects the acceleration of the vehicle, and sensors for detecting an amount of operation of an accelerator pedal, an amount of operation of a brake pedal, the steering angle of a steering wheel, an engine speed or the rotational speed of wheels, and the like. The driving system control unit 7100 performs arithmetic processing using a signal input from the vehicle state detecting section 7110, and controls the internal combustion engine, the driving motor, an electric power steering device, the brake device, and the like. Both the driving system control unit 7100 and the vehicle state detecting section 7110 can control the operation of devices related to the driving system of the vehicle based on information measured by the ToF camera system. The body system control unit 7200 controls the operation of various kinds of devices provided to the vehicle body in accordance with various kinds of programs. For example, the body system control unit 7200 functions as a control device for a keyless entry system, a smart key system, a power window device, or various kinds of lamps such as a headlamp, a backup lamp, a brake lamp, a turn signal, a fog lamp, or the like. In this case, radio waves transmitted from a mobile device as an alternative to a key or signals of various kinds of switches can be input to the body system control unit 7200. The body system control unit 7200 receives these input radio waves or signals, and controls a door lock device, the power window device, the lamps, or the like of the vehicle. 
The battery control unit 7300 controls a secondary battery 7310, which is a power supply source for the driving motor, in accordance with various kinds of programs. For example, the battery control unit 7300 is supplied with information about a battery temperature, a battery output voltage, an amount of charge remaining in the battery, or the like from a battery device including the secondary battery 7310. The battery control unit 7300 performs arithmetic processing using these signals and performs control for regulating the temperature of the secondary battery 7310 or controls a cooling device provided to the battery device or the like. The outside-vehicle information detecting unit 7400 detects information about the outside of the vehicle including the vehicle control system 7000. For example, the outside-vehicle information detecting unit 7400 is connected with at least one of an imaging section 7410 and an outside- vehicle information detecting section 7420. The imaging section 7410 includes at least one of a time-of-flight (ToF) camera, a stereo camera, a monocular camera, an infrared camera, and other cameras. The outside-vehicle information detecting section 7420, for example, includes at least one of an environmental sensor for detecting current atmospheric conditions or weather conditions and a peripheral information detecting sensor for detecting another vehicle, an obstacle, a pedestrian, or the like on the periphery of the vehicle including the vehicle control system 7000. The outside-vehicle information detecting unit 7400 including the imaging section 7410 may for example comprise a ToF camera system and Datapath as described with regard to Fig.2 above. The environmental sensor, for example, may be at least one of a rain drop sensor detecting rain, a fog sensor detecting a fog, a sunshine sensor detecting a degree of sunshine, and a snow sensor detecting a snowfall. The peripheral information detecting sensor may be at least one of an ultrasonic sensor, a radar device, and a LIDAR device (Light detection and Ranging device, or Laser imaging detection and ranging device). Each of the imaging section 7410 and the outside- vehicle information detecting section 7420 may be provided as an independent sensor or device or may be provided as a device in which a plurality of sensors or devices are integrated. FIG.18 depicts an example of installation positions of the imaging section 7410 and the outside- vehicle information detecting section 7420. Imaging sections 7910, 7912, 7914, 7916, and 7918 are, for example, disposed at least one of positions on a front nose, sideview mirrors, a rear bumper, and a back door of the vehicle 7900 and a position on an upper portion of a windshield within the interior of the vehicle. The imaging section 7910 provided to the front nose and the imaging section 7918 provided to the upper portion of the windshield within the interior of the vehicle obtain mainly an image of the front of the vehicle 7900. The imaging sections 7912 and 7914 provided to the sideview mirrors obtain mainly an image of the sides of the vehicle 7900. The imaging section 7916 provided to the rear bumper, or the back door obtains mainly an image of the rear of the vehicle 7900. The imaging section 7918 provided to the upper portion of the windshield within the interior of the vehicle is used mainly to detect a preceding vehicle, a pedestrian, an obstacle, a signal, a traffic sign, a lane, or the like. 
Incidentally, FIG.18 depicts an example of photographing ranges of the respective imaging sections 7910, 7912, 7914, and 7916. An imaging range a represents the imaging range of the imaging section 7910 provided to the front nose. Imaging ranges b and c respectively represent the imaging ranges of the imaging sections 7914 and 7912 provided to the sideview mirrors. An imaging range d represents the imaging range of the imaging section 7916 provided to the rear bumper or the back door. A bird’s-eye image of the vehicle 7900 as viewed from above can be obtained by superimposing image data imaged by the imaging sections 7910, 7912, 7914, and 7916, for example. Outside-vehicle information detecting sections 7920, 7922, 7924, 7926, 7928, and 7930 provided to the front, rear, sides, and corners of the vehicle 7900 and the upper portion of the windshield within the interior of the vehicle may be, for example, an ultrasonic sensor or a radar device. The outside-vehicle information detecting sections 7920, 7926, and 7930 provided to the front nose of the vehicle 7900, the rear bumper, the back door of the vehicle 7900, and the upper portion of the windshield within the interior of the vehicle may be a LIDAR device or a ToF camera system, for example. These outside-vehicle information detecting sections 7920 to 7930 are used mainly to detect a preceding vehicle, a pedestrian, an obstacle, or the like. Returning to FIG.17, the description will be continued. The outside-vehicle information detecting unit 7400 makes the imaging section 7410 image an image of the outside of the vehicle and receives imaged image data. In addition, the outside-vehicle information detecting unit 7400 receives detection information from the outside-vehicle information detecting section 7420 connected to the outside-vehicle information detecting unit 7400. In a case where the outside- vehicle information detecting section 7420 is an ultrasonic sensor, a radar device, a LIDAR device or a ToF camera system, the outside-vehicle information detecting unit 7400 transmits an ultrasonic wave, a radar wave, or the like, and receives information of a received reflected wave. On the basis of the received information, the outside-vehicle information detecting unit 7400 may perform processing of detecting an object such as a human, a vehicle, an obstacle, a sign, a character on a road surface, or the like, or processing of detecting a distance thereto. The outside- vehicle information detecting unit 7400 may perform environment recognition processing of recognizing a rainfall, a fog, road surface conditions, or the like on the basis of the received information. The outside-vehicle information detecting unit 7400 may calculate a distance to an object outside the vehicle on the basis of the received information. In addition, on the basis of the received image data, the outside-vehicle information detecting unit 7400 may perform image recognition processing of recognizing a human, a vehicle, an obstacle, a sign, a character on a road surface, or the like, or processing of detecting a distance thereto. The outside-vehicle information detecting unit 7400 may subject the received image data to processing such as distortion correction, alignment, or the like, and combine the image data imaged by a plurality of different imaging sections 7410 to generate a bird’s-eye image or a panoramic image. 
The outside-vehicle information detecting unit 7400 may perform viewpoint conversion processing using the image data imaged by the imaging section 7410 including the different imaging parts. The in-vehicle information detecting unit 7500 detects information about the inside of the vehicle. The in-vehicle information detecting unit 7500 is, for example, connected with a driver state detecting section 7510 that detects the state of a driver. The driver state detecting section 7510 may include a camera that images the driver, a biosensor that detects biological information of the driver, a microphone that collects sound within the interior of the vehicle, or the like. The biosensor is, for example, disposed in a seat surface, the steering wheel, or the like, and detects biological information of an occupant sitting in a seat or the driver holding the steering wheel. On the basis of detection information input from the driver state detecting section 7510, the in- vehicle information detecting unit 7500 may calculate a degree of fatigue of the driver or a degree of concentration of the driver or may determine whether the driver is dozing. The in- vehicle information detecting unit 7500 may subject an audio signal obtained by the collection of the sound to processing such as noise canceling processing or the like. The integrated control unit 7600 controls general operation within the vehicle control system 7000 in accordance with various kinds of programs. The integrated control unit 7600 is connected with an input section 7800. The input section 7800 is implemented by a device capable of input operation by an occupant, such, for example, as a touch panel, a button, a microphone, a switch, a lever, or the like. The integrated control unit 7600 may be supplied with data obtained by voice recognition of voice input through the microphone. The input section 7800 may, for example, be a remote-control device using infrared rays or other radio waves, or an external connecting device such as a mobile telephone, a personal digital assistant (PDA), or the like that supports operation of the vehicle control system 7000. The input section 7800 may be, for example, a camera. In that case, an occupant can input information by gesture. Alternatively, data may be input which is obtained by detecting the movement of a wearable device that an occupant wears. Further, the input section 7800 may, for example, include an input control circuit or the like that generates an input signal on the basis of information input by an occupant or the like using the above-described input section 7800, and which outputs the generated input signal to the integrated control unit 7600. An occupant or the like inputs various kinds of data or gives an instruction for processing operation to the vehicle control system 7000 by operating the input section 7800. The storage section 7690 may include a read only memory (ROM) that stores various kinds of programs executed by the microcomputer and a random-access memory (RAM) that stores various kinds of parameters, operation results, sensor values, or the like. In addition, the storage section 7690 may be implemented by a magnetic storage device such as a hard disc drive (HDD) or the like, a semiconductor storage device, an optical storage device, a magneto-optical storage device, or the like. The general-purpose communication I/F 7620 is a communication I/F used widely, which communication I/F mediates communication with various apparatuses present in an external environment 7750. 
The general-purpose communication I/F 7620 may implement a cellular communication protocol such as global system for mobile communications (GSM (registered trademark)), worldwide interoperability for microwave access (WiMAX (registered trademark)), long term evolution (LTE (registered trademark)), LTE-advanced (LTE-A), or the like, or another wireless communication protocol such as wireless LAN (referred to also as wireless fidelity (Wi-Fi (registered trademark)), Bluetooth (registered trademark), or the like. The general-purpose communication I/F 7620 may, for example, connect to an apparatus (for example, an application server or a control server) present on an external network (for example, the Internet, a cloud network, or a company-specific network) via a base station or an access point. In addition, the general-purpose communication I/F 7620 may connect to a terminal present in the vicinity of the vehicle (which terminal is, for example, a terminal of the driver, a pedestrian, or a store, or a machine type communication (MTC) terminal) using a peer to peer (P2P) technology, for example. The dedicated communication I/F 7630 is a communication I/F that supports a communication protocol developed for use in vehicles. The dedicated communication I/F 7630 may implement a standard protocol such, for example, as wireless access in vehicle environment (WAVE), which is a combination of institute of electrical and electronic engineers (IEEE) 802.11p as a lower layer and IEEE 1609 as a higher layer, dedicated short range communications (DSRC), or a cellular communication protocol. The dedicated communication I/F 7630 typically carries out V2X communication as a concept including one or more of communication between a vehicle and a vehicle (Vehicle to Vehicle), communication between a road and a vehicle (Vehicle to Infrastructure), communication between a vehicle and a home (Vehicle to Home), and communication between a pedestrian and a vehicle (Vehicle to Pedestrian). The positioning section 7640, for example, performs positioning by receiving a global navigation satellite system (GNSS) signal from a GNSS satellite (for example, a GPS signal from a global positioning system (GPS) satellite), and generates positional information including the latitude, longitude, and altitude of the vehicle. Incidentally, the positioning section 7640 may identify a current position by exchanging signals with a wireless access point or may obtain the positional information from a terminal such as a mobile telephone, a personal handy phone system (PHS), or a smart phone that has a positioning function. The beacon receiving section 7650, for example, receives a radio wave or an electromagnetic wave transmitted from a radio station installed on a road or the like, and thereby obtains information about the current position, congestion, a closed road, a necessary time, or the like. Incidentally, the function of the beacon receiving section 7650 may be included in the dedicated communication I/F 7630 described above. The in-vehicle device I/F 7660 is a communication interface that mediates connection between the microcomputer 7610 and various in-vehicle devices 7760 present within the vehicle. The in- vehicle device I/F 7660 may establish wireless connection using a wireless communication protocol such as wireless LAN, Bluetooth (registered trademark), near field communication (NFC), or wireless universal serial bus (WUSB). 
In addition, the in-vehicle device I/F 7660 may establish wired connection by universal serial bus (USB), high-definition multimedia interface (HDMI (registered trademark)), mobile high-definition link (MHL), or the like via a connection terminal (and a cable if necessary) not depicted in the figures. The in-vehicle devices 7760 may, for example, include at least one of a mobile device and a wearable device possessed by an occupant and an information device carried into or attached to the vehicle. The in-vehicle devices 7760 may also include a navigation device that searches for a path to an arbitrary destination. The in-vehicle device I/F 7660 exchanges control signals or data signals with these in-vehicle devices 7760. The vehicle-mounted network I/F 7680 is an interface that mediates communication between the microcomputer 7610 and the communication network 7010. The vehicle-mounted network I/F 7680 transmits and receives signals or the like in conformity with a predetermined protocol supported by the communication network 7010. The microcomputer 7610 of the integrated control unit 7600 controls the vehicle control system 7000 in accordance with various kinds of programs on the basis of information obtained via at least one of the general-purpose communication I/F 7620, the dedicated communication I/F 7630, the positioning section 7640, the beacon receiving section 7650, the in-vehicle device I/F 7660, and the vehicle-mounted network I/F 7680. For example, the microcomputer 7610 may calculate a control target value for the driving force generating device, the steering mechanism, or the braking device on the basis of the obtained information about the inside and outside of the vehicle and output a control command to the driving system control unit 7100. For example, the microcomputer 7610 may perform cooperative control intended to implement functions of an advanced driver assistance system (ADAS) which functions include collision avoidance or shock mitigation for the vehicle, following driving based on a following distance, vehicle speed maintaining driving, a warning of collision of the vehicle, a warning of deviation of the vehicle from a lane, or the like. In addition, the microcomputer 7610 may perform cooperative control intended for automatic driving, which makes the vehicle to travel autonomously without depending on the operation of the driver, or the like, by controlling the driving force generating device, the steering mechanism, the braking device, or the like on the basis of the obtained information about the surroundings of the vehicle. The microcomputer 7610 may generate three-dimensional distance information between the vehicle and an object such as a surrounding structure, a person, or the like, and generate local map information including information about the surroundings of the current position of the vehicle, on the basis of information obtained via at least one of the general-purpose communication I/F 7620, the dedicated communication I/F 7630, the positioning section 7640, the beacon receiving section 7650, the in-vehicle device I/F 7660, and the vehicle-mounted network I/F 7680. Three-dimensional distance information may also be generated based on measurements of a ToF camera system. In addition, the microcomputer 7610 may predict danger such as collision of the vehicle, approaching of a pedestrian or the like, an entry to a closed road, or the like on the basis of the obtained information, and generate a warning signal. 
The warning signal may, for example, be a signal for producing a warning sound or lighting a warning lamp. The microcomputer 7610 may for example carry out the processes described in the embodiments above, e.g., the steps described in Fig.13. This can be done on the basis of the measurement data of the ToF camera system that is an example of the outside-vehicle information detecting unit 7400 including the imaging section 7410. The sound/image output section 7670 transmits an output signal of at least one of a sound and an image to an output device capable of visually or auditorily notifying information to an occupant of the vehicle or the outside of the vehicle. In the example of FIG.17, an audio speaker 7710, a display section 7720, and an instrument panel 7730 are illustrated as the output device. The display section 7720 may, for example, include at least one of an on-board display and a head-up display. The display section 7720 may have an augmented reality (AR) display function. The output device may be other than these devices, and may be another device such as headphones, a wearable device such as an eyeglass type display worn by an occupant or the like, a projector, a lamp, or the like. In a case where the output device is a display device, the display device visually displays results obtained by various kinds of processing performed by the microcomputer 7610 or information received from another control unit in various forms such as text, an image, a table, a graph, or the like. In addition, in a case where the output device is an audio output device, the audio output device converts an audio signal constituted of reproduced audio data or sound data or the like into an analog signal, and auditorily outputs the analog signal. Incidentally, at least two control units connected to each other via the communication network 7010 in the example depicted in FIG.17 may be integrated into one control unit. Alternatively, each individual control unit may include a plurality of control units. Further, the vehicle control system 7000 may include another control unit not depicted in the figures. In addition, part or the whole of the functions performed by one of the control units in the above description may be assigned to another control unit. That is, predetermined arithmetic processing may be performed by any of the control units as long as information is transmitted and received via the communication network 7010. Similarly, a sensor or a device connected to one of the control units may be connected to another control unit, and a plurality of control units may mutually transmit and receive detection information via the communication network 7010. Incidentally, a computer program for realizing the functions of a parking spot detection can be implemented in one of the control units or the like. In addition, a computer readable recording medium storing such a computer program can also be provided. The recording medium is, for example, a magnetic disk, an optical disk, a magneto-optical disk, a flash memory, or the like. In addition, the above-described computer program may be distributed via a network, for example, without the recording medium being used. In the vehicle control system 7000 described above, the functions of a parking spot detection according to the embodiments described above can be applied to the integrated control unit 7600 in the application example depicted in FIG.17. 
Fig.19 schematically shows a diagram for binary Bayesian hypothesis testing to classify a measurement result in the case of an iToF measurement. Depicted is a diagram of probability over phase with the probability density functions 1901 and 1902, phase 1903 as the central value of the probability density function 1901, phase 1905 as the central value of the probability density function 1902, and the phase 1904 of a measured phasor in an iToF phase diagram (e.g., Fig.9b or 12b). The phase increases from left to right. Thus, the probability density function 1901 is a probability density function for a hypothesis H0 that the parking spot is not empty (is occupied), which means that the measured iToF signal does not comprise a secondary reflection as described with reference to Fig.12b, or that the secondary reflection has a minimal impact on the actual measurement of the phase, meaning that the phase of the light path (l1 + l2 in Fig.8) is very close to that of the light path (l1 in Fig.8). The probability density function 1901 represents the hypothesis that the measured light has traveled the light path (l1 in Fig.8) of the primary reflection. With an earlier arrival time, a smaller phase is expected in the case of a non-empty parking spot. The probability density function 1902 is a probability density function for a hypothesis H1 that the parking spot is empty, which means that the measured iToF signal does comprise a phase due to the vector addition of the phasors of the primary and secondary reflection as described with reference to Fig.12b. The probability density function 1902 represents the hypothesis that the measured light has traveled the light paths (l1 and l2 in Fig.8) of the primary and secondary reflection. The probability density functions 1901 and 1902 can be predetermined or continually adapted by the system. They are for example derived from measurement uncertainties of the system which propagate into the creation of the model and thus create uncertainties of the position of the reflecting surfaces, and thus create probability density functions of the lengths of the traversed light paths and of the measured time delays or phases. One way to select which hypothesis (probability density function) the measured phase 1904 most likely belongs to is the maximum a posteriori test. Here, the posterior probabilities of the phase 1904 under either hypothesis are compared and the hypothesis with the greater posterior probability is selected. If the hypothesis H0 is the hypothesis of the probability density function 1901, the hypothesis H1 is the hypothesis of the probability density function 1902, and Φm is the phase 1904, then the posterior probability of H0 is:
$P(H_0 \mid \Phi = \Phi_m) = \dfrac{P(\Phi_m \mid H_0) \, P(H_0)}{P(\Phi_m)}$

and the posterior probability of H1 is:

$P(H_1 \mid \Phi = \Phi_m) = \dfrac{P(\Phi_m \mid H_1) \, P(H_1)}{P(\Phi_m)}$
wherein $P(\Phi_m \mid H_0)$ is the probability of $\Phi_m$ being measured if the hypothesis H0 is true, i.e. the value of the probability density function 1901 at the phase 1904, $P(H_0)$ is the prior probability of H0, $P(\Phi_m)$ is the probability of $\Phi_m$ being measured, $P(\Phi_m \mid H_1)$ is the probability of $\Phi_m$ being measured if the hypothesis H1 is true, i.e. the value of the probability density function 1902 at the phase 1904, and $P(H_1)$ is the prior probability of H1. As a result, H0 is chosen if:

$P(H_0 \mid \Phi = \Phi_m) \ge P(H_1 \mid \Phi = \Phi_m)$ (Eq: 14)

or, to simplify, H0 is chosen if:

$P(\Phi_m \mid H_0) \, P(H_0) \ge P(\Phi_m \mid H_1) \, P(H_1)$ (Eq: 15)

If H0 is not chosen, H1 is chosen. This classification can be used instead of the thresholds which have been discussed with reference to Fig.12a. The phase 1904 is the phase of a phasor (914 in Fig.9b and 1214 in Fig.12b) measured by an iToF system (e.g., Fig.9b or 12b). Thus, commonly known binary Bayesian hypothesis testing is used to classify which probability density function the phasor of phase 1904 most likely belongs to, and thus to classify whether the parking spot is empty or not. Also, a minimum cost hypothesis test can be employed. This classification can be used instead of the thresholds which have been discussed with reference to Fig.12b.

***

It should be recognized that the embodiments describe methods with an exemplary ordering of method steps. The specific ordering of method steps is, however, given for illustrative purposes only and should not be construed as binding. It should be noted that the present disclosure is not limited to any specific division of functions in specific units. The method can also be implemented as a computer program causing a computer and/or a processor, such as the microcomputer 7610 discussed above, to perform the method, when being carried out on the computer and/or processor. In some embodiments, also a non-transitory computer-readable recording medium is provided that stores therein a computer program product, which, when executed by a processor, such as the processor described above, causes the method described to be performed. All units and entities described in this specification and claimed in the appended claims can, if not stated otherwise, be implemented as integrated circuit logic, for example on a chip, and functionality provided by such units and entities can, if not stated otherwise, be implemented by software. A processing system as described above can for example be implemented by a respective programmed processor, field programmable gate array (FPGA) and the like. In so far as the embodiments of the disclosure described above are implemented, at least in part, using software-controlled data processing apparatus, it will be appreciated that a computer program providing such software control and a transmission, storage or other medium by which such a computer program is provided are envisaged as aspects of the present disclosure. Note that the present technology can also be configured as described below. (1) An electronic device (200) comprising circuitry configured to detect a non-line of sight object based on a comparison of ToF information obtained from reflected light received by a ToF imaging sensor (110) with model-based information obtained from a model-based prediction of light reflection. (2) The electronic device (200) of (1), wherein the circuitry is configured to detect at least one of a presence, a position, and a velocity of the non-line of sight object (110).
(3) The electronic device (200) of (1) or (2), wherein the circuitry is configured to determine the state of a spot based on the detection of the non-line of sight object.

(4) The electronic device (200) of (3), wherein the state of the spot comprises information on whether or not the spot is empty.

(5) The electronic device (200) of (3) or (4), wherein the spot is a parking spot.

(6) The electronic device (200) of any one of (1) to (5), wherein the ToF information comprises information on photon arrival times related to multi-path reflections.

(7) The electronic device (200) of any one of (1) to (6), wherein the circuitry is configured to obtain the model-based information by a raytracing process.

(8) The electronic device (200) of any one of (1) to (7), wherein the model-based information comprises predicted photon arrival times.

(9) The electronic device (200) of (8), wherein the predicted photon arrival times relate to at least one of secondary, tertiary, and higher order light reflections.

(10) The electronic device (200) of any one of (1) to (9), wherein the model-based information comprises predictions on multi-path light reflections.

(11) The electronic device (200) of any one of (1) to (10), wherein the circuitry is configured to determine if the ToF information obtained from reflected light deviates from the model-based information.

(12) The electronic device (200) of any one of (1) to (11), wherein the circuitry is configured to determine if the position of a reflection indicated by the ToF information deviates from a model-based prediction.

(13) The electronic device (200) of any one of (1) to (12), wherein the circuitry is configured to perform the model-based prediction of light reflection based on a reconstructed 3D model of a captured scene (301).

(14) The electronic device (200) of any one of (1) to (13), wherein the circuitry is configured to perform the model-based prediction of light reflection based on one or more vehicle models (VM).

(15) The electronic device (200) of (14), wherein a vehicle model (VM) models parts of a vehicle (102, 103) that are not visible to the ToF imaging sensor (110) in imaging information obtained from primary reflections.

(16) The electronic device (200) of (14) or (15), wherein the circuitry is configured to determine a vehicle model (VM) based on a 3D model of a scene (301).

(17) The electronic device (200) of (16), wherein a vehicle model (VM) models parts of a vehicle (102, 103) that are not present in the 3D model of the scene (301).

(18) The electronic device (200) of any one of (1) to (17), wherein the ToF imaging sensor (110) is a dToF imaging sensor.

(19) The electronic device (200) of (18), wherein the circuitry is configured to obtain the ToF information from a photon histogram captured by the ToF imaging sensor (110).

(20) The electronic device (200) of any one of (1) to (17), wherein the ToF imaging sensor (110) is an iToF imaging sensor.

(21) The electronic device (200) of (20), wherein the circuitry is configured to obtain the ToF information from a phase captured by the ToF imaging sensor (110).

(22) The electronic device (200) of (20), wherein the non-line of sight object is detected based on the phase captured by the ToF imaging sensor (110) and a predicted phase.

(23) The electronic device (200) of (22), wherein the predicted phase is predicted based on model-based information obtained from a model-based prediction of multi-path light reflections.
(24) The electronic device (200) of any one of (1) to (23), wherein the circuitry is configured to emit light, and to obtain the ToF information from at least one of secondary, tertiary, and higher order reflections of this emitted light.

(25) The electronic device (200) of any one of (1) to (24), wherein the circuitry is configured to emit light, and wherein the circuitry is configured to model the light path of this emitted light in order to obtain the model-based prediction of light reflection.

(26) A method comprising detecting a non-line of sight object based on a comparison of ToF information obtained from reflected light received by a ToF imaging sensor (110) with model-based information obtained from a model-based prediction of light reflection.

(27) A computer program comprising instructions which are configured to, when executed on a processor, perform the method of (26).
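As a worked illustration of the Bayesian classification of the measured phase discussed above, the following Python sketch applies the decision rule of Eq: 15. It is a minimal sketch only: the Gaussian shape and parameters of the densities standing in for the probability density functions 1901 and 1902, the equal priors, and the mapping of H0/H1 to an empty or occupied parking spot are illustrative assumptions and are not taken from the disclosure.

```python
from scipy.stats import norm

def classify_spot(phase_m, pdf_h0, pdf_h1, prior_h0=0.5, prior_h1=0.5):
    """MAP decision between hypotheses H0 and H1 for a measured phase (Eq: 15).

    pdf_h0 / pdf_h1 evaluate the phase probability density under H0 and H1
    (the curves 1901 and 1902); the prior probabilities are assumed known.
    """
    lik_h0 = pdf_h0(phase_m)  # P(phi_m | H0)
    lik_h1 = pdf_h1(phase_m)  # P(phi_m | H1)
    # Choose H0 if P(phi_m | H0) P(H0) >= P(phi_m | H1) P(H1), otherwise H1.
    return "H0" if lik_h0 * prior_h0 >= lik_h1 * prior_h1 else "H1"

# Hypothetical Gaussian phase densities, e.g. H0 = spot empty, H1 = spot occupied.
pdf_empty = lambda p: norm.pdf(p, loc=1.0, scale=0.15)
pdf_occupied = lambda p: norm.pdf(p, loc=1.8, scale=0.20)

print(classify_spot(1.25, pdf_empty, pdf_occupied))  # -> "H0"
```

A minimum cost hypothesis test would differ only in weighting the two sides of the comparison with the respective decision costs.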

Claims

CLAIMS

1. An electronic device comprising circuitry configured to detect a non-line of sight object based on a comparison of ToF information obtained from reflected light received by a ToF imaging sensor with model-based information obtained from a model-based prediction of light reflection.

2. The electronic device of claim 1, wherein the circuitry is configured to obtain the ToF information from a photon histogram captured by the ToF imaging sensor.

3. The electronic device of claim 1, wherein the circuitry is configured to detect at least one of a presence, a position, and a velocity of the non-line of sight object.

4. The electronic device of claim 1, wherein the circuitry is configured to determine the state of a spot based on the detection of the non-line of sight object.

5. The electronic device of claim 4, wherein the state of the spot comprises information on whether or not the spot is empty.

6. The electronic device of claim 4, wherein the spot is a parking spot.

7. The electronic device of claim 1, wherein the ToF information comprises information on photon arrival times related to multi-path reflections.

8. The electronic device of claim 1, wherein the circuitry is configured to obtain the model-based information by a raytracing process.

9. The electronic device of claim 1, wherein the model-based information comprises predicted photon arrival times.

10. The electronic device of claim 9, wherein the predicted photon arrival times relate to at least one of secondary, tertiary, and higher order light reflections.

11. The electronic device of claim 1, wherein the model-based information comprises predictions on multi-path light reflections.

12. The electronic device of claim 1, wherein the circuitry is configured to determine if the ToF information obtained from reflected light deviates from the model-based information.

13. The electronic device of claim 1, wherein the circuitry is configured to determine if the position of a reflection indicated by the ToF information deviates from a model-based prediction.

14. The electronic device of claim 1, wherein the circuitry is configured to perform the model-based prediction of light reflection based on a reconstructed 3D model of a captured scene.

15. The electronic device of claim 1, wherein the circuitry is configured to perform the model-based prediction of light reflection based on one or more vehicle models.

16. The electronic device of claim 15, wherein a vehicle model models parts of a vehicle that are not visible to the ToF imaging sensor in imaging information obtained from primary reflections.

17. The electronic device of claim 15, wherein the circuitry is configured to determine a vehicle model based on a 3D model of a scene.

18. The electronic device of claim 17, wherein a vehicle model models parts of a vehicle that are not present in the 3D model of the scene.

19. The electronic device of claim 1, wherein the circuitry is configured to emit light, and to obtain the ToF information from at least one of secondary, tertiary, and higher order reflections of this emitted light.

20. The electronic device of claim 1, wherein the circuitry is configured to emit light, and wherein the circuitry is configured to model the light path of this emitted light in order to obtain the model-based prediction of light reflection.

21. A method comprising detecting a non-line of sight object based on a comparison of ToF information obtained from reflected light received by a ToF imaging sensor with model-based information obtained from a model-based prediction of light reflection.

22. A computer program comprising instructions which are configured to, when executed on a processor, perform the method of claim 21.
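To make the claimed comparison more concrete for the dToF case (cf. claims 2 and 7 to 13), the Python sketch below flags photon-histogram returns that a ray-traced prediction of the reconstructed scene model does not explain; a strong, unexplained higher-order return is a candidate non-line of sight reflection. This is a simplified illustration under assumed parameters (bin width, peak threshold, matching tolerance) and a naive per-bin peak test, not the claimed implementation.

```python
import numpy as np

def unexplained_returns(measured_hist, bin_width_ns, predicted_times_ns,
                        tol_ns=2.0, peak_threshold=50):
    """Return arrival times of histogram peaks not matched by the model prediction.

    measured_hist: per-pixel dToF photon histogram (counts per time bin).
    predicted_times_ns: photon arrival times predicted by ray tracing the
    reconstructed 3D scene model, including higher-order reflections.
    """
    flagged = []
    for bin_idx, count in enumerate(measured_hist):
        if count < peak_threshold:
            continue  # treat low-count bins as noise
        t_ns = bin_idx * bin_width_ns
        # A return is "explained" if some predicted reflection lies within tol_ns.
        if not any(abs(t_ns - t_pred) <= tol_ns for t_pred in predicted_times_ns):
            flagged.append(t_ns)
    return flagged  # non-empty -> possible non-line of sight reflector

# Toy example: the model predicts one wall return at 20 ns; an additional
# strong return at 32 ns is not explained and is therefore flagged.
hist = np.zeros(64, dtype=int)
hist[20], hist[32] = 180, 90
print(unexplained_returns(hist, bin_width_ns=1.0, predicted_times_ns=[20.0]))  # [32.0]
```

For an iToF sensor the same comparison could equally be expressed in the phase domain, as discussed for Eq: 14 and Eq: 15 above.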
PCT/EP2023/083378 2022-11-28 2023-11-28 Electronic device and method WO2024115493A1 (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
EP22209931.9 2022-11-28

Publications (1)

Publication Number Publication Date
WO2024115493A1 (en) 2024-06-06


Similar Documents

Publication Publication Date Title
US10445928B2 (en) Method and system for generating multidimensional maps of a scene using a plurality of sensors of various types
CN108572663B (en) Target tracking
CN108693876B (en) Object tracking system and method for vehicle with control component
JP6984215B2 (en) Signal processing equipment, and signal processing methods, programs, and mobiles.
EP3283843B1 (en) Generating 3-dimensional maps of a scene using passive and active measurements
US11527084B2 (en) Method and system for generating a bird's eye view bounding box associated with an object
WO2017159382A1 (en) Signal processing device and signal processing method
CN111886626A (en) Signal processing apparatus, signal processing method, program, and moving object
US20210010814A1 (en) Robust localization
CN115244427A (en) Simulated laser radar apparatus and system
WO2017177651A1 (en) Systems and methods for side-directed radar from a vehicle
CN110691986B (en) Apparatus, method, and non-transitory computer-readable recording medium for computer vision
WO2019026715A1 (en) Control device, control method, program, and mobile unit
JP2023126642A (en) Information processing device, information processing method, and information processing system
US20220397675A1 (en) Imaging systems, devices and methods
WO2019163315A1 (en) Information processing device, imaging device, and imaging system
US20240071122A1 (en) Object recognition method and time-of-flight object recognition circuitry
WO2024115493A1 (en) Electronic device and method
CN113614782A (en) Information processing apparatus, information processing method, and program
US20230341558A1 (en) Distance measurement system
US20230316546A1 (en) Camera-radar fusion using correspondences
US20230075409A1 (en) Methods and systems for deterministic calculation of surface normal vectors for sparse point clouds
US11698270B2 (en) Method, system, and computer program product for iterative warping of maps for autonomous vehicles and simulators
US20220290996A1 (en) Information processing device, information processing method, information processing system, and program
US20220148283A1 (en) Information processing apparatus, information processing method, and program