WO2023214271A1 - Method for assistance in driving a vehicle, corresponding electronic control unit, vehicle, and computer program product


Info

Publication number
WO2023214271A1
Authority
WO
WIPO (PCT)
Application number
PCT/IB2023/054434
Other languages
French (fr)
Inventor
Marco Andreetto
Silvano Marenco
Marco Darin
Original Assignee
C.R.F. Società Consortile Per Azioni
Application filed by C.R.F. Società Consortile Per Azioni
Publication of WO2023214271A1

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60K ARRANGEMENT OR MOUNTING OF PROPULSION UNITS OR OF TRANSMISSIONS IN VEHICLES; ARRANGEMENT OR MOUNTING OF PLURAL DIVERSE PRIME-MOVERS IN VEHICLES; AUXILIARY DRIVES FOR VEHICLES; INSTRUMENTATION OR DASHBOARDS FOR VEHICLES; ARRANGEMENTS IN CONNECTION WITH COOLING, AIR INTAKE, GAS EXHAUST OR FUEL SUPPLY OF PROPULSION UNITS IN VEHICLES
    • B60K35/00 Arrangement of adaptations of instruments
    • B60K35/10
    • B60K35/22
    • B60K35/28
    • B60K35/29
    • B60K2360/1438
    • B60K2360/148
    • B60K2360/175
    • B60K2360/176
    • B60K2360/177
    • B60K2360/191
    • B60K2360/21
    • B60K2360/48


Abstract

A method for assistance in driving a vehicle comprises: i) receiving, from one or more sensors of the vehicle that are configured to detect the environment surrounding the vehicle, a signal representing the environment surrounding the vehicle; ii) applying object-detection processing to detect objects forming part of the environment surrounding the vehicle as a function of the signal received; iii) in response to detection of an object (7, 8, 9) in the environment surrounding the vehicle, imparting instructions to a user interface device of the vehicle to display on a display screen an image (F) of the environment surrounding the vehicle, highlighting the object (7, 8, 9) detected in the image; iv) imparting instructions to the user interface device to ask an occupant of the vehicle to associate to the object (7, 8, 9) highlighted in the displayed image (F), via data-input means of the user interface device of the vehicle, data indicative of one or more features of the highlighted object (7, 8, 9); and v) storing the data indicative of one or more features of the detected object (7, 8, 9) for a subsequent identification of the detected object.

Description

“Method for assistance in driving a vehicle, corresponding electronic control unit, vehicle, and computer program product”
TEXT OF THE DESCRIPTION
Field of the invention
The present invention relates in general to driving-assistance systems for motor vehicles. In particular, the invention regards a driving-assistance system that can be used both in a conventional motor vehicle and in an autonomous-driving motor vehicle, of the type comprising one or more sensors configured to detect the environment surrounding the vehicle and an electronic control unit configured to detect objects forming part of that environment.
Prior art
Known in the art, for example from documents EP 2 136 275 B1 and EP 3 586 211 B1 assigned to the present applicant, are driving-assistance systems for motor vehicles based on vision systems (for example, photographic or video cameras), sensor systems (for example, comprising radar sensors or LiDAR sensors), automotive data networks, and vehicle-to-vehicle (V2V) or vehicle-to-infrastructure (V2I) wireless communication systems. In particular, systems based on the data produced by processing the images detected by one or more cameras installed on the vehicle advantageously prove less expensive than systems that adopt a more complex and sophisticated sensor system.
Such driving-assistance systems include data-processing algorithms for processing the data collected by the sensors of the vehicle, for example the image data collected by a vision system and/or the distance data collected by one or more distance sensors, such as ultrasonic, LiDAR, or radar sensors. However advanced they may be, known data-processing algorithms may not guarantee high precision (for example, proper identification of obstacles and/or impediments along the path of the vehicle) in every context of use. In particular, such algorithms may be misled by the considerable variability of the conditions in which the vehicle is used, which makes it problematic to create an autonomous or semi-autonomous driving system based only on the data collected by the sensors (for example, based only on image data), even for use in private or controlled areas.
Object of the invention
The object of the present invention is to provide a method for assistance in driving a vehicle that makes use of the data detected by one or more sensors of the vehicle and that offers higher accuracy than known systems.
Summary of the invention
According to a first aspect, the subject of the invention is a method for assistance in driving a vehicle. The method comprises the steps of: i) receiving, from one or more sensors of the vehicle configured to detect the environment surrounding the vehicle, a signal representing the environment surrounding the vehicle; ii) applying object-detection processing as a function of the aforesaid signal received to detect objects in the environment surrounding the vehicle; iii) in response to detection of an object in the environment surrounding the vehicle, imparting instructions to a user interface device of the vehicle to display on a display screen an image of the environment surrounding the vehicle, highlighting the object detected in the image; iv) imparting instructions to the user interface device to ask an occupant of the vehicle to associate to the object highlighted in the displayed image, via data-input means of the user interface device of the vehicle, data indicative of one or more features of the object highlighted; and v) storing the data indicative of one or more features of the detected object for a subsequent identification of the detected object.
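Purely as an illustration of how steps i) to v) might be chained on the control unit, the following Python sketch shows one possible structure; all class and method names (DrivingAssistanceMethod, show_highlighted, ask_occupant_for_features, and so on) are hypothetical and are not taken from the patent.

```python
from dataclasses import dataclass, field

@dataclass
class DetectedObject:
    object_id: int
    bounding_box: tuple                           # (x, y, width, height) in image coordinates
    features: dict = field(default_factory=dict)  # occupant-supplied feature data

class DrivingAssistanceMethod:
    """Structural sketch of steps i)-v); sensor, detector, hmi, and storage
    are injected front-ends whose interfaces are assumed, not specified here."""

    def __init__(self, sensor, detector, hmi, storage):
        self.sensor = sensor      # camera, radar, or LiDAR front-end
        self.detector = detector  # object-detection algorithm
        self.hmi = hmi            # user interface device (screen + data input)
        self.storage = storage    # persistent store on the control unit

    def run_once(self):
        # i) receive a signal representing the environment surrounding the vehicle
        signal = self.sensor.capture()
        # ii) apply object-detection processing to the received signal
        detections = self.detector.detect(signal)
        for obj in detections:
            # iii) display the image with the detected object highlighted
            self.hmi.show_highlighted(signal, obj.bounding_box)
            # iv) ask the occupant to associate feature data to the highlighted object
            obj.features = self.hmi.ask_occupant_for_features(obj)
            # v) store the feature data for a subsequent identification of the object
            self.storage.save(obj)
```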
As will emerge in greater detail from the ensuing description, the fundamental idea underlying the present invention is to increase the capacity of “perception” of the surrounding environment by the vehicle by exploiting interaction with an occupant of the vehicle itself. This is obtained in practice by capturing, by means of one or more sensors (for example, a camera, a radar sensor, a LiDAR sensor, or the like), a set of spatial data (for example, one or more images of the scene surrounding the vehicle in the case of use of a camera, or one or more point clouds in the case of use of a sensor of a radar or LiDAR type), detecting in the aforesaid spatial data, by means of an object-recognition algorithm, one or more objects that the vehicle can use as reference points for localization and/or autonomous or semi-autonomous navigation, and asking an occupant of the vehicle to associate to the aforesaid detected objects a set of information that may be useful for improving the precision of localization and/or autonomous or semi-autonomous navigation of the vehicle, especially in the case where the object-recognition algorithm executed by the control unit of the vehicle is not able to infer this information from the spatial data themselves. For instance, the occupant of the vehicle (the user) can interact with the vehicle to indicate that a given object detected in an image (or in a point cloud) represents a reference point that is invariable in the environment in which the vehicle is moving, so that the vehicle can be localized with greater precision in the environment and/or can carry out autonomously a certain manoeuvre when it subsequently recognizes the presence of that same object. Alternatively, the occupant can interact with the vehicle to indicate that a given object detected as a function of the spatial data produced by the sensor of the vehicle represents an obstacle to be avoided so that the vehicle can avoid it autonomously when it subsequently recognizes the presence thereof.
In a preferred embodiment, the method comprises the step of imparting, following upon, and as a function of, a subsequent identification of the object in the environment surrounding the vehicle, instructions to one or more on-board systems of the vehicle to control the trajectory of movement of the vehicle as a function of the data indicative of one or more features of the detected object.
In a preferred embodiment, the data indicative of one or more features of the detected object comprise data that indicate whether the appearance of the object, as detected by the sensor of the vehicle, remains unchanged over time or else can change over time. The method hence comprises the steps of:
- carrying out steps i) to v) during a first manual execution of a recurrent low-speed manoeuvre of the vehicle or during a plurality of manual executions of the recurrent low-speed manoeuvre of the vehicle, thus storing manoeuvre data that describe the environments in which the recurrent low-speed manoeuvres have to be repeated during autonomous driving and comprise data regarding the spatial constraints (for example, obstacles and corresponding encumbrances) present in the environments in which the recurrent low-speed manoeuvres have to be repeated in autonomous-driving mode when these manoeuvres have been stored, and optionally geolocation data regarding the starting and arrival points of the recurrent low-speed manoeuvres; and
- following upon, and as a function of, a subsequent identification of an object associated to which is data that indicates that the appearance of the detected object remains unchanged over time in the environment surrounding the vehicle, imparting instructions to one or more on-board systems of the vehicle to control the trajectory of movement of the vehicle during execution in autonomous-driving mode of a recurrent low-speed manoeuvre.
Hence, advantageously, the occupant of the vehicle can in a sense teach the vehicle to recognize and use, from among all the possible (visual and/or non-visual) reference points present in the environment in which the vehicle is moving, only those reference points that are invariable in time (for example, the opening corresponding to a window or a garage door in a wall). The manoeuvre data and/or the geolocation data and/or the data of the spatial constraints stored by the vehicle for execution of a recurrent manoeuvre can hence be correlated to detection of an object that represents a “stable” or “invariable” reference point, thus improving the capacity of localization of the vehicle in the environment and consequently improving the precision of execution of the autonomous or semi-autonomous manoeuvres.
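A minimal sketch of how such manoeuvre data might be laid out is given below, assuming one record per object labelled by the occupant; the schema and the names (ManoeuvreData, LandmarkRecord, usable_landmarks) are illustrative assumptions, not the patent's actual data model.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class LandmarkRecord:
    """One object labelled by the occupant during a manual run (assumed schema)."""
    descriptor: bytes        # image or point-cloud signature used to re-identify the object
    invariable: bool         # True if its appearance was marked as unchanging over time

@dataclass
class ManoeuvreData:
    """Data stored for one recurrent low-speed manoeuvre (assumed schema)."""
    spatial_constraints: List[dict]                            # obstacles and their encumbrances
    landmarks: List[LandmarkRecord] = field(default_factory=list)
    start_geolocation: Optional[Tuple[float, float]] = None    # optional (lat, lon) of the start
    end_geolocation: Optional[Tuple[float, float]] = None      # optional (lat, lon) of the arrival

def usable_landmarks(manoeuvre: ManoeuvreData) -> List[LandmarkRecord]:
    # Only objects flagged invariable by the occupant serve as positioning
    # references when the manoeuvre is replayed in autonomous-driving mode.
    return [lm for lm in manoeuvre.landmarks if lm.invariable]
```

With a layout of this kind, replaying a stored manoeuvre would query usable_landmarks() so that only occupant-confirmed stable references feed the localization step.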
According to a further preferred characteristic, the data indicative of one or more features of the detected object comprise data that indicate whether the object is an obstacle to be avoided or not, and the method comprises the step of imparting instructions to one or more on-board systems of the vehicle, following upon, and as a function of, a subsequent identification of an object associated to which is data that indicate that the object is an obstacle to be avoided, in order to control the trajectory of movement of the vehicle during execution in autonomous-driving mode of a recurrent low-speed manoeuvre.
According to a further preferred characteristic, the data indicative of one or more features of the detected object comprise data that indicate whether the object is a stationary object or else an object that may undergo movement, and the method comprises the step of imparting instructions to one or more on-board systems of the vehicle, following upon, and as a function of, a subsequent identification of an object associated to which is data that indicate that the object is a stationary object, in order to control the trajectory of movement of the vehicle during execution in autonomous-driving mode of a recurrent low-speed manoeuvre.
According to a further preferred characteristic, the data indicative of one or more features of the detected object comprise data that indicate the type of the object (for example, a traffic light, a tree at the roadside, a postbox, or a bench located on a pavement, a pedestrian, another vehicle, etc.). Optionally, the method comprises the step of classifying the detected objects as objects the appearance of which is unchangeable over time or changeable over time (i.e., as reference points that are stable or not) as a function of the type assigned by the occupant of the vehicle: for example, a traffic light, a tree, a postbox, or a bench may be classified as stable reference points, whereas a pedestrian or another vehicle may be excluded from the set of the stable reference points.
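As a toy rendering of this type-based classification, a lookup of the kind below reproduces the examples given in the text; the type strings and the conservative default for unknown types are assumptions.

```python
# Assumed type strings; the patent gives these examples but fixes no vocabulary.
STABLE_TYPES = {"traffic_light", "tree", "postbox", "bench"}
UNSTABLE_TYPES = {"pedestrian", "vehicle"}

def is_stable_reference(object_type: str) -> bool:
    """Classify an occupant-assigned object type as a stable reference point."""
    if object_type in STABLE_TYPES:
        return True
    if object_type in UNSTABLE_TYPES:
        return False
    # Unknown types are conservatively excluded from the stable set.
    return False

assert is_stable_reference("postbox")
assert not is_stable_reference("pedestrian")
```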
According to a further preferred characteristic, the method comprises the steps of:
- imparting instructions to the user interface device of the vehicle to display on the display screen, before applying object-detection processing, an image of the environment surrounding the vehicle as a function of the signal representing the environment surrounding the vehicle;
- imparting instructions to the user interface device to ask an occupant of the vehicle to detect one or more objects in the image of the environment surrounding the vehicle and to associate to the aforesaid objects detected by the occupant, via the data-input means of the user interface device of the vehicle, data indicative of one or more features of the objects detected by the occupant; and
- storing the data indicative of one or more features of the objects detected by the occupant for a subsequent identification of the objects detected by the occupant.
According to a further preferred characteristic, the sensor of the vehicle comprises a camera configured to form an image of a scene surrounding the vehicle, and the method comprises:
- receiving a signal representing the image of the scene surrounding the vehicle;
- applying object-detection processing to the image of the scene surrounding the vehicle; and
- storing image data for a subsequent identification of the detected object.
Consequently, also according to the above preferred characteristics, the capacities of perception of the surrounding environment by the vehicle can be improved by involving an occupant of the vehicle, which mitigates the shortcomings of the object-detection algorithm implemented on the vehicle.
According to another aspect, the subject of the invention is an electronic control unit for a vehicle, configured for being connected to one or more on-board systems of the vehicle, to one or more sensors of the vehicle (for example, a camera, a radar sensor, and/or a LiDAR sensor), and to a user interface device of the vehicle. The sensor or sensors is/are configured to detect the environment surrounding the vehicle, and the user interface device comprises a display screen and means for input of data by an occupant of the vehicle. The electronic control unit is configured to carry out the method according to one or more embodiments.
According to a further aspect, the subject of the invention is a vehicle comprising an electronic control unit according to one or more embodiments, one or more on-board systems of the vehicle connected to the electronic control unit, one or more sensors connected to the electronic control unit and configured to detect the environment surrounding the vehicle, and a user interface device connected to the electronic control unit and comprising a display screen and means for input of data by an occupant of the vehicle. According to yet a further aspect, the subject of the invention is a computer program product that can be loaded into the memory of an electronic control unit for a motor vehicle and comprises instructions that, when the computer program product is run on the electronic control unit, result in execution of the method according to one or more embodiments by the electronic control unit.
Detailed description of the invention
Further characteristics and advantages of the invention will emerge from the ensuing description with reference to the annexed drawings, which are provided purely by way of non-limiting example and in which:
- Figure 1 is a block diagram of an autonomous-driving system of a motor vehicle;
- Figure 2 is a top plan view that shows a possible scenario of application of the present invention, with reference to an example in which a vehicle equipped with a camera approaches a crossroads and detects the presence of one or more objects;
- Figure 3 is a view exemplifying a possible image captured by the camera of the vehicle in the situation illustrated in Figure 2, where the presence of some objects is detected; and
- Figures 4 and 5 are views exemplifying possible images captured by the camera of a vehicle that learns a parking manoeuvre, where the presence of some objects that could be used as reference points for a localization algorithm is detected.
In the figures annexed hereto, corresponding parts are designated by the same reference numbers.
As anticipated, one or more embodiments find application in the field of autonomous-driving or assisted-driving vehicles equipped with one or more sensors, preferably including a camera, where the trajectory of movement of the vehicle and/or its capacity of localization in the environment are controlled by an electronic control unit as a function of the data obtained from said one or more sensors, for example from one or more images captured by the camera.
In the present description, reference will be made mainly to various embodiments in which the vehicle is equipped with a camera for perception of the surrounding environment. Of course, the same principles of the invention described herein may be applied also to other types of sensors, provided that these are able to supply a spatial representation of the environment surrounding the vehicle (for example, by producing a point cloud via a time-of-flight sensor, such as a radar or a LiDAR), in which an occupant of the vehicle can identify the presence of one or more objects.
In the above context, Figure 1 presents a block diagram of a motor vehicle autonomous-driving system, designated as a whole by the reference number 1, designed to get a motor vehicle, designated by the reference number 2, to perform manoeuvres in autonomous-driving mode or semi-autonomous-driving mode. As illustrated in Figure 1, the motor vehicle autonomous-driving system 1 comprises:
- motor vehicle on-board systems 3, which comprise, for example, a propulsion system, a braking system, a steering system, an infotainment system, and a sensor system designed to detect quantities regarding the motor vehicle 2, such as wheel angle, steering-wheel angle, yaw, longitudinal and lateral acceleration, position, etc.;
- a motor vehicle HMI (Human-Machine Interface) 4, through which the occupants of the motor vehicle 2 can interact with the motor vehicle autonomous-driving system 1; and
- an electronic control unit (ECU) 5 operatively connected to the motor vehicle on-board systems 3 and to the motor vehicle HMI 4 through an automotive on-board communication network 6, for example a CAN, a FlexRay, or the like.
Figures 2 and 3 represent an example of a possible application scenario, where the vehicle 2 is located in the proximity of a crossroads and is capturing a frontal image F of the scene in front of the vehicle. For instance, in the image F there may be displayed a first traffic light 7, a second traffic light 8, and another vehicle 9 that proceeds in the direction opposite to that of the vehicle 2. In general, the control unit 5 of the vehicle 2 receives, from its own camera, a signal representing the image F and applies object-detection processing to the image F, as exemplified in Figure 3, in which the traffic lights 7 and 8 and the vehicle 9 are enclosed in a dashed box to indicate that the recognition algorithm implemented on the vehicle 2 has recognized the presence thereof in the image F. When an object is detected in the image F, the control unit 5 of the vehicle 2 imparts instructions to the user interface device 4 of the vehicle 2 to display on a screen the image F, highlighting the object (or a plurality of objects, as exemplified in Figure 3) detected in the image F. For instance, the interface device 4 may comprise a conventional screen (possibly, of the touchscreen type) forming part of the infotainment system of the vehicle. Alternatively, the interface device 4 may comprise a head-up display projected on the windscreen of the vehicle, in such a way that it is not necessary to reproduce integrally the contents of the image F, but it is sufficient to project on the windscreen a set of visual elements that highlight the detected objects 7, 8, and 9 (for example, it is sufficient to project the dashed boxes of Figure 3 or colored shadings in areas corresponding to the detected objects).
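The highlighting itself could be as simple as drawing rectangles over the detected regions before the frame is sent to the screen or the head-up display. The sketch below uses OpenCV purely as an example drawing library; the frame size and box coordinates are invented.

```python
import cv2          # used here only as an example drawing library
import numpy as np

def highlight_detections(image: np.ndarray, boxes: list) -> np.ndarray:
    """Return a copy of the frame with each detected object boxed,
    analogous to the dashed boxes of Figure 3 (illustrative only)."""
    shown = image.copy()
    for (x, y, w, h) in boxes:
        cv2.rectangle(shown, (x, y), (x + w, y + h), color=(0, 255, 0), thickness=2)
    return shown

# Invented example: a blank 640x480 frame with two hypothetical detections
# standing in for the traffic lights 7 and 8.
frame = np.zeros((480, 640, 3), dtype=np.uint8)
overlay = highlight_detections(frame, [(100, 120, 40, 110), (480, 125, 40, 110)])
```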
In the case where the sensor with which the vehicle 2 is equipped is not a camera but a sensor of a different type (for example, a radar or LiDAR), the screen of the user interface device 4 may be configured to display the data collected by the sensor in other forms, for example showing a point cloud that represents the distance profile of the objects in front of the vehicle 2. Even though these data do not exactly reproduce the scene presented to the user of the vehicle, the user is in any case able to detect in them the presence of one or more objects.
When an occupant of the vehicle 2 (for example, the driver or a passenger) is provided with a graphic display of the objects detected in the image F, the control unit 5 imparts instructions to the interface device 4 so that the occupant of the vehicle 2 is asked to associate to the object highlighted in the image F data indicating one or more features of the highlighted object. This interaction of the occupant with the vehicle 2 is obtained via data-input means of the interface device 4, which may, for example, comprise a touch-sensitive screen or, in the case of a head-up display, a device for voice recognition and voice interaction with the vehicle 2. The invention thus envisages exploiting the interaction between the occupant and the vehicle 2 to improve the capacities of perception of the surrounding environment by the vehicle 2, thus compensating for, or mitigating, any possible shortcomings of the object-recognition system implemented in the vehicle. Accordingly, the control unit 5 of the vehicle 2 stores data that enable subsequent identification of the detected object and stores the data entered by the occupant of the vehicle, which indicate one or more features of the object. These stored features may vary in different embodiments of the invention, but they are in general features useful for controlling the localization and/or the trajectory of the vehicle 2 when the object is detected by the vehicle. Consequently, when the vehicle 2 subsequently (i.e., a second time) identifies an object that it has already detected in the past and for which the user has provided additional information (not detectable by the algorithm itself), the control unit 5 can impart instructions to the on-board systems 3 of the vehicle 2 to manage localization thereof and/or to control the trajectory of movement thereof as a function of detection of the “known” object and as a function of the data associated thereto.
The invention described herein is particularly advantageous in the case where the vehicle 2 is configured to carry out, in autonomous-driving or assisted-driving mode, one or more complex, recurrent, and low-speed manoeuvres, as described in document EP 3 586 211 B1 cited previously. In such cases, the control unit 5 of the vehicle 2 is configured for: identifying the complex, recurrent, and low-speed manoeuvres; locating the vehicle 2 within the environment in which such recurrent manoeuvres are carried out; and repeating said manoeuvres in automatic-driving mode. In particular, localization of the vehicle 2 in the environment (which may be a private or in any case a controlled area) can be carried out using algorithms of a SLAM (Simultaneous Localization And Mapping) type, in themselves known, which enable the vehicle to create and store a virtual representation of the environment, i.e., a sort of map of an area that may possibly not be covered by digital road maps. As is known in the art, SLAM algorithms seek significant reference points in the surrounding environment to be used as landmarks for localization of the vehicle 2 within the map. Typically, these reference points are objects or elements that can be readily distinguished from the background of the image, but at times these objects may be variable or “volatile”, i.e., their appearance (as detected by the vehicle) may vary over time, or else the objects may disappear altogether from the environment. Consequently, a SLAM algorithm implemented by the vehicle 2 may be misled and may fail to perform proper and precise localization of the vehicle if it makes use of these variable references.
Consequently, in a preferred embodiment of the invention, the occupant of the vehicle may be asked to indicate whether the objects detected in the data produced by one or more sensors of the vehicle 2 (for example, in the images F captured by a camera of the vehicle 2) are invariable or variable, i.e., if their appearance remains unchanged over time or else may undergo change over time, and the localization and/or trajectory of the vehicle 2 may be controlled by the control unit 5 using as reference points for a SLAM algorithm only those objects that have been indicated by the occupant of the vehicle as being invariable. In particular, the control unit 5 may be configured for carrying out the steps of recognizing objects and “learning” their features on the basis of the feedback of the user during a first manual execution of a manoeuvre or during a plurality of manual executions of the manoeuvre, thus storing a set of manoeuvre data that as a whole describe the environment in which the manoeuvre has to be repeated in autonomous-driving mode. These manoeuvre data may comprise, for example, geolocation data (such as GPS data) of the starting and arrival points of the manoeuvre, as well as data regarding the spatial constraints (for example, the obstacles and the corresponding encumbrances) present in the environment in which the manoeuvre has to be repeated in autonomous-driving mode, after the manoeuvre has been stored. When the control unit 5 of the vehicle 2, during execution of the manoeuvre in autonomous-driving mode, again recognizes the presence of a known object that has been indicated by the user as invariable over time, it may impart instructions to the on-board systems 3 of the vehicle 2 to control the trajectory of movement of the vehicle during the manoeuvre using the aforesaid known object as positioning reference for a SLAM algorithm.
Figures 4 and 5 illustrate a potential context of application of the invention, in which the vehicle 2 learns execution of a recurrent low-speed manoeuvre, such as entry into a private garage. Figure 4 illustrates how the vehicle 2 is able to form an image F’ when it is located in a manoeuvring lane within the garage space. The control unit 5 of the vehicle 2 is configured for identifying, in the image F’, some objects that could be used as reference points for a SLAM algorithm, such as the profile of a window 10 and the profile of a garage door 11, which in this case is half open. The control unit 5 is not, however, able to establish autonomously which of these objects constitute invariable and reliable reference points and which, instead, constitute variable and consequently unreliable reference points. Hence, according to the method of the invention, the control unit 5 of the vehicle 2 imparts commands to the user interface 4 to display on a screen the image F’ with the window 10 and the garage door 11 highlighted (for example, with grey filling in the figures annexed hereto) in such a way that the user can indicate the profile of the window 10 as an invariable reference point and the profile of the garage door 11 as a variable reference point (in so far as the garage door can assume various positions between complete opening and complete closing). The control unit 5 is configured for storing this information and associating it to the manoeuvre data learnt during manual execution of the manoeuvre in such a way as to use only the window 10, and not the garage door 11, as a spatial reference element during execution of a localization algorithm. A situation similar to the one described with reference to Figure 4 may recur when the vehicle 2 enters an individual garage of the garage space, as illustrated in Figure 5. In this case, the vehicle 2 forms an image F” when it is at the entrance to the garage. The control unit 5 identifies, also in the image F”, some objects that could be used as reference points for a SLAM algorithm, such as the profile of a cabinet 12 fixed to an end wall of the garage and the profile of a tire 13 resting on the garage floor. Again, the control unit 5 imparts commands to the user interface 4 to display on a screen the image F” in which the cabinet 12 and the tire 13 are highlighted, the user indicates the cabinet 12 as an invariable reference point and the tire 13 as a variable reference point, and the control unit 5 stores this information and associates it to the manoeuvre data learnt during manual execution of the manoeuvre. Subsequently, only the cabinet 12, and not the tire 13, will be used as a spatial reference element during execution of a localization algorithm inside the garage.
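In code, the selection of SLAM references among re-identified objects could reduce to a filter on the occupant's invariable/variable labels, as in the hypothetical sketch below; the identifiers merely echo the reference numerals of Figures 4 and 5.

```python
def select_slam_references(matches: list, labels: dict) -> list:
    """Keep only re-identified objects whose occupant label marks them invariable."""
    return [m for m in matches if labels.get(m, {}).get("invariable", False)]

# Hypothetical identifiers echoing the reference numerals of Figures 4 and 5.
labels = {
    "window_10": {"invariable": True},   # fixed opening in the wall
    "door_11":   {"invariable": False},  # garage door, can be open or closed
}
print(select_slam_references(["window_10", "door_11"], labels))  # ['window_10']
```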
In other preferred embodiments, the information that the user can associate to one or more objects detected by the vehicle 2 regards whether the object detected in the image is an obstacle to be avoided or not. In this case, the control unit 5 can control the vehicle 2 to carry out a manoeuvre, the trajectory of which avoids the objects that have been classified as obstacles.
In yet further embodiments, the information that the user can associate to one or more objects detected by the vehicle 2 regards whether the object detected in the image is stationary or may undergo movement. In this case, the control unit 5 can control the vehicle 2 to carry out a manoeuvre, the trajectory of which is established using as references only the objects classified as stationary.
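Purely by way of illustration, this embodiment and the preceding one may be sketched together as follows (a minimal Python sketch; the names LabelledObject and split_for_planning, and the flag fields, are hypothetical): objects labelled by the user as obstacles feed the avoidance constraints of the planned trajectory, while only objects labelled as stationary are retained as trajectory references.

from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class LabelledObject:
    object_id: int
    position: Tuple[float, float]  # position in the vehicle frame, in metres
    is_obstacle: bool              # occupant feedback: an obstacle to be avoided
    is_stationary: bool            # occupant feedback: does not undergo movement

def split_for_planning(objects: List[LabelledObject]) -> Tuple[List[LabelledObject], List[LabelledObject]]:
    """Partition the user-labelled objects for trajectory planning:
    obstacles constrain the trajectory (they must be avoided), while
    stationary objects serve as references for establishing the trajectory."""
    obstacles = [o for o in objects if o.is_obstacle]
    references = [o for o in objects if o.is_stationary]
    return obstacles, references

# Example: a bin left on the floor is avoided; a wall corner anchors the trajectory.
bin_on_floor = LabelledObject(1, (1.0, 2.0), is_obstacle=True, is_stationary=False)
wall_corner = LabelledObject(2, (4.0, 0.0), is_obstacle=False, is_stationary=True)
obstacles, references = split_for_planning([bin_on_floor, wall_corner])
assert obstacles == [bin_on_floor] and references == [wall_corner]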
According to a further preferred characteristic, the information that the user can associate to one or more objects detected by the vehicle 2 regards the type of the object (for example, a traffic light, a tree at the roadside, a postbox or a bench located on a pavement, a pedestrian, another vehicle, etc.). In this case, the control unit 5 can be configured for classifying the detected objects as invariable or variable reference points according to the type assigned by the user: for example, a traffic light, a tree, a postbox, or a bench can be catalogued as invariable and hence reliable reference points, whereas a pedestrian or another vehicle can be excluded from the set of reliable reference points.
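A purely illustrative sketch of this classification by type follows; the type strings, the two sets, and the function is_reliable_reference are hypothetical, not part of the disclosure.

# Hypothetical catalogue mapping user-assigned object types to their
# suitability as invariable (and hence reliable) reference points.
RELIABLE_TYPES = {"traffic_light", "tree", "postbox", "bench"}
MOBILE_TYPES = {"pedestrian", "vehicle"}

def is_reliable_reference(object_type: str) -> bool:
    """Catalogue a user-assigned type as an invariable reference point,
    excluding any type known to undergo movement."""
    if object_type in MOBILE_TYPES:
        return False
    return object_type in RELIABLE_TYPES

assert is_reliable_reference("traffic_light")
assert not is_reliable_reference("pedestrian")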
Obviously, the method according to the present invention may be applied not only to images captured by a front camera of the vehicle, but in general to any image of a scene surrounding the vehicle, for example a rear image captured by a rear camera or a lateral image captured by a lateral camera (for example, by means of 360-degree vision systems known per se). As already mentioned, the method may also be applied to data other than image data, such as the data produced by a radar or LiDAR sensor.
It will moreover be noted that, in one or more embodiments, the method according to the invention may advantageously be used to make up for possible shortcomings of the object-detection algorithm implemented by the control unit. For instance, in some contexts, the object-detection algorithm might not be able to identify any object in the images captured by the camera. In such cases, the method comprises not only the step of proposing to the user (by means of the vehicle interface) an image in which one or more objects to which given features are to be associated are already identified, but also the steps of displaying to the user the raw data collected by the sensor (for example, an image or a point cloud) and receiving an input from the user that identifies, within said raw data, the presence of one or more objects and associates thereto the features useful for using these objects as reference points for localization and/or control of the trajectory of the vehicle. In this way, the method according to the invention increases the perception capacity of the vehicle by detecting the presence of an object (for example, an obstacle or a stable reference point) that the autonomous-driving system would not otherwise have considered.
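A minimal sketch of this fallback is given below, assuming a hypothetical user-interface callback ask_user_to_annotate that displays the raw data and returns the occupant's annotations; the names UserAnnotation and objects_for_localization are likewise hypothetical.

from dataclasses import dataclass, field
from typing import Callable, Dict, List, Tuple

@dataclass
class UserAnnotation:
    """An object identified directly by the occupant in the raw sensor data."""
    bounding_box: Tuple[int, int, int, int]  # (x, y, width, height), in image pixels
    features: Dict[str, bool] = field(default_factory=dict)  # e.g. {"invariable": True}

def objects_for_localization(
    detected: List[UserAnnotation],
    ask_user_to_annotate: Callable[[], List[UserAnnotation]],
) -> List[UserAnnotation]:
    """Return the objects available as reference points for localization."""
    if detected:
        # Normal flow: the detector found objects, which are highlighted
        # and proposed to the occupant for feature assignment.
        return detected
    # Fallback flow: nothing was detected, so the raw data (an image or a
    # point cloud) are displayed and the occupant marks the objects directly,
    # extending the perception of the vehicle beyond the detector's output.
    return ask_user_to_annotate()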
Of course, without prejudice to the principle of the invention, the details of construction and the embodiments may vary widely with respect to what has been described and illustrated herein purely by way of example, without thereby departing from the scope of the present invention, as defined in the annexed claims.

Claims

1. A method for assistance in driving a vehicle (2), the method comprising:
i) receiving, from one or more sensors of the vehicle (2) configured to detect the environment surrounding the vehicle (2), a signal representing the environment surrounding the vehicle (2);
ii) applying object-detection processing as a function of the aforesaid received signal for detecting objects in the environment surrounding the vehicle (2);
iii) in response to detection of an object (7, 8, 9; 10, 11; 12, 13) in said environment surrounding the vehicle (2), imparting instructions to a user interface device (4) of the vehicle (2) to display on a display screen an image (F; F’; F”) of the environment surrounding the vehicle (2), highlighting said object (7, 8, 9; 10, 11; 12, 13) detected in the image;
iv) imparting instructions to said user interface device (4) to ask an occupant of the vehicle (2) to associate to said object (7, 8, 9; 10, 11; 12, 13) highlighted in said displayed image (F; F’; F”), via data-input means of the user interface device (4) of the vehicle (2), data indicative of one or more features of said highlighted object (7, 8, 9; 10, 11; 12, 13); and
v) storing said data indicative of one or more features of said detected object (7, 8, 9; 10, 11; 12, 13) for a subsequent identification of said detected object.
2. The method of claim 1, comprising:
vi) following upon, and as a function of, a subsequent identification of said object (7, 8; 10; 12) in the environment surrounding the vehicle (2), imparting instructions to one or more on-board systems (3) of the vehicle (2) to control the trajectory of movement of the vehicle (2) as a function of said data indicative of one or more features of said detected object (7, 8; 10; 12).
3. The method of claim 1 or claim 2, wherein said data indicative of one or more features of said detected object (7, 8, 9; 10, 11; 12, 13) comprise data that indicate whether the appearance of said object as detected by said one or more sensors remains unchanged over time or else may undergo change over time, the method comprising:
- carrying out steps i) to v) during a first manual execution of a recurrent low-speed manoeuvre of the vehicle (2) or during a plurality of manual executions of said recurrent low-speed manoeuvre of the vehicle (2), storing manoeuvre data that describe the environments in which the recurrent low-speed manoeuvre has to be repeated in autonomous-driving mode and comprising data on the spatial constraints present in the environments in which the recurrent low-speed manoeuvre has to be repeated in autonomous-driving mode when the manoeuvre has been stored; and
- following upon, and as a function of, a subsequent identification of an object (7, 8; 10; 12), associated to which are data that indicate that the appearance of the detected object remains unchanged over time in the environment surrounding the vehicle, imparting instructions to one or more on-board systems (3) of the vehicle (2) to control the trajectory of movement of the vehicle (2) during execution of said recurrent low-speed manoeuvre in autonomous-driving mode.
4. The method of claim 3, wherein said data indicative of one or more features of said detected object (7, 8, 9; 10, 11; 12, 13) comprise data that indicate whether the object is an obstacle to be avoided or not, the method comprising:
- following upon, and as a function of, a subsequent identification of an object, associated to which are data that indicate that the object is an obstacle to be avoided, imparting instructions to said one or more on-board systems (3) of the vehicle (2) to control the trajectory of movement of the vehicle (2) during execution of said recurrent low-speed manoeuvre in autonomous-driving mode.
5. The method of claim 3 or claim 4, wherein said data indicative of one or more features of said detected object (7, 8, 9; 10, 11; 12, 13) comprise data that indicate whether the object is a stationary object or else an object that may undergo movement, the method comprising:
- following upon, and as a function of, a subsequent identification of an object, associated to which are data that indicate that the object is a stationary object, imparting instructions to said one or more on-board systems (3) of the vehicle (2) to control the trajectory of movement of the vehicle (2) during execution of said recurrent low-speed manoeuvre in autonomous-driving mode.
6. The method of any of the preceding claims, comprising:
- imparting instructions to said user interface device (4) of the vehicle (2) to display on said display screen, before applying said object-detection processing, an image (F; F’; F”) of the environment surrounding the vehicle (2) as a function of said signal representing the environment surrounding the vehicle (2);
- imparting instructions to said user interface device (4) to ask an occupant of the vehicle (2) to detect one or more objects in said image (F; F’; F”) of the environment surrounding the vehicle (2) and to associate to said one or more objects detected by the occupant, via said data-input means of the user interface device (4) of the vehicle (2), data indicative of one or more features of said one or more objects detected by the occupant; and
- storing said data indicative of one or more features of said one or more objects detected by the occupant for a subsequent identification of said one or more objects detected by the occupant.
7. The method of any of the preceding claims, wherein said one or more sensors of the vehicle (2) comprise a camera configured to form an image (F; F’; F”) of a scene surrounding the vehicle, the method comprising:
- receiving a signal representing said image (F; F’; F”) of the scene surrounding the vehicle;
- applying said object-detection processing to said image (F; F’; F”) of the scene surrounding the vehicle; and
- storing image data for a subsequent identification of said detected object (7, 8, 9; 10, 11 ; 12, 13).
8. An electronic control unit (5) for a vehicle (2), configured for being connected to one or more on-board systems (3) of the vehicle (2), to one or more sensors of the vehicle, and to a user interface device (4) of the vehicle (2), wherein said one or more sensors are configured to detect the environment surrounding the vehicle (2), and said user interface device (4) comprises a display screen and means for input of data by an occupant of the vehicle, wherein the electronic control unit (5) is configured to implement the method according to any of the preceding claims.
9. A vehicle (2) comprising an electronic control unit (5) according to claim 8, one or more on-board systems (3) of the vehicle (2) connected to said electronic control unit (5), one or more sensors connected to said electronic control unit (5) and configured to detect the environment surrounding the vehicle (2), and a user interface device (4) connected to said electronic control unit (5) and comprising a display screen and means for input of data by an occupant of the vehicle (2).
10. A computer program product that can be loaded into the memory of an electronic control unit (5) for a vehicle (2) and comprises instructions that, when the computer program product is run on the electronic control unit (5), result in execution of the method of any of claims 1 to 7 by the electronic control unit (5).
PCT/IB2023/054434 2022-05-02 2023-04-28 Method for assistance in driving a vehicle, corresponding electronic control unit, vehicle, and computer program product WO2023214271A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
IT102022000008807 2022-05-02
IT202200008807 2022-05-02

Publications (1)

Publication Number Publication Date
WO2023214271A1 (en)

Family

ID=82482831

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2023/054434 WO2023214271A1 (en) 2022-05-02 2023-04-28 Method for assistance in driving a vehicle, corresponding electronic control unit, vehicle, and computer program product

Country Status (1)

Country Link
WO (1) WO2023214271A1 (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2136275B1 (en) 2008-06-18 2014-05-21 C.R.F. Società Consortile per Azioni Automatic driving system for automatically driving a motor vehicle along predefined paths
EP3586211B1 (en) 2018-03-06 2020-07-15 C.R.F. Società Consortile per Azioni Automotive autonomous driving to perform complex recurrent low speed manoeuvres
US20200209875A1 (en) * 2019-01-02 2020-07-02 Samsung Electronics Co., Ltd. System and method for training and operating an autonomous vehicle
US20210018916A1 (en) * 2019-07-18 2021-01-21 Nissan North America, Inc. System to Recommend Sensor View for Quick Situational Awareness
US20220026226A1 (en) * 2020-07-21 2022-01-27 Ag Leader Technology Visual Boundary Segmentations And Obstacle Mapping For Agricultural Vehicles

Similar Documents

Publication Publication Date Title
US10481609B2 (en) Parking-lot-navigation system and method
US10528055B2 (en) Road sign recognition
US11703883B2 (en) Autonomous driving device
US11273821B2 (en) Parking assistance method and parking assistance device
US20180292834A1 (en) Trajectory setting device and trajectory setting method
US11747814B2 (en) Autonomous driving device
DE112019001657T5 (en) SIGNAL PROCESSING DEVICE AND SIGNAL PROCESSING METHOD, PROGRAM AND MOBILE BODY
CN109814130B (en) System and method for free space inference to separate clustered objects in a vehicle awareness system
US11829131B2 (en) Vehicle neural network enhancement
CN105684039B (en) Condition analysis for driver assistance systems
US11912295B2 (en) Travel information processing apparatus and processing method
US11104356B2 (en) Display device and method for a vehicle
US20220366175A1 (en) Long-range object detection, localization, tracking and classification for autonomous vehicles
CN112435460A (en) Method and system for traffic light status monitoring and traffic light to lane assignment
CN115520100A (en) Automobile electronic rearview mirror system and vehicle
CN112444258A (en) Method for judging drivable area, intelligent driving system and intelligent automobile
WO2023214271A1 (en) Method for assistance in driving a vehicle, corresponding electronic control unit, vehicle, and computer program product
JP2017208040A (en) Automatic operation control system for mobile entity
CN114771510A (en) Parking method, parking system and electronic device based on route map
US11590978B1 (en) Assessing perception of sensor using known mapped objects
CN113815627A (en) Method and system for determining a command of a vehicle occupant
CN115996869A (en) Information processing device, information processing method, information processing system, and program
US20230022104A1 (en) Object detection device
US11640173B2 (en) Control apparatus, control method, and computer-readable storage medium storing program
US20220250652A1 (en) Virtual lane methods and systems

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23725813

Country of ref document: EP

Kind code of ref document: A1