WO2018130689A1 - Method for capturing an environmental region of a motor vehicle with adaptation of a region of interest depending on a trailer, computing device, camera system as well as motor vehicle


Info

Publication number
WO2018130689A1
Authority
WO
WIPO (PCT)
Prior art keywords
trailer
motor vehicle
region
image
determined
Application number
PCT/EP2018/050886
Other languages
French (fr)
Inventor
Ehsan CHAH
Michael Starr
Original Assignee
Connaught Electronics Ltd.
Application filed by Connaught Electronics Ltd.
Publication of WO2018130689A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 7/00 Television systems
    • H04N 7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N 7/183 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a single remote source
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60R VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R 1/00 Optical viewing arrangements; Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • B60R 1/20 Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • B60R 1/22 Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles for viewing an area outside the vehicle, e.g. the exterior of the vehicle
    • B60R 1/23 Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles for viewing an area outside the vehicle, e.g. the exterior of the vehicle with a predetermined field of view
    • B60R 1/26 Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles for viewing an area outside the vehicle, e.g. the exterior of the vehicle with a predetermined field of view to the rear of the vehicle
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/20 Analysis of motion
    • G06T 7/215 Motion-based segmentation
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/50 Context or environment of the image
    • G06V 20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • B60R 2300/00 Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle
    • B60R 2300/70 Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by an event-triggered choice to display a specific image among a selection of captured images
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30248 Vehicle exterior or interior
    • G06T 2207/30252 Vehicle exterior; Vicinity of vehicle

Definitions

  • temporally consecutive images are continuously received and the trailer area is continuously updated based on the images.
  • the images are captured during the motion of the motor vehicle.
  • the motion vector can then be determined for each of the image elements. This allows continuously updating or refining the trailer area.
  • the trailer area can be precisely determined in the images. In this manner, it also becomes possible that the trailer area can be determined for different trailers or trailer types, which can be attached to the motor vehicle.
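The continuous refinement described above can be sketched as a running per-element trailer confidence that is blended with each new frame's observation. This is an illustrative sketch, not taken from the patent; the smoothing factor and decision threshold are assumed values.

```python
# Sketch: continuously refine the trailer area over consecutive frames by
# exponentially smoothing a per-element trailer confidence.
# The smoothing factor alpha and the threshold are illustrative assumptions.

def update_confidence(confidence, is_trailer_now, alpha=0.5):
    """Blend the new per-element observation into the running confidence."""
    return [alpha * (1.0 if hit else 0.0) + (1 - alpha) * c
            for c, hit in zip(confidence, is_trailer_now)]

def trailer_mask(confidence, threshold=0.6):
    """Per-element decision: element belongs to the trailer area."""
    return [c >= threshold for c in confidence]

# Three frames of observations for four image elements; element 1 is seen
# as trailer in every frame, element 3 only once (e.g. a passing object).
confidence = [0.0, 0.0, 0.0, 0.0]
for obs in ([False, True, False, True],
            [False, True, False, False],
            [False, True, False, False]):
    confidence = update_confidence(confidence, obs)

print(trailer_mask(confidence))  # [False, True, False, False]
```

Smoothing over frames is one way to make the trailer area stable for different trailers and trailer types, since a single noisy frame cannot flip the mask.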
  • a stay area is determined in the at least one image, which describes the trailer with a predetermined likelihood, and the trailer area is determined based on the stay area.
  • the image received by the computing device describes the environmental region of the motor vehicle.
  • the mounting position of the camera on the motor vehicle as well as the capturing range of the camera are known.
  • it can be determined in which area of the image the trailer can be present.
  • it can for example be taken into account that the trailer is connected to the tow coupling, which is located centrally in the rear area of the motor vehicle.
  • the stay area, in which the trailer is located with high likelihood, can be taken into account in determining the trailer area. This allows reliable determination of the trailer area.
  • a trailer angle between the motor vehicle and the trailer is continuously determined and the trailer area is determined depending on the trailer angle.
  • the trailer angle in particular describes an angle between the longitudinal axis of the motor vehicle and the longitudinal axis of the trailer.
  • the trailer angle can be determined by a corresponding sensor, which is disposed in the tow coupling of the motor vehicle or the drawbar of the trailer.
  • the trailer angle can also be determined by a corresponding environmental sensor of the motor vehicle.
  • the motion of the motor vehicle and in particular a steering angle or yaw rate of the motor vehicle are continuously acquired. In this manner, the trailer angle can be estimated. If the trailer angle is known, it can be determined where the trailer is located relative to the motor vehicle. Thereby, it can also be determined where the picture of the trailer is located within the image. This can be taken into account in determining the trailer area. This allows precise determination of the trailer area.
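The patent does not give equations for this estimate. As an illustration, a standard kinematic single-track tow model (not from the patent) can propagate the trailer angle from vehicle speed and steering angle; the wheelbase, drawbar length and all numeric values below are assumed.

```python
# Sketch: estimate the trailer angle from vehicle speed and steering angle
# with a simple kinematic tow model (hitch assumed on the rear axle):
#   d(alpha)/dt = (v / L) * tan(delta) - (v / d) * sin(alpha)
# where alpha is the vehicle/trailer angle, delta the steering angle,
# L the vehicle wheelbase and d the trailer drawbar length.
# Model and parameter values are illustrative assumptions.

import math

def update_trailer_angle(alpha, v, delta, dt, wheelbase=2.8, drawbar=3.5):
    """One Euler integration step of the trailer-angle estimate (radians)."""
    alpha_dot = (v / wheelbase) * math.tan(delta) - (v / drawbar) * math.sin(alpha)
    return alpha + alpha_dot * dt

# Driving straight (delta = 0): the trailer angle decays toward zero,
# i.e. the trailer lines up behind the vehicle.
alpha = math.radians(20.0)
for _ in range(100):
    alpha = update_trailer_angle(alpha, v=5.0, delta=0.0, dt=0.05)

print(round(math.degrees(alpha), 1))
```

In practice such a dead-reckoned estimate would be fused with the coupling sensor or the image-based trailer area, since pure integration drifts.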
  • a connection signal is received.
  • the computing device can for example receive this connection signal via a data bus of the motor vehicle.
  • This connection signal can be emitted if the trailer is electrically or mechanically connected to the motor vehicle. Thus, it can be recognized in reliable manner that the trailer is coupled to the motor vehicle.
  • the objects in the region of interest are recognized by means of an object recognition algorithm. It can also be provided that a machine vision method is used to recognize the objects in the region of interest. Thus, other traffic participants in the environment of the motor vehicle can for example be reliably recognized. In particular, it becomes possible to recognize pedestrians in the environmental region of the motor vehicle.
  • a computing device for a camera system is adapted for performing a method according to the invention and the advantageous configurations thereof.
  • the computing device can for example be a programmable computer such as a digital signal processor, microcontroller or the like. Accordingly, a computer program can be provided, which is for example stored on a storage medium, wherein the program is programmed for executing the method described here when it is executed on the computing device.
  • a camera system according to the invention for a motor vehicle includes a computing device according to the invention as well as at least one camera. Images of the environmental region can be provided by the camera and transferred to the computing device.
  • the camera system includes a plurality of cameras. These cameras can then be disposed distributed at the motor vehicle.
  • a motor vehicle according to the invention includes a camera system according to the invention.
  • the motor vehicle can be formed as a passenger car or as a utility vehicle.
  • Fig. 1 a motor vehicle according to an embodiment of the invention, which is connected to a trailer and includes a camera system;
  • Fig. 2 an image, which is provided by a camera of the camera system, wherein the image is divided into a plurality of image elements, wherein a motion vector is associated with each of the image elements;
  • Fig. 3 a further image, which is provided by a camera, wherein a region of interest is defined in the image;
  • Fig. 4 the image of Fig. 3, in which a region of interest is defined according to an embodiment of the invention.
  • Fig. 1 shows a motor vehicle 1 according to an embodiment of the present invention in a plan view.
  • the motor vehicle 1 is formed as a passenger car.
  • the motor vehicle 1 is connected to a trailer 9.
  • the motor vehicle 1 together with the trailer 9 constitutes a vehicle/trailer combination.
  • the motor vehicle 1 includes a camera system 2.
  • the camera system 2 in turn includes a computing device 3, which can for example be constituted by an electronic controller of the motor vehicle 1.
  • the camera system 2 includes at least one camera 4.
  • the camera system 2 includes four cameras 4, which are disposed distributed at the motor vehicle 1.
  • one of the cameras 4 is disposed in a rear area 5 of the motor vehicle 1.
  • one of the cameras 4 is disposed in a front area 7 of the motor vehicle 1 and the remaining two cameras 4 are disposed in a respective lateral area 6, in particular in an area of the wing mirrors.
  • the number and arrangement of the cameras 4 of the camera system 2 are to be purely exemplarily understood.
  • An environmental region 8 of the motor vehicle 1 can be captured by the cameras 4.
  • the four cameras 4 are formed identical in construction.
  • images 10 or an image sequence can be provided by the cameras 4, which describe the environmental region 8.
  • These images 10 can be transferred from the cameras 4 to the computing device 3.
  • a display device of the motor vehicle 1, not illustrated here, can be controlled such that the images 10 of the cameras 4 can be displayed to the driver.
  • objects can be recognized by the computing device 3.
  • the camera system 2 serves for assisting the driver of the motor vehicle 1 in driving the motor vehicle 1.
  • the camera system 2 can for example be a so-called electronic rearview mirror or a parking assistance system or another system.
  • Fig. 2 shows an image 10, which is provided by one of the cameras 4 of the camera system 2.
  • the image 10 is provided by the camera 4, which is located in the rear area 5 of the motor vehicle 1.
  • the image 10 shows at least a part of the trailer 9.
  • a trailer area 11 is to be recognized within the image 10, which is associated with the trailer 9.
  • a plurality of image elements 12 is defined in the image 10.
  • the image elements 12 are rectangularly or squarely designed.
  • for each of the image elements 12, a motion vector 13 is determined.
  • temporally consecutive images 10 are received from the camera 4 by the computing device 3.
  • the image elements 12 of the temporally consecutive images 10 are compared to each other.
  • the respective motion vector 13 can then be determined.
  • image elements 12 are apparent, which can be associated with a first group 14. These image elements 12 of the first group 14 presently describe a roadway 18, on which the motor vehicle 1 and the trailer 9 move. The motion vectors 13 of these image elements 12 substantially describe the same motion speed and the same direction of motion. Furthermore, image elements 12 are present, which can be associated with a second group 15. The motion vectors 13 of these image elements 12 differ from the motion vectors 13 of the image elements 12 of the first group 14 in particular based on their direction of motion. In addition, image elements 12 can be associated with a third group 16.
  • the motion vectors 13 of these image elements 12 are characterized in that they describe a lower motion speed compared to the image elements 12 of the first group 14 and of the second group 15. Due to the association of the image elements 12 with the groups 14, 15, 16, the trailer area 11 can now be determined, which describes the trailer 9.
  • the image elements 12 of the first group 14 are associated with the roadway 18.
  • the image elements 12 of the second group 15 and of the third group 16 can first be associated with the trailer 9.
  • a stay area 17 is determined, which shows the trailer 9 in the image 10 with a high likelihood.
  • this stay area 17 describes the area, in which the trailer 9 is imaged in the image 10 with high likelihood.
  • the trailer 9 is connected to a tow coupling, which is located centrally in the rear area 5 of the motor vehicle 1.
  • a trailer angle α is continuously determined.
  • the trailer angle α describes the angle between a longitudinal axis of the motor vehicle 1 and a longitudinal axis of the trailer 9.
  • a corresponding sensor can be used.
  • the steering angle of the motor vehicle 1 can also be continuously determined and the trailer angle α can be determined therefrom.
  • based on the trailer angle α, it can be determined where the trailer 9 is located relative to the motor vehicle 1.
  • the information about the stay area 17 and/or the trailer angle α can be used to correct or refine the association of the image elements 12 with the trailer area 11. For example, it can thus be determined that only the image elements 12 of the third group 16 are to be associated with the trailer area 11.
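This refinement step can be sketched as filtering the motion-based candidate elements against a stay area that is shifted sideways with the trailer angle. The shift model, the gain and all values below are illustrative assumptions, not patent details.

```python
# Sketch: refine the trailer-area association by keeping only those candidate
# element centres that fall inside a stay area shifted with the trailer angle.
# Geometry (linear sideways shift with sin(alpha)) and gain are assumptions.

import math

def refine_trailer_area(candidates, stay_box, alpha, width, gain=0.5):
    """Keep candidate element centres (x, y) inside the angle-shifted stay area."""
    x0, y0, x1, y1 = stay_box
    shift = int(gain * width * math.sin(alpha))   # trailer swings sideways
    x0, x1 = x0 + shift, x1 + shift
    return [(x, y) for x, y in candidates if x0 <= x < x1 and y0 <= y < y1]

# Candidates from the motion grouping; only the one inside the stay area
# survives, e.g. discarding a slow-moving distant object elsewhere in view.
candidates = [(600, 500), (100, 500), (700, 200)]
stay_box = (400, 300, 900, 800)   # (x0, y0, x1, y1)
print(refine_trailer_area(candidates, stay_box, alpha=0.0, width=1280))
# [(600, 500)]
```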
  • Fig. 3 shows an image 10, which is provided by the camera 4, according to a further embodiment.
  • a region of interest 19 is determined within the image 10. Objects can then for example be recognized within this region of interest 19 and/or machine vision methods can be applied.
  • the region of interest 19 is determined such that it encompasses the picture of the roadway 18.
  • the region of interest 19 excludes areas of the image 10, which describe the motor vehicle 1 .
  • the region of interest 19 also encompasses the trailer area 11.
  • Fig. 4 shows the image 10 according to Fig. 3 in a further embodiment.
  • the trailer area 11 was taken into account in determining the region of interest 19.
  • the region of interest 19 is determined such that it is different from the trailer area 11 or excludes the trailer area 11.
  • object recognition algorithms or machine vision methods can then be applied.
  • the trailer 9 can for example be prevented from being erroneously recognized as an object.
  • the information with respect to the determined region of interest 19 can also be provided to other algorithms or functional units which use the images 10 of the camera 4.

Abstract

The invention relates to a method for capturing an environmental region (8) of a motor vehicle (1), in which at least one image (10) captured by a camera (4) is received, wherein the at least one image (10) describes the environmental region (8), a region of interest (19) is determined in the image (10) and objects are captured within the region of interest (19), wherein if it is recognized that a trailer (9) is connected to the motor vehicle (1), a trailer area (11) is determined in the at least one image (10), which describes the trailer (9), and the region of interest (19) is determined based on the trailer area (11).

Description

Method for capturing an environmental region of a motor vehicle with adaptation of a region of interest depending on a trailer, computing device, camera system as well as motor vehicle
The present invention relates to a method for capturing an environmental region of a motor vehicle, in which at least one image captured by a camera is received, wherein the at least one image describes the environmental region, a region of interest is determined in the image and objects are captured within the region of interest. In addition, the present invention relates to a computing device for a camera system. Moreover, the present invention relates to a camera system with such a computing device. Finally, the present invention relates to a motor vehicle with such a camera system.
Presently, the interest is directed to methods by which an environmental region of a motor vehicle can be captured with the aid of at least one camera. Therein, images of the environmental region can be provided by the at least one camera. With the aid of corresponding object recognition algorithms or machine vision methods, objects can then be recognized in these images. Therein, it is further known from the prior art that regions of interest are determined in the images and used for further processing. Thus, areas which are not of interest can be disregarded. Depending on the application, these disregarded areas can for example describe the motor vehicle, the sky or the like. In this way, memory demand can be reduced and computing power can be saved.
If a trailer is connected to the motor vehicle, the region of interest, which has been determined in the image, can be blocked by the trailer. For example, the region of interest can include areas in the image which show components of the trailer, for example the drawbar or the coupling. This can impair a machine vision method or an object recognition algorithm.
In the context of methods for recognizing objects with the aid of ultrasonic sensors, it is known from the prior art that an ultrasonic sensor associated with a trailer device is not operated in transmitting operation, but only in receiving operation. Thus, a tow coupling, for example, can be prevented from being erroneously recognized as an object. Such a method is for example described in WO 2015/193060 A1. It is the object of the present invention to demonstrate a solution as to how an environmental region of a motor vehicle can be reliably captured with the aid of a camera even with a trailer connected to the motor vehicle.
According to the invention, this object is solved by a method, by a computing device, by a camera system as well as by a motor vehicle having the features according to the respective independent claims. Advantageous developments of the invention are the subject matter of the dependent claims.
According to an embodiment of a method for capturing an environmental region of a motor vehicle, at least one image captured by a camera is received. Therein, the image in particular describes the environmental region. In particular, a region of interest is determined in the image. Furthermore, objects are preferably captured in the region of interest. Therein, it is preferably provided that if it is recognized that a trailer is connected to the motor vehicle, a trailer area is preferably determined in the at least one image, which in particular describes the trailer. Furthermore, the region of interest is preferably determined based on the trailer area.
A method according to the invention serves for capturing an environmental region of a motor vehicle. Herein, at least one image captured by a camera is received, wherein the at least one image describes the environmental region. Furthermore, a region of interest is determined in the image and objects are captured within the region of interest. If it is recognized that a trailer is connected to the motor vehicle, a trailer area is determined in the at least one image, which describes the trailer. Furthermore, the region of interest is determined based on the trailer area.
The environmental region of the motor vehicle is to be captured with the aid of the method. The method can be performed by a computing device of a camera system. This computing device can be connected to at least one camera of the camera system for data transfer. Thus, the at least one image captured by the camera and describing the environmental region can be transferred to the computing device. Preferably, it is provided that multiple temporally consecutive images are captured by the camera and transferred to the computing device. With the aid of the computing device, a region of interest (ROI) can then be determined within the at least one image. This region of interest can be determined such that it includes the areas describing the roadway. In addition, the region of interest can be determined such that it does not include areas, which describe the motor vehicle itself or the sky. Therein, the region of interest in particular describes a section of the at least one image. Objects can then be recognized in this region of interest.
According to an essential aspect of the present invention, it is provided that if it is recognized that the trailer is connected to the motor vehicle, the trailer area is determined in the at least one image. Therein, the image can in particular originate from a camera of the motor vehicle, which is disposed in a rear area of the motor vehicle or by means of which an environmental region behind the motor vehicle in direction of travel is captured. The trailer area, which is determined within the image, shows at least a part of the trailer or the area, which is blocked by the trailer. After the trailer area has been defined within the image, it can be taken into account in determining the region of interest. Thus, the trailer can for example be prevented from being taken into account by an object recognition algorithm or in a machine vision method. Moreover, the trailer or parts thereof can be prevented from being erroneously recognized as an object. Thereby, the false positive rate can be reduced. Overall, it can therefore be achieved that objects in the environmental region are also reliably recognized if the trailer is connected to the motor vehicle.
Preferably, the region of interest is determined such that the region of interest is different from the trailer area. In other words, the region of interest can exclude the trailer area. Herein, it can for example be provided that an original region of interest is first determined, which is usually used if a trailer is not located at the motor vehicle. Subsequently thereto, the trailer area, which is associated with the trailer, can be determined. For determining the region of interest, the parts of the original region of interest can then be selected, which are different from the trailer area. It can also be provided that the trailer area is first determined and subsequently thereto the region of interest is determined based on the trailer area. Thus, the trailer or parts thereof can overall be reliably prevented from being taken into account in capturing the environmental region. Herein, it can in particular be taken into account that objects cannot be located in the area occupied by the trailer.
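A minimal sketch of this determination, assuming the regions are represented as coarse boolean grids of image elements; the grid size and the example masks below are invented for illustration:

```python
# Sketch: derive the adapted region of interest (ROI) as the original ROI
# minus the trailer area, on a coarse grid of image elements.
# Grid size and the example masks are illustrative assumptions.

def adapt_roi(original_roi, trailer_area):
    """Per-cell logical difference: keep ROI cells not covered by the trailer."""
    return [[roi and not trailer
             for roi, trailer in zip(roi_row, trailer_row)]
            for roi_row, trailer_row in zip(original_roi, trailer_area)]

# 3x4 grid: True marks cells belonging to each region.
original_roi = [[True, True, True, True],
                [True, True, True, True],
                [False, False, False, False]]   # bottom row: own vehicle body
trailer_area = [[False, True, True, False],
                [False, True, True, False],
                [False, False, False, False]]   # centre columns: trailer

adapted = adapt_roi(original_roi, trailer_area)
print(adapted[0])  # [True, False, False, True]
```

The same two-step order as in the text applies: either intersect a precomputed original ROI with the complement of the trailer area, or build the ROI directly once the trailer area is known.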
In an embodiment, at least two temporally consecutive images are received, a plurality of image elements is respectively associated with the at least two images, a motion vector is determined for each of the image elements and the image elements are associated with the trailer area depending on their motion vector. Therein, it is in particular provided that the temporally consecutive images are captured by the camera while the motor vehicle is moved. The temporally consecutive images, which are received, are respectively divided into multiple image elements. Therein, each of the image elements includes at least one pixel of the image. Therein, the corresponding image elements of the temporally consecutive images can be compared to each other. This allows determining a motion vector for each of the image elements. The motion vector in particular describes a motion of the object, which is imaged by the image element, between the at least two temporally consecutive images. The motion vector can for example be determined with the aid of an optical flow method. If a motion vector has been determined for each of the image elements, it can be decided whether or not the respective image elements are associated with the trailer area. This allows reliable determination of the trailer area.
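The per-element motion vectors described above can be obtained by comparing corresponding image elements of consecutive images. The following sketch uses exhaustive block matching as a simple stand-in for the optical-flow method named in the text; the frame contents, block size and search range are illustrative assumptions:

```python
# Sketch of per-element motion estimation by exhaustive block matching
# between two temporally consecutive frames.

def sad(a, b):
    """Sum of absolute differences between two equally sized blocks."""
    return sum(abs(x - y) for row_a, row_b in zip(a, b)
               for x, y in zip(row_a, row_b))

def block(frame, top, left, size):
    return [row[left:left + size] for row in frame[top:top + size]]

def motion_vector(prev, curr, top, left, size=2, search=2):
    """Find the (dy, dx) shift that best matches an image element of
    `prev` inside `curr`, i.e. the motion vector of that element."""
    ref = block(prev, top, left, size)
    best, best_v = None, (0, 0)
    h, w = len(curr), len(curr[0])
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            t, l = top + dy, left + dx
            if 0 <= t and t + size <= h and 0 <= l and l + size <= w:
                cost = sad(ref, block(curr, t, l, size))
                if best is None or cost < best:
                    best, best_v = cost, (dy, dx)
    return best_v

# Two tiny frames: a bright 2x2 patch moves one pixel to the right.
prev = [[0] * 6 for _ in range(6)]
curr = [[0] * 6 for _ in range(6)]
for r in (2, 3):
    prev[r][1] = prev[r][2] = 9
    curr[r][2] = curr[r][3] = 9

assert motion_vector(prev, curr, 2, 1) == (0, 1)
```

A production system would use a proper optical-flow implementation; the point of the sketch is only the mapping from one image element to one motion vector.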
Furthermore, it is advantageous if those image elements are associated with the trailer area whose motion vectors describe a lower motion speed and/or another direction of motion compared to the remaining motion vectors. Therein, the realization is taken into account that the motion vectors of those image elements which describe the trailer describe no or only a very low motion. This is because the trailer moves together with the motor vehicle. Thus, the image elements which describe the trailer can be differentiated in a simple manner from other image elements, which for example describe objects in the environmental region. Thus, image elements describing the roadway can for example be differentiated from image elements describing the trailer. It can also be examined whether the motion vectors of the pixels have another direction of motion compared to the remaining motion vectors. Such outliers among the motion vectors can then also be associated with the trailer area.
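The association by motion speed can be sketched as a simple threshold against the typical motion of all elements. The median reference and the threshold ratio are illustrative assumptions, not values from the patent:

```python
# Sketch of associating image elements with the trailer area based on
# their motion vectors: elements whose vectors are much slower than
# the typical (median) motion are treated as trailer elements.
import math

def trailer_elements(vectors, ratio=0.25):
    """vectors: {element_id: (dy, dx)}. Return the ids whose motion
    speed is below `ratio` times the median speed of all elements."""
    speeds = {k: math.hypot(dy, dx) for k, (dy, dx) in vectors.items()}
    median = sorted(speeds.values())[len(speeds) // 2]
    return {k for k, s in speeds.items() if s < ratio * median}

vectors = {
    "road_1": (0, 4), "road_2": (0, 4), "road_3": (1, 4),  # roadway
    "trailer_1": (0, 0), "trailer_2": (0, 0),              # trailer
}
assert trailer_elements(vectors) == {"trailer_1", "trailer_2"}
```

The direction-of-motion criterion mentioned in the text could be added analogously, e.g. by flagging vectors whose angle deviates strongly from the dominant direction.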
Therein, it is in particular provided that temporally consecutive images are continuously received and the trailer area is continuously updated based on the images. As already explained, it is in particular provided that the images are captured during the motion of the motor vehicle. Based on the temporally consecutive images, the motion vector can then be determined for each of the image elements. This allows continuously updating or refining the trailer area. The trailer area can be precisely determined in the images. In this manner, it also becomes possible that the trailer area can be determined for different trailers or trailer types, which can be attached to the motor vehicle.
Furthermore, it is advantageous if a stay area, which describes the trailer with a predetermined likelihood, is determined in the at least one image, and the trailer area is determined based on the stay area. The image received by the computing device describes the environmental region of the motor vehicle. Therein, the mounting position of the camera on the motor vehicle as well as the capturing range of the camera are known. Thus, it can be determined in which area of the image the trailer can be present. Herein, it can for example be taken into account that the trailer is connected to the tow coupling, which is located centrally in the rear area of the motor vehicle. Thus, the stay area, in which the trailer is located with high likelihood, can be taken into account in determining the trailer area. This allows reliable determination of the trailer area.
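Because the tow coupling sits centrally behind the vehicle, the stay area can be sketched as a fixed central band of the image derived from the camera geometry. All geometry values here are illustrative assumptions, not calibration data from the patent:

```python
# Sketch of a stay-area prior: given the known mounting of the rear
# camera and the centrally mounted tow coupling, the trailer can only
# appear in a central band of the image.

def stay_area(img_w, img_h, band_frac=0.5):
    """Return (left, top, right, bottom) of the central image band in
    which the trailer is expected with high likelihood. The trailer
    drawbar enters the image at the bottom centre, so the band spans
    from roughly one third of the image height down to the bottom."""
    half = int(img_w * band_frac / 2)
    cx = img_w // 2
    return (cx - half, img_h // 3, cx + half, img_h)

left, top, right, bottom = stay_area(1280, 800)
assert left == 320 and right == 960 and bottom == 800
```

Image elements classified as trailer by their motion vectors but lying outside this band can then be rejected as outliers, which is one way to realize the "correct or refine" step mentioned in the figure description.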
In a further embodiment, a trailer angle between the motor vehicle and the trailer is continuously determined and the trailer area is determined depending on the trailer angle. The trailer angle in particular describes the angle between the longitudinal axis of the motor vehicle and the longitudinal axis of the trailer. For example, the trailer angle can be determined by a corresponding sensor, which is disposed in the tow coupling of the motor vehicle or the drawbar of the trailer. Furthermore, the trailer angle can be determined by a corresponding environmental sensor of the motor vehicle. Herein, it can further be provided that the motion of the motor vehicle and in particular a steering angle or yaw rate of the motor vehicle are continuously acquired. In this manner, the trailer angle can be estimated. If the trailer angle is known, it can be determined where the trailer is located relative to the motor vehicle. Thereby, it can also be determined where the picture of the trailer is located within the image. This can be taken into account in determining the trailer area. This allows precise determination of the trailer area.
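Estimating the trailer angle from the vehicle motion, as an alternative to a dedicated sensor, can be sketched with a common simplified kinematic single-track model (hitch assumed at the rear axle). The model choice, wheelbase, drawbar length and all numbers are illustrative assumptions, not details from the patent:

```python
# Sketch of propagating the trailer (articulation) angle from the
# vehicle speed and steering angle with a simplified kinematic model:
#   alpha' = -(v/d) * sin(alpha) - (v/L) * tan(delta)
# where L is the wheelbase, d the drawbar length, delta the steering
# angle and v the vehicle speed (all values below are assumptions).
import math

def update_trailer_angle(alpha, v, delta, dt, L=2.8, d=3.0):
    """One explicit Euler step of the articulation-angle kinematics."""
    alpha_dot = -(v / d) * math.sin(alpha) - (v / L) * math.tan(delta)
    return alpha + alpha_dot * dt

# Driving straight ahead (delta = 0), an initial articulation angle
# decays towards zero, as the trailer lines up behind the vehicle.
alpha = math.radians(10.0)
for _ in range(200):
    alpha = update_trailer_angle(alpha, v=5.0, delta=0.0, dt=0.05)
assert abs(alpha) < math.radians(1.0)
```

In practice such a model-based estimate would be fused with sensor measurements; the sketch only illustrates why steering angle and speed suffice for a rough estimate.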
In an embodiment, for recognizing that the trailer is connected to the motor vehicle, a connection signal is received. The computing device can for example receive this connection signal via a data bus of the motor vehicle. This connection signal can be emitted if the trailer is electrically or mechanically connected to the motor vehicle. Thus, it can be recognized in a reliable manner that the trailer is coupled to the motor vehicle.
Furthermore, it is advantageous if the objects in the region of interest are recognized by means of an object recognition algorithm. It can also be provided that a machine vision method is used to recognize the objects in the region of interest. Thus, further traffic participants in the environment of the motor vehicle can for example be reliably recognized. In particular, it becomes possible to recognize pedestrians in the environmental region. In the same manner, static objects and obstacles, respectively, can also be recognized. The roadway or the roadway surface can also be captured in the region of interest. This can be used for calibrating the settings of the camera. Because the region of interest is determined outside of the trailer area, reliable recognition of the objects is made possible. In addition, the memory demand and the computing effort can be reduced.

A computing device according to the invention for a camera system is adapted for performing a method according to the invention and the advantageous configurations thereof. The computing device can for example be a programmable computer such as a digital signal processor, a microcontroller or the like. Accordingly, a computer program can be provided, which is for example stored on a storage medium and which is programmed to execute the method described here when it is executed on the computing device.
A camera system according to the invention for a motor vehicle includes a computing device according to the invention as well as at least one camera. Images of the environmental region can be provided by the camera and transferred to the computing device. Preferably, the camera system includes a plurality of cameras. These cameras can then be disposed distributed at the motor vehicle.
A motor vehicle according to the invention includes a camera system according to the invention. The motor vehicle can be formed as a passenger car or as a utility vehicle.
The preferred embodiments presented with respect to the method according to the invention and the advantages thereof correspondingly apply to the computing device according to the invention, to the camera system according to the invention as well as to the motor vehicle according to the invention.
Further features of the invention are apparent from the claims, the figures and the description of figures. The features and feature combinations mentioned above in the description, as well as the features and feature combinations mentioned below in the description of figures and/or shown in the figures alone, are usable not only in the respectively specified combination, but also in other combinations or alone, without departing from the scope of the invention. Thus, implementations are also to be considered as encompassed and disclosed by the invention which are not explicitly shown in the figures and explained, but which arise from the explained implementations and can be generated by separate feature combinations therefrom. Implementations and feature combinations are also to be considered as disclosed which do not have all of the features of an originally formulated independent claim. Moreover, implementations and feature combinations are to be considered as disclosed, in particular by the implementations set out above, which extend beyond or deviate from the feature combinations set out in the back-references of the claims. Now, the invention is explained in more detail based on preferred embodiments as well as with reference to the attached drawings.
There show:
Fig. 1 a motor vehicle according to an embodiment of the invention, which
comprises a camera system, wherein the motor vehicle is connected to a trailer;
Fig. 2 an image, which is provided by a camera of the camera system, wherein the image is divided into a plurality of image elements, wherein a motion vector is associated with each of the image elements;
Fig. 3 a further image, which is provided by a camera, wherein a region of interest is defined in the image; and
Fig. 4 the image of Fig. 3, in which a region of interest is defined according to an embodiment of the invention.
In the figures, identical or functionally identical elements are provided with the same reference characters.
Fig. 1 shows a motor vehicle 1 according to an embodiment of the present invention in a plan view. In the present case, the motor vehicle 1 is formed as a passenger car. The motor vehicle 1 is connected to a trailer 9. The motor vehicle 1 together with the trailer 9 constitutes a vehicle/trailer combination.
The motor vehicle 1 includes a camera system 2. The camera system 2 in turn includes a computing device 3, which can for example be constituted by an electronic controller of the motor vehicle 1. Moreover, the camera system 2 includes at least one camera 4. In the present embodiment, the camera system 2 includes four cameras 4, which are disposed distributed at the motor vehicle 1. Presently, one of the cameras 4 is disposed in a rear area 5, one of the cameras 4 is disposed in a front area 7 of the motor vehicle 1 and the remaining two cameras 4 are disposed in a respective lateral area 6, in particular in an area of the wing mirrors. Presently, the number and arrangement of the cameras 4 of the camera system 2 are to be understood as purely exemplary. An environmental region 8 of the motor vehicle 1 can be captured by the cameras 4. Preferably, the four cameras 4 are identical in construction. In particular, images 10 or an image sequence describing the environmental region 8 can be provided by the cameras 4. These images 10 can be transferred from the cameras 4 to the computing device 3. By means of the computing device 3, a display device of the motor vehicle 1, not illustrated here, can be controlled such that the images 10 of the cameras 4 can be displayed to the driver. Based on the images 10, objects can be recognized by the computing device 3. Thus, the camera system 2 serves for assisting the driver of the motor vehicle 1 in driving the motor vehicle 1. The camera system 2 can for example be a so-called electronic rearview mirror or a parking assistance system or another system.
Fig. 2 shows an image 10, which is provided by one of the cameras 4 of the camera system 2. In particular, the image 10 is provided by the camera 4 which is located in the rear area 5 of the motor vehicle 1. Therein, the image 10 shows at least a part of the trailer 9. Now, a trailer area 11, which is associated with the trailer 9, is to be recognized within the image 10. For this purpose, a plurality of image elements 12 is defined in the image 10. Presently, the image elements 12 are rectangular or square. For each of the image elements 12, a motion vector 13 is determined. Hereto, temporally consecutive images 10 are received from the camera 4 by the computing device 3.
Therein, the image elements 12 of the temporally consecutive images 10 are compared to each other. Herefrom, the respective motion vector 13 can then be determined.
Subsequently thereto, the respective motion vectors 13 of the image elements 12 can be compared to each other. Presently, image elements 12 are apparent which can be associated with a first group 14. These image elements 12 of the first group 14 presently describe a roadway 18, on which the motor vehicle 1 and the trailer 9 move. The motion vectors 13 of these image elements 12 substantially describe the same motion speed and the same direction of motion. Furthermore, image elements 12 are present which can be associated with a second group 15. The motion vectors 13 of these image elements 12 differ from the motion vectors 13 of the image elements 12 of the first group 14 in particular in their direction of motion. In addition, image elements 12 can be associated with a third group 16. The motion vectors 13 of these image elements 12 are characterized in that they describe a lower motion speed compared to the image elements 12 of the first group 14 and of the second group 15. Based on the association of the image elements 12 with the groups 14, 15, 16, the trailer area 11, which describes the trailer 9, can now be determined. Herein, it can first be provided that the image elements 12 of the first group 14 are associated with the roadway 18. The image elements 12 of the second group 15 and of the third group 16 can first be associated with the trailer 9. Therein, it is in particular provided that temporally consecutive images 10 are continuously received and thus the trailer area 11 is continuously updated.
Moreover, it is preferably provided that a stay area 17 is determined, which shows the trailer 9 in the image 10 with a high likelihood. In other words, this stay area 17 describes the area in which the trailer 9 is imaged in the image 10 with high likelihood. Herein, it can for example be taken into account that the trailer 9 is connected to a tow coupling, which is located centrally in the rear area 5 of the motor vehicle 1. Moreover, it is preferably provided that a trailer angle α is continuously determined. The trailer angle α describes the angle between a longitudinal axis of the motor vehicle 1 and a longitudinal axis of the trailer 9. Hereto, a corresponding sensor can be used. Moreover, the steering angle of the motor vehicle 1 can also be continuously determined and the trailer angle α can be determined herefrom. Based on the trailer angle α, it can be determined where the trailer 9 is located relative to the motor vehicle 1. The information about the stay area 17 and/or the trailer angle α can be used to correct or refine the association of the image elements 12 with the trailer area 11. For example, it can thus be determined that only the image elements 12 of the third group 16 are to be associated with the trailer area 11.
Fig. 3 shows an image 10, which is provided by the camera 4, according to a further embodiment. Here, a region of interest 19 is determined within the image 10. Objects can then for example be recognized within this region of interest 19 and/or machine vision methods can be applied. Therein, the region of interest 19 is determined such that it encompasses the picture of the roadway 18. In addition, the region of interest 19 excludes areas of the image 10 which describe the motor vehicle 1. However, presently, the region of interest 19 also encompasses the trailer area 11.
In comparison hereto, Fig. 4 shows the image 10 according to Fig. 3 in a further embodiment. Herein, the trailer area 11 was taken into account in determining the region of interest 19. The region of interest 19 is determined such that it is different from the trailer area 11, i.e. it excludes the trailer area 11. Based on this region of interest 19, object recognition algorithms or machine vision methods can then be applied. Thus, the trailer 9 can for example be prevented from being erroneously recognized as an object. The information with respect to the determined region of interest 19 can also be conveyed to other algorithms or functional facilities which use the images 10 of the camera 4.

Claims
1. Method for capturing an environmental region (8) of a motor vehicle (1), in which at least one image (10) captured by a camera (4) is received, wherein the at least one image (10) describes the environmental region (8), a region of interest (19) is determined in the image (10) and objects are captured within the region of interest (19),
characterized in that
if it is recognized that a trailer (9) is connected to the motor vehicle (1), a trailer area (11) is determined in the at least one image (10), which describes the trailer (9), and the region of interest (19) is determined based on the trailer area (11).
2. Method according to claim 1 ,
characterized in that
the region of interest (19) is determined such that the region of interest (19) is different from the trailer area (11).
3. Method according to claim 1 or 2,
characterized in that
at least two temporally consecutive images (10) are received, a plurality of image elements (12) is respectively associated with the at least two images (10), a motion vector (13) is determined for each of the image elements (12) and the image elements (12) are associated with the trailer area (11) depending on their motion vector (13).
4. Method according to claim 3,
characterized in that
those image elements (12) are associated with the trailer area (11), the motion vectors (13) of which describe a lower motion speed and/or another direction of motion compared to the remaining motion vectors (13).
5. Method according to any one of the preceding claims,
characterized in that
temporally consecutive images (10) are continuously received and the trailer area (11) is continuously updated based on the images (10).
6. Method according to any one of the preceding claims,
characterized in that
a stay area (17) is determined in the at least one image (10), which describes the trailer (9) with a predetermined likelihood, and the trailer area (11) is determined based on the stay area (17).
7. Method according to any one of the preceding claims,
characterized in that
a trailer angle (α) between the motor vehicle (1) and the trailer (9) is continuously determined and the trailer area (11) is determined depending on the trailer angle (α).
8. Method according to any one of the preceding claims,
characterized in that
for recognizing that the trailer (9) is connected to the motor vehicle (1), a connection signal is received.
9. Method according to any one of the preceding claims,
characterized in that
objects are recognized within the region of interest (19) by means of an object recognition algorithm and/or a machine vision method.
10. Computing device (3) for a camera system (2), which is adapted for performing a method according to any one of the preceding claims.
11. Camera system (2) for a motor vehicle (1) including a computing device (3) according to claim 10 and including at least one camera (4).
12. Motor vehicle (1) including a camera system (2) according to claim 11.
PCT/EP2018/050886 2017-01-16 2018-01-15 Method for capturing an environmental region of a motor vehicle with adaptation of a region of interest depending on a trailer, computing device, camera system as well as motor vehicle WO2018130689A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
DE102017100669.9A DE102017100669A1 (en) 2017-01-16 2017-01-16 Method for detecting an environmental region of a motor vehicle with adaptation of a region of interest as a function of a trailer, computing device, camera system and motor vehicle
DE102017100669.9 2017-01-16

Publications (1)

Publication Number Publication Date
WO2018130689A1 true WO2018130689A1 (en) 2018-07-19

Family

Family ID: 61003011

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2018/050886 WO2018130689A1 (en) 2017-01-16 2018-01-15 Method for capturing an environmental region of a motor vehicle with adaptation of a region of interest depending on a trailer, computing device, camera system as well as motor vehicle

Country Status (2)

Country Link
DE (1) DE102017100669A1 (en)
WO (1) WO2018130689A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020043423A1 (en) * 2018-08-31 2020-03-05 Connaught Electronics Ltd. Method for determining an angle for ascertaining a pose of a trailer

Families Citing this family (1)

Publication number Priority date Publication date Assignee Title
DE102019127478A1 (en) * 2019-10-11 2021-04-15 Connaught Electronics Ltd. Determine a trailer orientation

Citations (5)

Publication number Priority date Publication date Assignee Title
DE102005051804A1 (en) * 2005-10-27 2007-05-03 Daimlerchrysler Ag Driver assistance system for reversing or articulated vehicle has on board display indicating path lines
US20090271078A1 (en) * 2008-04-29 2009-10-29 Mike Dickinson System and method for identifying a trailer being towed by a vehicle
US20110205088A1 (en) * 2010-02-23 2011-08-25 Gm Global Technology Operations, Inc. Park assist system and method
US20140160276A1 (en) * 2012-09-26 2014-06-12 Magna Electronics Inc. Vehicle vision system with trailer angle detection
WO2015193060A1 (en) 2014-06-21 2015-12-23 Valeo Schalter Und Sensoren Gmbh Method for suppressing echo signals of a trailer device on a motor vehicle, driver assistance unit, and motor vehicle

Family Cites Families (4)

Publication number Priority date Publication date Assignee Title
DE102011113197B4 (en) 2011-09-10 2021-06-10 Volkswagen Aktiengesellschaft Method and device for determining an angle between a towing vehicle and a trailer coupled to it, and the vehicle
DE102012019234A1 (en) 2012-09-29 2014-04-03 Volkswagen Aktiengesellschaft Method for detecting supporting unit mounted on trailer coupling, involves creating image of rear surroundings of motor vehicle, and selecting area of trailer coupling in camera image, where area of camera image is transformed
DE102014005681B4 (en) 2014-04-16 2019-02-21 Audi Ag Method for driver assistance when driving a motor vehicle with a trailer and associated motor vehicle
DE102015201586A1 (en) 2015-01-29 2016-08-04 Volkswagen Aktiengesellschaft Method and device for recognizing a trailer


Legal Events

121 (EP): the EPO has been informed by WIPO that EP was designated in this application (Ref document number: 18700665; Country of ref document: EP; Kind code of ref document: A1)

NENP: Non-entry into the national phase (Ref country code: DE)

122 (EP): PCT application non-entry in European phase (Ref document number: 18700665; Country of ref document: EP; Kind code of ref document: A1)