WO2019072911A1 - Method for determining a region of interest in an image captured by a camera of a motor vehicle, control system, camera system as well as motor vehicle - Google Patents

Method for determining a region of interest in an image captured by a camera of a motor vehicle, control system, camera system as well as motor vehicle

Info

Publication number
WO2019072911A1
WO2019072911A1 (PCT/EP2018/077592; EP2018077592W)
Authority
WO
WIPO (PCT)
Prior art keywords
determined
points
region
image
interest
Prior art date
Application number
PCT/EP2018/077592
Other languages
French (fr)
Inventor
David Hurych
Pavel Krizek
Jiri Kula
Adam Ivansky
Michal Uricar
Original Assignee
Valeo Schalter Und Sensoren Gmbh
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Valeo Schalter Und Sensoren Gmbh
Publication of WO2019072911A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/50 - Context or environment of the image
    • G06V20/56 - Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/10 - Image acquisition
    • G06V10/12 - Details of acquisition arrangements; Constructional details thereof
    • G06V10/14 - Optical characteristics of the device performing the acquisition or on the illumination arrangements
    • G06V10/147 - Details of sensors, e.g. sensor lenses
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/20 - Image preprocessing
    • G06V10/25 - Determination of region of interest [ROI] or a volume of interest [VOI]

Definitions

  • the present invention relates to a method for determining a region of interest in an image captured by a camera of a motor vehicle, in which the image is received from the camera, in the image, a boundary of a region which images an environment of the motor vehicle is determined on the basis of image points of the image and the region of interest is determined on the basis of the boundary.
  • the present invention relates to a control device for a camera system.
  • the present invention relates to a camera system for a motor vehicle.
  • the present invention relates to a motor vehicle.
  • Such camera systems comprise at least one camera by means of which an image, which describes an environment of the motor vehicle, can be recorded.
  • objects can be detected in the environment of the motor vehicle on the basis of the recorded images.
  • the images can be subjected to a corresponding object detection algorithm by means of a control device or an image processing device of the camera system.
  • Such camera systems can be used, for example, for pedestrian detection, three-dimensional object detection, for recognizing road markings, for recognizing parking area markings or the like.
  • the region of interest describes the area of the image which describes the environment of the motor vehicle.
  • the region of interest differs from the region of the image which represents parts of the motor vehicle, for example the bumper. This region of interest is different for different vehicle models and for different camera models. According to the prior art, the region of interest is predetermined manually for the specific vehicle model and the camera before the camera system is used.
  • the image is preferably received from the camera.
  • a boundary of a region which, in particular, images an environment of the motor vehicle, is preferably detected by means of image points of the image.
  • the region of interest is preferably determined on the basis of the boundary.
  • a plurality of reference points which, in particular, describe a reference boundary of a predetermined, mean region of interest, is predetermined in the image.
  • a plurality of displacement parameters are preferably determined on the basis of the image points, which in particular each describe a displacement of the reference points to the boundary.
  • the displacement parameters are determined iteratively taking into account the previously determined displacement parameters.
  • a displacement vector is preferably determined on the basis of the displacement parameters.
  • the region of interest is determined, in particular, by a displacement of the respective reference points by the parameter vector.
  • a method serves for determining a region of interest in an image which is captured by a camera of a motor vehicle.
  • the image is received from the camera.
  • By means of image points of the image, a boundary of a region which images an environment of the motor vehicle is detected in the image, and the region of interest is determined on the basis of the boundary.
  • a plurality of reference points which describe a reference boundary of a predetermined, mean region of interest is predetermined in the image.
  • a plurality of displacement parameters are determined from the image points which respectively describe a displacement of the reference points to the boundary, the displacement parameters are determined iteratively taking into account the previously determined displacement parameters. Based on the displacement parameters, a parameter vector is determined and the region of interest is determined by means of a displacement of the respective reference points by the parameter vector.
  • the method is intended to determine the region of interest in the image recorded with a camera of the motor vehicle or of the camera system.
  • the region of interest describes the region in the image which images the environment of the motor vehicle.
  • the region of interest differs from a region of the image which represents parts of the motor vehicle, for example a bumper or an outer covering part.
  • the method can, for example, be carried out with a control device or an electronic control device of the camera system.
  • the image captured by the camera is received with the control device.
  • the image points or at least some of the image points of the image are examined. It is thereby taken into account that the image points which describe the region of interest or the environment of the motor vehicle differ from the image points which describe parts of the motor vehicle with regard to color, texture and/or brightness.
  • the boundary of the area which describes the environment of the motor vehicle can then be determined. Within this boundary, the region of interest can then be defined.
  • the plurality of reference points is predetermined.
  • These reference points describe a previously determined, mean region of interest.
  • This mean region of interest was previously determined in a training phase on the basis of training images.
  • the mean region of interest has been determined based on several different camera models and/or vehicle models.
  • the individual reference points can describe a mean polygon. This means that the mean polygon is obtained when the individual reference points are connected to one another. In particular, a closed curve is obtained by connecting the individual reference points.
  • These reference points are now used as the initial position to determine the actual region of interest in the image.
  • the plurality of displacement parameters is determined from the image points. In this case, it is preferably provided that a displacement parameter is determined for each of the reference points.
  • the respective displacement parameters describe the displacement of the reference points to the boundary. The individual reference points should therefore be moved so that they are on the boundary of the actual region of interest.
  • the respective displacement parameters are determined iteratively.
  • the first displacement parameter is determined based on the image points in the image.
  • the first displacement parameter describes how a first reference point is to be moved in the direction of the boundary.
  • the previously determined first displacement parameter is taken into account.
  • the first and second displacement parameters are taken into account, and so on.
  • a parameter vector can then be determined based on the individual displacement parameters.
  • this parameter vector describes the combination of the individual displacement parameters.
  • the parameter vector can be determined by the addition or multiplication of the individual displacement parameters.
  • the parameter vector describes how the individual reference points are to be moved so that they lie on the boundary of the region of interest.
  • a regression-based method for determining the boundary can be provided based on the mean region of interest or the reference points previously determined in the training phase.
  • the actual region of interest can then be determined in the image from this mean region of interest or the mean polygon.
  • the method can be used for different camera models and/or vehicle models. Overall, the region of interest can thus be determined in a simple and reliable manner.
  • a displacement vector is determined based on the parameter vector, which describes the displacement of the respective reference points in a coordinate system of the image, and for determining the region of interest, the respective reference points are displaced by the displacement vector.
  • the displacement vector is determined, which describes the displacement of the individual reference points in the two-dimensional coordinate system of the image.
  • the displacement vector for each of the reference points can describe the displacement in a first direction (x-direction) and a second direction (y-direction) perpendicular thereto. This means that the displacement vector has twice as many entries as there are reference points.
  • a corresponding matrix can be determined, which maps the parameter vector to the displacement vector.
  • the displacement of the individual reference points can then be carried out by means of the displacement vector. This allows a reliable determination of the boundary of the region of interest in the image.
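The mapping from the low-dimensional parameter vector to the displacement of the reference points can be sketched as follows; the sizes, the matrix U' and all values are invented stand-ins, since the real matrix results from the training phase:

```python
import numpy as np

# Hypothetical sizes: N reference points, b low-dimensional parameters.
N, b = 4, 2

# Mean shape: reference points of the mean region of interest, one (x, y) row per point.
Y_m = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])

# U' maps the b parameters to a 2N-dimensional displacement vector
# (in a real system U' comes from the training phase; here it is made up).
U_prime = np.zeros((2 * N, b))
U_prime[0::2, 0] = 1.0   # first parameter shifts all points in x
U_prime[1::2, 1] = 1.0   # second parameter shifts all points in y

p = np.array([0.5, -0.25])          # parameter vector
t = U_prime @ p                     # displacement vector, length 2N
Y = Y_m + t.reshape(N, 2)           # shifted reference points
```

Note that the displacement vector t indeed has twice as many entries as there are reference points, one (x, y) pair per point.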
  • a function is determined which for a group of the image points determines a feature which describes the boundary, and the respective displacement parameters are determined by means of the function.
  • the method can take into account a respective group of image points or a set of image points.
  • the number of image points in the group of image points is arbitrary.
  • the group of image points can be freely selected in the image. However, it is preferably provided that the group of image points is arranged in the region of the reference points.
  • the function can be used to determine the respective displacement parameters. For this purpose, for example, the intensity values of the individual image points can be checked. However, known methods for edge detection or the like can also be used.
  • the boundary of the region of interest can be determined by the color of the image points, the intensity of the image points, the texture of the image points or the like.
  • the function can be used to determine the direction of displacement for the reference point within the group of image points. In this case, it is in particular provided that the displacement for the respective reference point in direction of a normal is determined. For this purpose, the adjacent reference points can also be considered. This function is now used to determine each of the displacement parameters. Thus, the individual displacement parameters can be determined in a reliable manner.
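One possible form of such a function, sketched here with invented details (the patent leaves the concrete feature computation open), samples image intensities along the normal estimated from the adjacent reference points:

```python
import numpy as np

def normal_features(image, points, samples=5, step=1.0):
    """For each polygon point, sample image intensities along the normal
    estimated from the two neighbouring points. This is only a sketch of
    one conceivable feature function; the patent does not fix its form."""
    n = len(points)
    feats = np.zeros((n, samples))
    for i, (x, y) in enumerate(points):
        prev_p = points[(i - 1) % n]
        next_p = points[(i + 1) % n]
        tx, ty = next_p[0] - prev_p[0], next_p[1] - prev_p[1]
        norm = np.hypot(tx, ty) or 1.0
        nx, ny = -ty / norm, tx / norm          # unit normal to the local tangent
        for k in range(samples):
            sx = int(round(x + (k - samples // 2) * step * nx))
            sy = int(round(y + (k - samples // 2) * step * ny))
            sx = min(max(sx, 0), image.shape[1] - 1)
            sy = min(max(sy, 0), image.shape[0] - 1)
            feats[i, k] = image[sy, sx]
    return feats

# Demo: a horizontal intensity ramp and a square of points (hypothetical values).
img = np.tile(np.arange(10.0), (10, 1))
square = [(5.0, 5.0), (7.0, 5.0), (7.0, 7.0), (5.0, 7.0)]
feats = normal_features(img, square)
```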
  • a plurality of base shapes are determined for different regions of interest and the mean region of interest is predetermined on the basis of the plurality of base shapes.
  • the present method for determining the actual region of interest is performed online, taking into account information from the previously performed training phase.
  • Different base shapes are determined in this training phase. These base shapes describe the shapes of different regions of interest.
  • the base shapes are determined by different training images from different types of motor vehicles and/or cameras.
  • a plurality of training images can be used, which differ from each other with respect to the vehicle type, the installation position of the camera and/or the design of the camera. Based on these different base shapes, the mean region of interest or the mean polygon can now be determined.
  • the respective base shapes can comprise base points which together describe a polygon. From the respective corresponding base points of the plurality of base shapes, an average value can now be determined in each case in order to determine the reference points.
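The point-wise averaging of corresponding base points can be sketched as follows, with made-up base shapes; corresponding rows belong together because every shape was sampled with the same auxiliary-line angles:

```python
import numpy as np

# Three hypothetical base shapes with corresponding base points (N = 4 each).
shape_a = np.array([[0., 0.], [2., 0.], [2., 2.], [0., 2.]])
shape_b = np.array([[0., 1.], [2., 1.], [2., 3.], [0., 3.]])
shape_c = np.array([[1., 0.], [3., 0.], [3., 2.], [1., 2.]])

# The reference points of the mean region of interest are the point-wise average.
reference_points = np.mean([shape_a, shape_b, shape_c], axis=0)
```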
  • the reference points describe a mean value of different regions of interest, which were determined in the training phase.
  • the reference points and the mean region of interest represent a good starting position for determining the actual region of interest in the image.
  • auxiliary points at a boundary of the region of interest are determined for determining the respective base shapes and a curve is determined by means of the auxiliary points.
  • a training image is used to determine the respective base shapes.
  • This training image describes an image taken with a camera mounted on a motor vehicle.
  • the auxiliary points can be entered manually at the boundary of the region of interest in the training image.
  • the individual auxiliary points are connected to form the curve.
  • a plurality of auxiliary lines is determined in the training image, wherein the auxiliary lines extend from a center point of the region of interest to an edge of the training image, and wherein adjacent auxiliary lines each enclose the same angle and on the basis of the respective intersections between the auxiliary lines and the curve, base points are determined for the base shape.
  • the reference points are determined on the basis of base points which describe the respective base shapes.
  • the base shape is divided evenly.
  • the plurality of auxiliary lines is determined.
  • the auxiliary lines each extend from the center of the region of interest to the edge of the training image. In this case, two adjacent auxiliary lines each enclose the same angle to each other.
  • the base points can then be determined from the intersections of the respective auxiliary lines and the curve.
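Determining base points as intersections of equally spaced auxiliary lines with the boundary curve can be sketched as follows; the ray-casting details and the square test curve are illustrative choices, not the patent's exact procedure:

```python
import numpy as np

def ray_segment_hit(c, d, a, b):
    """Distance along the ray c + s*d (s >= 0) to segment a-b, or None."""
    m = np.array([[d[0], a[0] - b[0]], [d[1], a[1] - b[1]]])
    rhs = np.array([a[0] - c[0], a[1] - c[1]])
    if abs(np.linalg.det(m)) < 1e-12:
        return None                      # ray parallel to the segment
    s, u = np.linalg.solve(m, rhs)
    return s if s >= 0 and 0 <= u <= 1 else None

def base_points(curve, center, n_lines):
    """Cast n_lines rays from the center at equal angular spacing and
    return their intersections with the closed boundary polyline."""
    pts = []
    for k in range(n_lines):
        ang = 2 * np.pi * k / n_lines
        d = (np.cos(ang), np.sin(ang))
        best = None
        for i in range(len(curve)):
            a, b = curve[i], curve[(i + 1) % len(curve)]
            s = ray_segment_hit(center, d, a, b)
            if s is not None and (best is None or s < best):
                best = s
        pts.append((center[0] + best * d[0], center[1] + best * d[1]))
    return np.array(pts)

# Demo: a square boundary sampled with four rays from the origin.
square_curve = [(-1.0, -1.0), (1.0, -1.0), (1.0, 1.0), (-1.0, 1.0)]
bp = base_points(square_curve, (0.0, 0.0), 4)
```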
  • a plurality of intermediate points are determined on the basis of the base points of the respective base shapes and the reference points, and a reference parameter vector is determined on the basis of the differences between the respective intermediate points and the reference points.
  • several intermediate points are also determined. These intermediate points are determined between the reference points of the mean region of interest and the base points of one of the base shapes. These intermediate points describe the transition or the displacement from the respective reference points to the base points of the base shapes.
  • the respective differences of the intermediate points to the corresponding reference points can be used to describe the displacement of the reference points.
  • In the training phase, it can be determined how the displacement of the individual reference points takes place. This information can then be used in the online phase of the method.
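A simple way to picture the intermediate points and the resulting displacements is linear interpolation between the mean shape and a base shape; the interpolation rule and the values are assumptions, since the patent only states that the intermediate points describe the transition:

```python
import numpy as np

# Reference points of the mean shape and base points of one training shape
# (hypothetical values).
mean_shape = np.array([[0., 0.], [1., 0.], [1., 1.], [0., 1.]])
base_shape = np.array([[0., 0.], [2., 0.], [2., 2.], [0., 2.]])

# Intermediate points describe the transition from the mean shape to the
# base shape, here taken at a few interpolation fractions.
intermediates = [(1 - a) * mean_shape + a * base_shape for a in (0.25, 0.5, 0.75)]

# The differences to the reference points describe the displacements used
# as training targets.
displacements = [p - mean_shape for p in intermediates]
```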
  • the reference parameter vector is determined from the respective differences by means of a principal component analysis.
  • the mean region of interest can have, for example, 100 reference points. If now a displacement vector is determined for these reference points, the latter has a length of 200, since this describes the displacement of each reference point in a first direction (x-direction) and in a second direction (y-direction). This would mean that all the 200 values have to be estimated correctly using the image points in order to reliably determine the actual region of interest. This would entail a great calculation effort. In addition, a single error in determining one of the parameters could falsify the result.
  • the known method of the principal component analysis is used to reduce the number of parameters.
  • the number of parameters can be reduced to six parameters. These parameters can then be described in the reference parameter vector.
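The reduction of the 2N displacement values to a few parameters by a principal component analysis can be sketched with a singular value decomposition; the synthetic data, the sizes and the choice b = 2 are stand-ins:

```python
import numpy as np

rng = np.random.default_rng(0)

# n training displacement vectors of length 2N (here 2N = 8), stacked as rows;
# the data are generated from b latent factors plus a little noise.
n, dim, b = 50, 8, 2
latent = rng.normal(size=(n, b))
mixing = rng.normal(size=(b, dim))
T = latent @ mixing + rng.normal(scale=1e-3, size=(n, dim))

t_m = T.mean(axis=0)                    # mean displacement vector
sigma = T.std(axis=0)                   # vector of displacement variances
T_norm = (T - t_m) / sigma              # normalized training matrix

# SVD; only the first b right-singular vectors carry significant variance.
_, S, Vt = np.linalg.svd(T_norm, full_matrices=False)
U_prime = Vt[:b].T                      # maps b parameters to 2N displacements

# Project a displacement vector to its b-dimensional parameter vector and back.
p = T_norm[0] @ U_prime
t_rec = t_m + sigma * (U_prime @ p)
```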
  • the displacement parameters are determined by means of the reference parameter vector.
  • the reference parameter vector can be determined from the individual reference points and the intermediate points. For this purpose, different training matrices, for example, can be used. These training matrices can be determined iteratively. Based on the training matrices, the reference parameter vector can then be determined. Based on the reference parameter vector, regression matrices can then be determined. These regression matrices can then be taken into account when determining the respective displacement parameters in the online mode.
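The iterative dependence of the training matrices can be pictured as a cascade of least-squares fits, each stage trained on the residual left by the previous stages; keeping the feature matrix fixed over the stages is a simplification for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

def train_cascade(features, targets, stages=3):
    """Iterative training of the regression matrices: each stage is fit by
    least squares on the residual left over by the previous stages (a
    simplified sketch of a cascaded linear regressor)."""
    current = np.zeros_like(targets)            # current parameter estimates
    matrices = []
    for _ in range(stages):
        residual = targets - current            # what this stage must explain
        R, *_ = np.linalg.lstsq(features, residual, rcond=None)
        matrices.append(R)
        current = current + features @ R        # refined estimate
    return matrices, current

# Made-up linear training data with six displacement parameters per sample.
features = rng.normal(size=(40, 5))
true_R = rng.normal(size=(5, 6))
targets = features @ true_R
matrices, estimate = train_cascade(features, targets)
```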
  • a control device for a camera system of a motor vehicle is adapted for performing a method according to the invention and the advantageous embodiments thereof.
  • the control device may be an electronic control unit (ECU) of the motor vehicle.
  • the control device can also be provided by a computing device, a microprocessor, a digital signal processor or the like.
  • the control device can receive the image or images taken with the camera.
  • an object detection algorithm can be implemented by means of the control device in order to detect features which describe the boundary of the region of interest in the image points of the image.
  • the region of interest can then be determined on the basis of the predetermined reference points for the mean region of interest or the mean polygon.
  • a camera system for a motor vehicle according to the invention comprises a control device according to the invention.
  • the camera system includes at least one camera.
  • the camera system comprises a plurality of cameras, which can be arranged distributed, for example, on the motor vehicle.
  • the respective cameras are connected to the control device via a data line for data transmission or transmission of the images.
  • a motor vehicle according to the invention comprises a camera system according to the invention.
  • the motor vehicle is preferably designed as a passenger car. It may also be provided that the motor vehicle is designed as a commercial vehicle.
  • Fig. 1 a motor vehicle according to an embodiment of the invention which comprises a camera system
  • Fig. 2 a training image in which a plurality of auxiliary points are determined manually at a boundary of a region of interest
  • Fig. 3 the training image according to Fig. 2, in which a curve is determined on the basis of the plurality of auxiliary points;
  • Fig. 4 the training image according to Fig. 3, in which a plurality of base points are determined for a base shape by means of a plurality of auxiliary lines and the curve;
  • Fig. 5 respective base points for three different base shapes as well as reference points of a mean region of interest, which has been determined on the basis of the respective base points;
  • Fig. 6 respective intermediate points which have been determined between the first base shape and the mean shape;
  • Fig. 7 respective intermediate points which have been determined between the second base shape and the mean shape;
  • Fig. 8 respective intermediate points which have been determined between the third base shape and the mean shape;
  • Fig. 9 an image captured by a camera of the motor vehicle in which the boundary of a region of interest is determined by means of the reference points of the mean region of interest.
  • Fig. 1 shows a motor vehicle 1 according to an embodiment of the present invention in a plan view.
  • the motor vehicle 1 is designed as a passenger car.
  • the motor vehicle 1 comprises a camera system 2.
  • the camera system 2 in turn comprises a control device 3, which can be formed, for example, by an electronic control device of the motor vehicle 1.
  • the camera system 2 comprises at least one camera 4.
  • the camera system 2 comprises four cameras 4 which are arranged distributed on the motor vehicle 1.
  • one of the cameras 4 is arranged in a rear region 5 of the motor vehicle 1
  • one of the cameras 4 is arranged in a front region 7 of the motor vehicle 1
  • the other two cameras 4 are arranged in a respective side region 6, in particular in a region of the side mirrors.
  • the number and arrangement of the cameras 4 of the camera system 2 is to be understood as purely exemplary.
  • An environment 8 or an environmental region of the motor vehicle 1 can be detected with the cameras 4.
  • the four cameras 4 are preferably of identical construction.
  • a sequence of images 9 can be provided with the cameras 4, which describe the environment 8.
  • These images 9 or image data can be transmitted from the cameras 4 to the control device 3.
  • a display device (not shown) of the motor vehicle 1 can be controlled so that the images 9 of the cameras 4 can be displayed to the driver.
  • the camera system 2 thus serves to support the driver of the motor vehicle 1 while driving the motor vehicle 1.
  • the camera system 2 can, for example, be a so-called electronic rear-view mirror or a parking assistance system or another system.
  • the camera system 2 can recognize objects in the environment 8 of the motor vehicle 1.
  • a region of interest 11 is to be determined in the images 9.
  • the region of interest 11 describes the region of the image 9 which images the environment 8 of the motor vehicle 1.
  • the region of interest 11 differs from a region 10 in the image 9, which images parts of the motor vehicle 1.
  • a plurality of reference points 15 are provided. These reference points 15 describe a mean region of interest, which was previously determined in a training phase.
  • the reference points 15 can be connected to form a mean polygon.
  • displaced points 19 are determined which describe a boundary 20 of the region of interest 11.
  • the boundary 20 lies between the region of interest 11 and the region 10.
  • f describes a function for the calculation of features, which outputs a vector of feature values.
  • the two-dimensional image points are taken into account in groups or sets which are shifted by the transformation p.
  • the displacement parameters are determined iteratively. This means that, in the determination of the i-th displacement parameter, the previously determined displacement parameters are taken into account.
  • a parameter vector p can then be determined from the individual displacement parameters. A combination operation is used to suitably combine the displacement parameters. Depending on the transformation p, this operation can be a simple addition, a multiplication or the like. The function f can be chosen freely. The intensity of the individual image points can be taken into account. It can also be provided that an algorithm is used for the detection of features which determines the features via the set of image points which are transformed with p.
  • the resulting parameter vector p describes the deformation of the mean polygon of the mean region of interest to a suitable polygon for the region of interest 11 of the current image 9. Furthermore, the parameter vector p is transformed into a displacement vector t which describes the displacement of the real pixels of the polygon with respect to the mean polygon. The displacement vector t thus describes the displacement of the individual reference points 15 in the coordinate system of the image 9. For this purpose, the parameter vector is multiplied by a matrix U' which transforms the parameters into the space of the image points. This can be described with the following formula: t = t_m + σ * (U' · p)
  • the displacement vector t encodes the two-dimensional motion of each point in the polygon. This means that the displacement vector t has twice as many parameters as there are points in the curve defining the polygon.
  • t_m describes the mean displacement vector and σ is the vector of the displacement variances.
  • the operation * describes an element-wise multiplication of two vectors resulting in a vector of the same length.
  • the transformation p with the inputs Y_m and t applies the equation (2) and moves the image points in the set using the parameters in the displacement vector t and a mean polygon shape Y_m. How the displacement parameters are applied to the image points depends on the type of the features used and can be chosen as desired.
  • the polygon for the region of interest 11 recognized in a current image 9 can then be described as the mean polygon shape Y_m shifted by the displacement vector t.
  • the training data are determined in a training phase, which is carried out offline.
  • the regression matrices and the matrix U' which maps the low-dimensional deformation parameters to the displacement of the points of the polygon, are determined.
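The online application of these trained quantities can be sketched as follows; combining the displacement parameters by simple addition is one possible choice for the combination operation, and all demo values are assumptions:

```python
import numpy as np

def detect_roi(feature_fn, regressors, U_prime, t_m, sigma, Y_m):
    """Online sketch: accumulate the per-stage displacement parameters into
    the parameter vector p, map p to the displacement vector t, and shift
    the mean polygon Y_m. `feature_fn` stands in for the patent's open
    feature function, evaluated on the current point estimate."""
    N = Y_m.shape[0]
    p = np.zeros(U_prime.shape[1])
    for R in regressors:
        current = Y_m + (t_m + sigma * (U_prime @ p)).reshape(N, 2)
        p = p + R @ feature_fn(current)     # iterative refinement of p
    t = t_m + sigma * (U_prime @ p)         # displacement in image coordinates
    return Y_m + t.reshape(N, 2)            # polygon of the region of interest

# Tiny demo with made-up quantities: 2 points, 1 parameter, 1 stage, and a
# constant feature function.
Y_m = np.zeros((2, 2))
U_prime = np.ones((4, 1))
poly = detect_roi(lambda pts: np.array([1.0]),
                  [np.array([[0.5]])],
                  U_prime, np.zeros(4), np.ones(4), Y_m)
```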
  • Fig. 2 shows a training image 9' which is provided with a camera 4 of a motor vehicle 1.
  • the training image 9' comprises an area 10 which represents parts of the motor vehicle 1.
  • the training image 9' comprises a region of interest 11, which differs from the region 10.
  • a plurality of auxiliary points 12 are defined manually at the boundary between the region of interest 11 and the region 10. In this case, it is sufficient for the auxiliary points 12 to be determined for a single training image 9', since the region of interest 11 does not change over time.
  • since the auxiliary points 12 are specified manually in the training phase, they can differ in their number and position. To determine reliable training data, it is necessary that the auxiliary points 12 be the same for a type of region of interest 11.
  • a type of the region of interest 11 describes the images of the same camera 4 or of the motor vehicle 1 for different scenes.
  • Fig. 3 shows the training image 9' according to Fig. 2, in which the auxiliary points 12 are connected to a curve 13.
  • a spline curve can be used, or a piecewise linear function can also be used.
  • in this way, an approximation of the boundary between the auxiliary points 12 can be determined at each point of the entire region of interest 11.
  • the curve 13 is used to determine a plurality of base points 14.
  • respective auxiliary lines 16 are defined which extend from a center region of the region of interest 11 to an edge region.
  • the auxiliary lines 16 are determined such that in each case two adjacent auxiliary lines 16 enclose the same angle.
  • the intersections of the respective auxiliary lines 16 with the curve 13 then describe the base points 14.
  • the base points 14 describe a base shape which is a polygon. This is illustrated in Fig. 4.
  • the base points 14 are then used as so-called ground-truth annotations. In this case, a training pair is formed for the j-th image with the associated polygon of the two-dimensional points stored in a vector.
  • Fig. 5 shows a first base shape with the first base points 14, which correspond to the base points from Fig. 4.
  • a second base shape with the second base points 14' and a third base shape with the third base points 14" is shown.
  • the second base points 14' and the third base points 14" were determined by means of further training images 9' from other vehicle models and/or camera models.
  • the number and forms of these base shapes are then determined by means of an algorithm.
  • the mean shape Y m which is represented by the reference points 15, is then determined.
  • Fig. 6 shows intermediate points 16, 16', 16" and the corresponding forms which have been determined between the first base shape represented by the base points 14 and the mean shape described by the reference points 15.
  • Fig. 7 shows intermediate points 17, 17', 17" and the corresponding forms which have been determined between the second base shape, which is represented by the base points 14', and the mean shape represented by the reference points 15.
  • Fig. 8 shows intermediate points 18, 18', 18" and the corresponding forms which have been determined between the third base shape, which is described by the base points 14" and the mean shape, which is represented by the reference points 15.
  • T describes the matrix which is composed of all normalized parameters of the training for all n training samples.
  • the vector t_m is the mean displacement vector, and σ is the vector of the parameter variances.
  • the resulting matrix U is an orthogonal rotation matrix. Only the first b singular values on the diagonal of the matrix Σ are significant, where b ≪ l. Therefore, only the first b columns of the matrix U are taken into account and stored in the matrix U'.
  • the parameter vector p with the six displacement parameters is obtained for the j-th sample.
  • all the parameters are likewise obtained for the j-th sample.
  • the feature vector for a single image 9 can be stored in a vector.
  • the training matrices for the individual regression matrices are determined iteratively.
  • the determination of the training matrices for the next training of the regression matrix depends on the previously determined training matrices.
  • the learning of the individual regression matrices is an optimization problem which is solved by the least squares method.
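This least-squares step can be sketched with random stand-in data for the feature vectors and target parameter vectors (the real quantities come from the training phase described above):

```python
import numpy as np

rng = np.random.default_rng(2)

# Rows: one training sample each. Phi holds feature vectors, P the target
# parameter vectors (six displacement parameters per sample); both are
# random stand-ins for the quantities built in the training phase.
Phi = rng.normal(size=(30, 5))
P = rng.normal(size=(30, 6))

# Least-squares fit of one regression matrix R minimizing ||P - Phi R||^2.
R, *_ = np.linalg.lstsq(Phi, P, rcond=None)
P_hat = Phi @ R   # best approximation of the targets from the features
```

At the optimum the residual P - P_hat is orthogonal to the column space of Phi, which is the defining property of the least-squares solution.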
  • points 19 for a region of interest 11 in an image 9 can then be determined taking the equations (1), (2) and (3) into account. This is illustrated by way of example in Fig. 9, where the points 19 for the region of interest 11 are determined from the reference points 15 of the mean shape by means of the curve adjustment.
  • the points 19 result from the adaptation by means of the regression matrices.
  • a regression-based method of machine learning can be provided, with which a function is provided which maps the reference points 15 directly to the boundary 20 of the region of interest 11.
  • This method can also be performed for several images provided with the camera 4.
  • the method provides a polygon comprising the two-dimensional points 19. These points 19 and the polygon respectively lie on the boundary 20 between the region of interest 11 and the region 10.
  • the region of interest 11 can thus be reliably determined.


Abstract

The invention relates to a method for determining a region of interest (11) in an image (9) which is captured by a camera (4) of a motor vehicle (1), in which the image (9) is received from the camera (4), by means of image points of the image (9), a boundary (20) of a region which images an environment (8) of the motor vehicle (1) is detected in the image (9), and the region of interest (11) is determined on the basis of the boundary, wherein a plurality of reference points (15) which describe a reference boundary of a predetermined, mean region of interest is predetermined in the image, a plurality of displacement parameters are determined from the image points which respectively describe a displacement of the reference points (15) to the boundary (20), the displacement parameters are determined iteratively taking into account the previously determined displacement parameters, based on the displacement parameters, a parameter vector is determined and the region of interest (11) is determined by means of a displacement of the respective reference points (15) by the parameter vector.

Description

Method for determining a region of interest in an image captured by a camera of a motor vehicle, control system, camera system as well as motor vehicle
The present invention relates to a method for determining a region of interest in an image captured by a camera of a motor vehicle, in which the image is received from the camera, in the image, a boundary of a region which images an environment of the motor vehicle is determined on the basis of image points of the image and the region of interest is determined on the basis of the boundary. Moreover, the present invention relates to a control device for a camera system. Furthermore, the present invention relates to a camera system for a motor vehicle. Finally, the present invention relates to a motor vehicle.
Presently, the interest is directed to camera systems for motor vehicles. Such camera systems comprise at least one camera by means of which an image, which describes an environment of the motor vehicle, can be recorded. By means of the camera system, objects can be detected in the environment of the motor vehicle on the basis of the recorded images. For this purpose, the images can be subjected to a corresponding object detection algorithm by means of a control device or an image processing device of the camera system. Such camera systems can be used, for example, for pedestrian detection, three-dimensional object detection, for recognizing road markings, for recognizing parking area markings or the like.
In order to reliably operate the camera systems, it is necessary to determine a so-called region of interest (ROI) in the images. The region of interest describes the area of the image which describes the environment of the motor vehicle. The region of interest differs from the region of the image which represents parts of the motor vehicle, for example the bumper. This region of interest is different for different vehicle models and for different camera models. According to the prior art, the region of interest is predetermined manually for the specific vehicle model and the camera before the camera system is used.
It is an object of the present invention to provide a solution how in an image captured by a camera of a motor vehicle a region of interest can be determined more easily and reliably.
According to the invention this object is solved by a method, by a control device, by a camera system and by a motor vehicle having the features according to the respective independent claims. Advantageous further developments of the present invention are the subject matter of the dependent claims.
According to one embodiment of a method for determining a region of interest in an image captured by a camera of a motor vehicle, the image is preferably received from the camera. In addition, a boundary of a region which, in particular, images an environment of the motor vehicle is preferably detected in the image by means of image points of the image. In addition, the region of interest is preferably determined on the basis of the boundary. In this case, it is preferably provided that a plurality of reference points, which in particular describe a reference boundary of a predetermined, mean region of interest, is predetermined in the image. In addition, a plurality of displacement parameters are preferably determined on the basis of the image points, which in particular each describe a displacement of the reference points to the boundary. In this case, it is preferably provided that the displacement parameters are determined iteratively, taking into account the previously determined displacement parameters. Furthermore, a parameter vector is preferably determined on the basis of the displacement parameters.
Furthermore, the region of interest is determined, in particular, by a displacement of the respective reference points by the parameter vector.
A method according to the invention serves for determining a region of interest in an image which is captured by a camera of a motor vehicle. The image is received from the camera. By means of image points of the image, a boundary of a region which images an environment of the motor vehicle is detected in the image, and the region of interest is determined on the basis of the boundary. Furthermore, it is provided that a plurality of reference points which describe a reference boundary of a predetermined, mean region of interest is predetermined in the image. In addition, a plurality of displacement parameters are determined from the image points which respectively describe a displacement of the reference points to the boundary, the displacement parameters are determined iteratively taking into account the previously determined displacement parameters. Based on the displacement parameters, a parameter vector is determined and the region of interest is determined by means of a displacement of the respective reference points by the parameter vector.
The method is intended to determine the region of interest in the image recorded with a camera of the motor vehicle or of the camera system. The region of interest describes the region in the image which images the environment of the motor vehicle. In particular, the region of interest differs from a region of the image which represents parts of the motor vehicle, for example a bumper or an outer covering part. The method can, for example, be carried out with a control device or an electronic control device of the camera system. The image captured by the camera is received with the control device. To determine the region of interest, the image points or at least some of the image points of the image are examined. It is thereby taken into account that the image points which describe the region of interest or the environment of the motor vehicle differ from the image points which describe parts of the motor vehicle with regard to color, texture and/or brightness. By examining the image points, the boundary of the area which describes the environment of the motor vehicle can then be determined. Within this boundary, the region of interest can then be defined.
According to an essential aspect of the present invention, it is provided that within the image the plurality of reference points is predetermined. These reference points describe a previously determined, mean region of interest. This mean region of interest was previously determined in a training phase on the basis of training images. In particular, the mean region of interest has been determined based on several different camera models and/or vehicle models. The individual reference points can describe a mean polygon. This means that the mean polygon is obtained when the individual reference points are connected to one another. In particular, a closed curve is obtained by connecting the individual reference points. These reference points are now used as the initial position to determine the actual region of interest in the image. For this purpose, the plurality of displacement parameters is determined from the image points. In this case, it is preferably provided that a displacement parameter is determined for each of the reference points. The respective displacement parameters describe the displacement of the reference points to the boundary. The individual reference points should therefore be moved so that they are on the boundary of the actual region of interest.
In this case, it is provided that the respective displacement parameters are determined iteratively. This means that the previously determined displacement parameters are taken into account when determining a displacement parameter. The first displacement parameter is determined based on the image points in the image. The first displacement parameter describes how a first reference point is to be moved in the direction of the boundary. In the determination of the second displacement parameter, the previously determined first displacement parameter is taken into account. In determining the third displacement parameter, the first and second displacement parameters are taken into account, and so on. A parameter vector can then be determined based on the individual displacement parameters. In particular, this parameter vector describes the combination of the individual displacement parameters. For example, the parameter vector can be determined by the addition or multiplication of the individual displacement parameters. The parameter vector describes how the individual reference points are to be moved so that they lie on the boundary of the region of interest. Thus, a regression-based method for determining the boundary can be provided based on the mean region of interest or the reference points previously determined in the training phase. The actual region of interest can then be determined in the image from this mean region of interest or the mean polygon. Thus, the method can be used for different camera models and/or vehicle models. Overall, the region of interest can thus be determined in a simple and reliable manner.
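The iterative estimation described above can be sketched in code. The following is only an illustration of the scheme, not the patented implementation: all identifiers (`feature_fn`, `shift_fn`, `regressors`) are illustrative stand-ins, and the combination of displacement parameters is taken to be simple addition, which the text names as one possibility.

```python
import numpy as np

def estimate_roi_polygon(image, mean_points, feature_fn, regressors, shift_fn):
    # p accumulates the displacement parameters determined so far.
    p = np.zeros(regressors[0].shape[0])
    for R in regressors:
        # Reference points moved by the parameters from earlier iterations.
        points = shift_fn(mean_points, p)
        # i-th displacement parameters from features sampled around those points.
        delta_p = R @ feature_fn(image, points)
        # Combine with the previously determined parameters (here: addition).
        p = p + delta_p
    return shift_fn(mean_points, p), p
```

Each regression matrix thus sees features computed at the points already displaced by all earlier iterations, which is what makes the estimation iterative rather than a single-shot regression.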
Preferably, a displacement vector is determined based on the parameter vector, which describes the displacement of the respective reference points in a coordinate system of the image, and for determining the region of interest, the respective reference points are displaced by the displacement vector. Based on the parameter vector, the displacement vector is determined, which describes the displacement of the individual reference points in the two-dimensional coordinate system of the image. For example, the displacement vector can describe, for each of the reference points, the displacement in a first direction (x-direction) and a second direction (y-direction) perpendicular thereto. This means that the displacement vector has twice as many entries as there are reference points. In order to determine the displacement vector, a corresponding matrix can be determined, which maps the parameter vector to the displacement vector. On the basis of a description of the reference points or of the mean polygon in the coordinate system of the image, the displacement of the individual reference points can then be carried out by means of the displacement vector. This allows a reliable determination of the boundary of the region of interest in the image.
In a further embodiment, a function is determined which, for a group of the image points, determines a feature which describes the boundary, and the respective displacement parameters are determined by means of the function. The method can take into account a respective group of image points or a set of image points. The number of image points in the group of image points is arbitrary. The group of image points can be freely selected in the image. However, it is preferably provided that the group of image points is arranged in the region of the reference points. The function can be used to determine the respective displacement parameters. For this purpose, for example, the intensity values of the individual image points can be checked. However, known methods for edge detection or the like can also be used. The boundary of the region of interest can be determined by the color of the image points, the intensity of the image points, the texture of the image points or the like. The function can be used to determine the direction of displacement for the reference point within the group of image points. In this case, it is in particular provided that the displacement for the respective reference point is determined in the direction of a normal. For this purpose, the adjacent reference points can also be considered. This function is now used to determine each of the displacement parameters. Thus, the individual displacement parameters can be determined in a reliable manner.
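One possible feature function of the kind described above is sketched below: for every point, it collects a small square group of surrounding image intensities and summarizes them by mean and standard deviation (a simple texture cue). The window size and the choice of plain intensities are assumptions; edge responses or other texture features could equally be used, as the text notes.

```python
import numpy as np

def point_features(image, points, half_window=2):
    # image: 2-D grayscale array; points: (n, 2) array of (x, y) positions.
    h, w = image.shape
    feats = []
    for x, y in np.rint(points).astype(int):
        # Clip the square window to the image borders.
        x0, x1 = max(x - half_window, 0), min(x + half_window + 1, w)
        y0, y1 = max(y - half_window, 0), min(y + half_window + 1, h)
        patch = image[y0:y1, x0:x1]
        feats.append(patch.mean())  # mean intensity of the group
        feats.append(patch.std())   # simple texture measure
    return np.asarray(feats)
```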
In a further embodiment, in a training phase a plurality of base shapes are determined for different regions of interest and the mean region of interest is predetermined on the basis of the plurality of base shapes. The present method for determining the actual region of interest is performed online, taking into account information from the previously performed training phase. Different base shapes are determined in this training phase. These base shapes describe the shapes of different regions of interest. In particular, it is provided that the base shapes are determined by different training images from different types of motor vehicles and/or cameras. Thus, in the training phase, a plurality of training images can be used, which differ from each other with respect to the vehicle type, the installation position of the camera and/or the design of the camera. Based on these different base shapes, the mean region of interest or the mean polygon can now be determined. The respective base shapes can comprise base points which together describe a polygon. From the respective corresponding base points of the plurality of base shapes, an average value can now be determined in each case in order to determine the reference points. Thus, the reference points describe a mean value of different regions of interest, which were determined in the training phase. Thus, the reference points and the mean region of interest represent a good starting position for determining the actual region of interest in the image.
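Assuming corresponding base points of the individual base shapes are averaged coordinate-wise, as the paragraph above describes, the mean region of interest can be computed in one line:

```python
import numpy as np

def mean_shape(base_shapes):
    # base_shapes: array-like of shape (num_shapes, num_points, 2);
    # corresponding base points are averaged to give the reference points.
    return np.asarray(base_shapes, dtype=float).mean(axis=0)
```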
It is preferably provided that, in the training image, auxiliary points at a boundary of the region of interest are determined for determining the respective base shapes, and a curve is determined by means of the auxiliary points. A training image is used to determine the respective base shapes. This training image describes an image taken with a camera mounted on a motor vehicle. The auxiliary points can be entered manually at the boundary of the region of interest in the training image. To determine the base shape for this training image, the individual auxiliary points are connected to form the curve. In a further embodiment, a plurality of auxiliary lines is determined in the training image, wherein the auxiliary lines extend from a center point of the region of interest to an edge of the training image, and wherein adjacent auxiliary lines each enclose the same angle, and on the basis of the respective intersections between the auxiliary lines and the curve, base points are determined for the base shape. As already explained, the reference points are determined on the basis of base points which describe the respective base shapes. In order to determine the base points, it is provided that the base shape is divided evenly. For this purpose, the plurality of auxiliary lines is determined. The auxiliary lines each extend from the center of the region of interest to the edge of the training image. In this case, two adjacent auxiliary lines each enclose the same angle to each other. The base points can then be determined from the intersections of the respective auxiliary lines and the curve. Thus, it can be achieved in a simple manner that uniformly distributed base points are determined on the basis of the curve.
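The equal-angle construction can be approximated as follows. The curve is represented by dense point samples, and for each imaginary ray from the center the sample whose direction is closest to the ray is taken; this nearest-direction lookup is a simplification of the exact ray/curve intersection and is used here only to illustrate the idea.

```python
import numpy as np

def resample_boundary(curve_points, center, num_base_points):
    pts = np.asarray(curve_points, dtype=float)
    rel = pts - np.asarray(center, dtype=float)
    # Angle of every curve sample as seen from the center point.
    sample_angles = np.arctan2(rel[:, 1], rel[:, 0])
    base = []
    for k in range(num_base_points):
        # Rays at equal angular spacing, covering the full circle.
        ray_angle = -np.pi + 2.0 * np.pi * k / num_base_points
        # Smallest wrapped angular difference between each sample and the ray.
        diff = np.angle(np.exp(1j * (sample_angles - ray_angle)))
        base.append(pts[np.argmin(np.abs(diff))])
    return np.asarray(base)
```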
Furthermore, it is advantageous if a plurality of intermediate points are determined on the basis of the base points of the respective base shapes and the reference points, and a reference parameter vector is determined on the basis of the differences between the respective intermediate points and the reference points. In the training phase, several intermediate points are also determined. These intermediate points are determined between the reference points of the mean region of interest and the base points of one of the base shapes. These intermediate points describe the transition or the displacement from the respective reference points to the base points of the base shapes. Thus, the respective differences of the intermediate points to the corresponding reference points can be used to describe the displacement of the reference points. Thus, in the training phase it can be determined how the displacement of the individual reference points takes place. This information can then be used in the online phase of the procedure.
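The transition from the reference points to the base points of a base shape can, for instance, be sampled by linear interpolation; three steps mirror the three intermediate forms shown in Figs. 6 to 8. Linear interpolation is an assumption here — the patent does not fix the interpolation scheme.

```python
import numpy as np

def intermediate_shapes(reference_points, base_points, steps=3):
    reference_points = np.asarray(reference_points, dtype=float)
    base_points = np.asarray(base_points, dtype=float)
    # Each step yields one set of intermediate points describing a partial
    # displacement of the reference points towards the base points.
    return [reference_points + (base_points - reference_points) * (k + 1) / (steps + 1)
            for k in range(steps)]
```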
In a further embodiment, the reference parameter vector is determined from the respective differences by means of a principal component analysis. The mean region of interest can have, for example, 100 reference points. If now a displacement vector is determined for these reference points, it has a length of 200, since it describes the displacement of each reference point in a first direction (x-direction) and in a second direction (y-direction). This would mean that all 200 values have to be estimated correctly using the image points in order to reliably determine the actual region of interest. This would entail a great calculation effort. In addition, a single error in determining one of the parameters could falsify the result. In the present case, use is made of the fact that the movement or displacement of one of the reference points is not independent of the displacement of the adjacent points, since these together form a polygon or a curve. For this purpose, the known method of the principal component analysis is used to reduce the number of parameters. Using the principal component analysis, for example, the number of parameters can be reduced to six parameters. These parameters can then be described in the reference parameter vector. By reducing the parameters, a significantly more robust algorithm can be provided, which is less error-prone.
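The dimensionality reduction can be sketched as a principal component analysis of the training displacement vectors. The choice of six components follows the example in the text; returning the per-coordinate spread of the displacements as σ is an assumption about how the "vector of the displacement variances" is formed.

```python
import numpy as np

def pca_deformation_basis(displacements, num_components=6):
    # displacements: one row per training shape, each of length 2n
    # (x and y displacement of every polygon point relative to the mean shape).
    t_m = displacements.mean(axis=0)
    centered = displacements - t_m
    # Principal directions of the training displacements via SVD.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    U = vt[:num_components].T        # (2n, k); maps parameters to displacements
    sigma = centered.std(axis=0)     # per-coordinate spread (sigma in the text)
    return t_m, U, sigma
```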
In this case, it is preferably provided that regression matrices for the respective displacement parameters are determined by means of the reference parameter vector. The reference parameter vector can be determined from the individual reference points and the intermediate points. For this purpose, different training matrices, for example, can be used. These training matrices can be determined iteratively. Based on the training matrices, the reference parameter vector can then be determined. Based on the reference parameter vector, regression matrices can then be determined. These regression matrices can then be taken into account when determining the respective displacement parameters in the online mode.
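The patent does not spell out how the regression matrices are computed. One plausible scheme, shown only as an assumption, is a per-stage ridge-regularised least-squares fit that maps stacked training feature vectors to the desired parameter updates:

```python
import numpy as np

def train_regressors(features_per_stage, target_updates, ridge=1e-3):
    # features_per_stage[i]: (num_samples, num_features) matrix F for stage i
    # target_updates[i]:     (num_samples, num_params) desired updates dP
    regressors = []
    for F, dP in zip(features_per_stage, target_updates):
        # R minimises ||F @ R.T - dP||^2 + ridge * ||R||^2 (ridge regression).
        A = F.T @ F + ridge * np.eye(F.shape[1])
        R = np.linalg.solve(A, F.T @ dP).T
        regressors.append(R)
    return regressors
```

In a cascaded setting, the features for stage i would be recomputed at the shapes produced by the stages already trained, which matches the iterative determination described above.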
A control device according to the invention for a camera system of a motor vehicle is adapted for performing a method according to the invention and the advantageous embodiments thereof. The control device may be an electronic control unit (ECU) of the motor vehicle. The control device can also be provided by a computing device, a microprocessor, a digital signal processor or the like. The control device can receive the image or images taken with the camera. Furthermore, an object detection algorithm can be implemented by means of the control device in order to detect features which describe the boundary of the region of interest in the image points of the image. By means of the control device, the region of interest can then be determined on the basis of the predetermined reference points for the mean region of interest or the mean polygon.
Furthermore, a computer program can be provided which is stored, for example, on a storage medium, the computer program being designed to perform the method described here when it is executed on the control device or a computing device of the control device. A camera system for a motor vehicle according to the invention comprises a control device according to the invention. In addition, the camera system includes at least one camera. Preferably, the camera system comprises a plurality of cameras, which can be arranged distributed, for example, on the motor vehicle. The respective cameras are connected to the control device via a data line for data transmission or transmission of the images. By means of the camera system, objects in the surroundings of the motor vehicle can then be recognized within the region of interest.
A motor vehicle according to the invention comprises a camera system according to the invention. The motor vehicle is preferably designed as a passenger car. It may also be provided that the motor vehicle is designed as a commercial vehicle.
The preferred embodiments presented with respect to the method according to the invention and the advantages thereof correspondingly apply to the control device according to the invention, the camera system according to the invention as well as to the motor vehicle according to the invention.
Further features of the invention are apparent from the claims, the figures and the description of figures. The features and feature combinations mentioned above in the description as well as the features and feature combinations mentioned below in the description of figures and/or shown in the figures alone are usable not only in the respectively specified combination, but also in other combinations or alone without departing from the scope of the invention. Thus, implementations are also to be considered as encompassed and disclosed by the invention, which are not explicitly shown in the figures and explained, but arise from and can be generated by separated feature combinations from the explained implementations. Implementations and feature combinations are also to be considered as disclosed, which thus do not have all of the features of an originally formulated independent claim. Moreover, implementations and feature combinations are to be considered as disclosed, in particular by the implementations set out above, which extend beyond or deviate from the feature combinations set out in the relations of the claims.
In the following, the invention is explained in more detail based on preferred embodiments as well as with reference to the attached drawings.
These show in:
Fig. 1 a motor vehicle according to an embodiment of the invention which comprises a camera system;
Fig. 2 a training image in which a plurality of auxiliary points are determined manually at a boundary of a region of interest;
Fig. 3 the training image according to Fig. 2, in which a curve is determined on the basis of the plurality of auxiliary points;
Fig. 4 the training image according to Fig. 3, in which a plurality of base points are determined for a base shape by means of a plurality of auxiliary lines and the curve;
Fig. 5 respective base points for three different base shapes as well as reference points of a mean region of interest, which has been determined on the basis of the respective base points;
Fig. 6 respective intermediate points which have been determined between the reference points of the mean region of interest and the base points of a first base shape;
Fig. 7 respective intermediate points which have been determined between the reference points of the mean region of interest and the base points of a second base shape;
Fig. 8 respective intermediate points which have been determined between the reference points of the mean region of interest and the base points of a third base shape;
Fig. 9 an image captured by a camera of the motor vehicle in which the boundary of a region of interest is determined by means of the reference points of the mean region of interest.
In the figures, identical or functionally identical elements are provided with the same reference characters.
Fig. 1 shows a motor vehicle 1 according to an embodiment of the present invention in a plan view. In the present case, the motor vehicle 1 is designed as a passenger car. The motor vehicle 1 comprises a camera system 2. The camera system 2 in turn comprises a control device 3, which can be formed, for example, by an electronic control device of the motor vehicle 1. In addition, the camera system 2 comprises at least one camera 4. In the present exemplary embodiment, the camera system 2 comprises four cameras 4 which are arranged distributed on the motor vehicle 1. In the present case, one of the cameras 4 is arranged in a rear region 5 of the motor vehicle 1, one of the cameras 4 is arranged in a front region 7 of the motor vehicle 1, and the other two cameras 4 are arranged in a respective side region 6, in particular in a region of the side mirrors. The number and arrangement of the cameras 4 of the camera system 2 is to be understood as purely exemplary.
An environment 8 or an environmental region of the motor vehicle 1 can be detected with the cameras 4. The four cameras 4 are preferably of identical construction. In particular, a sequence of images 9 can be provided with the cameras 4, which describe the environment 8. These images 9 or image data can be transmitted from the cameras 4 to the control device 3. By means of the control device 3, a display device (not shown) of the motor vehicle 1 can be controlled so that the images 9 of the cameras 4 can be displayed to the driver. The camera system 2 thus serves to support the driver of the motor vehicle 1 while driving the motor vehicle 1. The camera system 2 can, for example, be a so-called electronic rear-view mirror or a parking assistance system or another system. In particular, the camera system 2 can recognize objects in the environment 8 of the motor vehicle 1.
By means of the control device 3, a region of interest 11 is to be determined in the images 9. The region of interest 11 describes the region of the image 9 which images the environment 8 of the motor vehicle 1. The region of interest 11 differs from a region 10 in the image 9, which images parts of the motor vehicle 1.
To determine the region of interest 11, a plurality of reference points 15 are provided. These reference points 15 describe a mean region of interest, which was previously determined in a training phase. The reference points 15 can be connected to a mean polygon. By means of a method which is carried out with the control device 3, it is now determined in an online phase how these reference points 15 are to be displaced so that they describe the region of interest 11 in the image 9. Thus, by means of the reference points 15, displaced points 19 are described which describe a boundary 20 of the region of interest 11. Here, the boundary 20 is between the region of interest 11 and the region 10. This is shown in Fig. 9. This method can be used when the region of interest 11 is to be determined for the first time for a motor vehicle 1 and/or a specific camera 4. To determine the displacement of the individual reference points 15, displacement parameters are determined or estimated in the online phase. The estimation of the displacement parameters for the deformation of the region of interest 11 can be described by the following equation:
Δp_i = R_i · f(τ(X, p_(i-1))),  with p_(i-1) = Δp_1 ∘ Δp_2 ∘ ... ∘ Δp_(i-1)    (1)
Here, f describes a function for the calculation of features, which outputs a vector of feature values. The two-dimensional image points are taken into account in groups or sets which are shifted by the transformation τ. As can be seen from equation (1), the displacement parameters are determined iteratively. This means that, in the determination of the i-th displacement parameter Δp_i, the previously determined displacement parameters are taken into account. This sequence of displacement parameters Δp_1, ..., Δp_N is estimated from a single image 9 and is not dependent on time or changing images. A parameter vector p can then be determined from the individual displacement parameters. The operation ∘ is used to suitably combine the displacement parameters. Depending on the transformation τ, this operation can be a simple addition, a multiplication or the like. The function f can be determined freely. The intensity of the individual image points can be taken into account. It can also be provided that an algorithm is used for the detection of features which determines the features via the set of image points which are transformed with τ.
The resulting parameter vector p describes the deformation of the mean polygon of the mean region of interest to a suitable polygon for the region of interest 1 1 of the current image 9. Furthermore, the parameter vector p is transformed into a displacement vector t which is the displacement of the real pixels of the polygon with respect to the mean polygon. The displacement vector t thus describes the displacement of the individual reference points 1 5 in the coordinate system of the image 9. For this purpose, the parameter vector is multiplied by a matrix U' which transforms the parameters in the space of the image points. This can be described with the following formula:
t = t_m + σ * (U' · p)    (2)
The displacement vector t encodes the two-dimensional motion of each point in the polygon. This means that the displacement vector t has twice as many parameters as there are points in the curve defining the polygon. t_m describes the mean parameter vector and σ is the vector of the displacement variances. The operation * describes an element-wise multiplication of two vectors resulting in a vector of the same length. The transformation τ with the inputs X and p applies equation (2) and moves the image points in the set using the parameters in the displacement vector t and a mean polygon shape Y_m. How the displacement parameters are applied to the image points depends on the type of the characteristics used and can be chosen as desired.
The polygon for the region of interest 11 recognized in a current image 9 can be described by the following equation:
Y = Y_m + t
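Read together, the two relations above amount to a linear expansion of the parameter vector followed by a shift of the mean polygon. A minimal sketch, with the caveat that the exact placement of the element-wise scaling by σ is an assumption (the text only states that * is an element-wise multiplication):

```python
import numpy as np

def parameters_to_polygon(p, U, sigma, t_m, Y_m):
    # Expand the low-dimensional parameter vector into the displacement
    # vector t: mean displacement plus element-wise scaled projection.
    t = t_m + sigma * (U @ p)
    # Shift the mean polygon shape by the displacement vector.
    return Y_m + t
```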
In the following, it will be explained how the variables R_1, ..., R_N, U', t_m and σ can be determined on the basis of training data. The training data are determined in a training phase, which is carried out offline. The regression matrices R_1, ..., R_N and the matrix U', which maps the low-dimensional deformation parameters to the displacement of the points of the polygon, are determined.
Fig. 2 shows a training image 9' which is provided with a camera 4 of a motor vehicle 1. The training image 9' comprises a region 10 which represents parts of the motor vehicle 1. In addition, the training image 9' comprises a region of interest 11, which differs from the region 10. In the training image 9', a plurality of auxiliary points 12 are defined manually at the boundary between the region of interest 11 and the region 10. In this case, it is sufficient for the auxiliary points 12 to be determined for a single training image 9', since the region of interest 11 does not change over time.
Since the auxiliary points 12 are specified manually in the training phase, they can differ in their number and position. To determine reliable training data, it is necessary that the auxiliary points 12 be the same for a type of region of interest 11. A type of the region of interest 11 describes the images of the same camera 4 or of the motor vehicle 1 for different scenes.
Fig. 3 shows the training image 9' according to Fig. 2, in which the auxiliary points 12 are connected to a curve 13. In order to determine the curve 13, a spline curve or a piecewise linear function can be used. Thus, an approximation of the boundary can be determined at each point of the entire region of interest 11 on the basis of the auxiliary points 12.
Subsequently, the curve 13 is used to determine a plurality of base points 14. For this purpose, respective auxiliary lines 16 are defined which extend from a center region of the region of interest 11 to an edge region. In this case, the auxiliary lines 16 are determined such that in each case two adjacent auxiliary lines 16 enclose the same angle. The intersections of the respective auxiliary lines 16 with the curve 13 then describe the base points 14. The base points 14 describe a base shape which is a polygon. This is illustrated in Fig. 4.
These base points 14 are then used as so-called ground-truth annotations. In this case, the pair (X_j, Y_j) is used as training pair for the j-th image, with the associated polygon of the two-dimensional points stored in a vector Y_j.
The automatic verification of the ground-truth annotation and the extraction of the base shapes are carried out. For example, it can be provided that a plurality of sequences are carried out in order to determine base points 14, 14', 14". For this purpose, Fig. 5 shows a first base shape with the first base points 14, which correspond to the base points from Fig. 4. In addition, a second base shape with the second base points 14' and a third base shape with the third base points 14" is shown. The second base points 14' and the third base points 14" were determined by means of further training images 9' from other vehicle models and/or camera models. The number and forms of these base shapes are then determined by means of an algorithm. On the basis of the three base shapes, the mean shape Y_m, which is represented by the reference points 15, is then determined.
In addition, it is provided that perturbations are determined on the basis of all the determined base shapes. For this purpose, respective intermediate points 16, 16', 16", 17, 17', 17", 18, 18', 18" are determined. Fig. 6 shows intermediate points 16, 16', 16" and the corresponding shapes which have been determined between the first base shape, represented by the base points 14, and the mean shape, described by the reference points 15. Fig. 7 shows intermediate points 17, 17', 17" and the corresponding shapes which have been determined between the second base shape, which is represented by the base points 14', and the mean shape represented by the reference points 15. Finally, Fig. 8 shows intermediate points 18, 18', 18" and the corresponding shapes which have been determined between the third base shape, which is described by the base points 14", and the mean shape, which is represented by the reference points 15.
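One simple way to realize such intermediate shapes, assuming plain linear blending between a base shape and the mean shape (the document does not fix the blending scheme), is:

```python
import numpy as np

def intermediate_shapes(base, mean, steps=3):
    """Linearly blend a base shape towards the mean shape, producing
    `steps` intermediate shapes (excluding both endpoints)."""
    base = np.asarray(base, dtype=float)
    mean = np.asarray(mean, dtype=float)
    return [base + (mean - base) * k / (steps + 1) for k in range(1, steps + 1)]

# invented example: a base shape blended towards a smaller mean shape
base = np.array([[4.0, 0.0], [0.0, 4.0]])
mean = np.array([[2.0, 0.0], [0.0, 2.0]])
inter = intermediate_shapes(base, mean, steps=3)
```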
These points 16, 16', 16", 17, 17', 17", 18, 18', 18" or the shapes described thereby serve as meaningful training shapes for determining the regression matrices R_k.
The deformation of the associated polygons or shapes can then be determined by means of the parameter vector t = Y - Ym, which has the length l.
For example, if 100 reference points 15 are used for the region of interest 11 to determine the shape or the polygon, they would be described in a displacement vector t having a length l = 200, since this describes the displacement in the x-direction and the y-direction. This means that all 200 values must be correctly estimated on the basis of the image data in order to be able to determine a good shape for the region of interest 11 in the image 9. This can cause difficulties, since an error in the estimation of even one of the parameters can already distort the result, and there are too many parameters which must be estimated. In the present case, it is considered that the movement of one of the reference points 15 is not independent of the movement of the adjacent reference points 15, since they lie on a polygon or a curve. Therefore, a principal component analysis is used to reduce the number of parameters. According to the principal component analysis, six parameters are obtained in a reference parameter vector, by means of which the entire deformation of a curve can be determined. As a result, the algorithm becomes more robust and less error-prone.
The principal component analysis is performed on the basis of a singular value decomposition svd:

U Σ V^T = svd(T)

T describes the matrix which is composed of all normalized parameters of the training for all n training samples:

T = [(t^(1) - tm) / σ, ..., (t^(n) - tm) / σ]

where the division by σ is performed element-wise.
The vector tm is the mean displacement vector, and σ is the vector of the parameter variances. The resulting matrix U is an orthogonal rotation matrix. Only the first b singular values on the diagonal of the matrix Σ are significant, where b ≪ l. Therefore, only the first b columns of the matrix U are taken into account and stored in the matrix U':

U' = [u_1, ..., u_b]

where u_i denotes the i-th column of U.
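The dimensionality reduction above can be illustrated with NumPy's SVD. The data, the vector length l, and the number b of kept components below are invented toy values (the document itself arrives at b = 6 parameters); the normalization uses the per-coordinate standard deviation as an interpretation of the variance vector σ:

```python
import numpy as np

rng = np.random.default_rng(0)
l, n, b = 8, 40, 3                          # vector length, samples, kept components

# invented displacement vectors t^(j), one per column
T_raw = rng.standard_normal((l, n))
t_m = T_raw.mean(axis=1, keepdims=True)     # mean displacement vector
sigma = T_raw.std(axis=1, keepdims=True)    # σ, here taken as std. dev. per coordinate
T = (T_raw - t_m) / sigma                   # normalized parameter matrix

U, S, Vt = np.linalg.svd(T, full_matrices=False)
U_b = U[:, :b]                              # keep only the first b columns of U

# project one sample into the reduced parameter space and reconstruct it
p = U_b.T @ T[:, 0]
t_approx = (U_b @ p) * sigma[:, 0] + t_m[:, 0]
```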
The parameter vector p^(j) with the six displacement parameters is obtained for the j-th sample as

p^(j) = U'^T ((t^(j) - tm) / σ)

In order to train the regression matrices, all the deformation vectors p^(j) and all the corresponding feature values f^(j) are collected.
For the simplicity of the following equations, the feature vector for a single image 9 can be stored in f^(j). Then the training matrices P_0 and F_0 for the first of the individual regression matrices are determined as follows:

P_0 = [p^(1), ..., p^(n)],  F_0 = [f^(1), ..., f^(n)]

The determination of the training matrices for the next training of the regression matrix depends on the previously determined training matrices:

p_(k+1)^(j) = p_k^(j) - R_k f_k^(j)

The following training matrices can then be determined as follows:

P_(k+1) = [p_(k+1)^(1), ..., p_(k+1)^(n)],  F_(k+1) = [f_(k+1)^(1), ..., f_(k+1)^(n)]

The learning of the individual regression matrices is an optimization problem which is solved by the least squares method:

R_k = argmin_R Σ_j || p_k^(j) - R f_k^(j) ||²

This equation can be modified accordingly into the closed-form solution:

R_k = P_k F_k^T (F_k F_k^T)^(-1)
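One stage of such a least-squares training can be sketched as follows. The toy data is generated from a known linear relation purely so the recovery can be checked; this is an assumption of the example, not part of the described method:

```python
import numpy as np

rng = np.random.default_rng(1)
b, d, n = 6, 20, 100                        # parameters, feature length, samples

F = rng.standard_normal((d, n))             # feature vectors f^(j) as columns
R_true = rng.standard_normal((b, d))        # hidden linear relation (toy data only)
P = R_true @ F                              # target deformation parameters p^(j)

# least-squares solution of min_R ||P - R F||^2 (one stage of the cascade)
R = np.linalg.lstsq(F.T, P.T, rcond=None)[0].T

# the residual targets for the next stage depend on this stage's result
P_next = P - R @ F
```

With noiseless data and more samples than feature dimensions, the least-squares fit recovers the generating matrix exactly; in practice the residuals P_next would be nonzero and drive the next cascade stage.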
Starting from the mean shape Ym, points 19 for a region of interest 11 in an image 9 can then be determined taking the equations (1), (2) and (3) into account. This is illustrated by way of example in Fig. 9, where the points 19 for the region of interest 11 are determined from the reference points 15 of the mean shape by means of the curve adjustment. The points 19 result from the adaptation by means of the regression matrices R_k.
In this way, a regression-based machine learning method can be provided, with which a function is obtained that maps the reference points 15 directly to the boundary 20 of the region of interest 11. This method can also be performed for several images provided by the camera 4. The method provides a polygon comprising the two-dimensional points 19. These points 19 and the polygon respectively lie on the boundary 20 between the region of interest 11 and the region 10. Thus, the region of interest 11 can be reliably determined.
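The run-time fitting can be sketched as an iterative shift of the mean shape. For brevity this illustrative sketch regresses point displacements directly rather than the reduced PCA parameters, and the toy feature function is invented so that a single identity regressor suffices:

```python
import numpy as np

def fit_shape(mean_shape, regressors, feature_fn):
    """Iteratively shift the mean shape towards the image boundary.

    Each stage extracts features around the current shape estimate and
    applies its regression matrix to obtain a displacement update.
    """
    shape = np.asarray(mean_shape, dtype=float).copy()
    for R in regressors:
        f = feature_fn(shape)            # features at the current estimate
        t = R @ f                        # estimated displacement vector
        shape = shape + t.reshape(shape.shape)
    return shape

mean_shape = np.zeros((4, 2))
target = np.array([[2.0, 0.0], [0.0, 2.0], [-2.0, 0.0], [0.0, -2.0]])

# toy feature function: the residual to the target boundary, so that a
# single identity regressor moves the shape exactly onto the boundary
feature_fn = lambda s: (target - s).ravel()
regressors = [np.eye(8)]
fitted = fit_shape(mean_shape, regressors, feature_fn)
```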

Claims

1. Method for determining a region of interest (11) in an image (9) which is captured by a camera (4) of a motor vehicle (1), in which the image (9) is received from the camera (4), by means of image points of the image (9), a boundary (20) of a region which images an environment (8) of the motor vehicle (1) is detected in the image (9), and the region of interest (11) is determined on the basis of the boundary, characterized in that
a plurality of reference points (15) which describe a reference boundary of a predetermined, mean region of interest is predetermined in the image, a plurality of displacement parameters are determined from the image points which respectively describe a displacement of the reference points (15) to the boundary (20), the displacement parameters are determined iteratively taking into account the previously determined displacement parameters, based on the displacement parameters, a parameter vector is determined and the region of interest (11) is determined by means of a displacement of the respective reference points (15) by the parameter vector.
2. Method according to claim 1,
characterized in that
a displacement vector which describes the displacement of the respective reference points (15) in a coordinate system of the image (9) is determined by means of the parameter vector, and the respective reference points (15) are displaced with the displacement vector for determining the region of interest (11).
3. Method according to claim 1 or 2,
characterized in that
a function is determined which determines a feature which describes the boundary (20) for a group of the image points, and the respective displacement parameters are determined by means of the function.
4. Method according to any one of the preceding claims,
characterized in that
in a training phase, a plurality of base shapes are determined for different regions of interest (11) and the mean region of interest is predetermined on the basis of the plurality of base shapes.
5. Method according to claim 4,
characterized in that
the base shapes are determined by different training images (9') from different types of motor vehicles (1) and/or cameras (4).
6. Method according to claim 5,
characterized in that
a plurality of auxiliary points (12) are determined at a boundary of the region of interest (11) for determining the respective base shape in the training image (9'), and a curve (13) is determined by means of the auxiliary points (12).
7. Method according to claim 6,
characterized in that
a plurality of auxiliary lines (16) is determined in the training image (9'), wherein the auxiliary lines (16) extend from a center point of the region of interest (11) to an edge of the training image (9'), and wherein adjacent auxiliary lines (16) each enclose the same angle, and on the basis of the respective intersections between the auxiliary lines (16) and the curve (13), base points (14, 14', 14") are determined for the base shape.
8. Method according to claim 7,
characterized in that
a plurality of intermediate points (16, 16', 16", 17, 17', 17", 18, 18', 18") are determined on the basis of the base points (14, 14', 14") of the respective base shapes and the reference points (15) and a reference parameter vector is determined on the basis of the differences between the respective intermediate points (16, 16', 16", 17, 17', 17", 18, 18', 18") and the reference points (15).
9. Method according to claim 8,
characterized in that
the reference parameter vector is determined on the basis of the respective differences by means of a principal component analysis.
10. Method according to claim 8 or 9,
characterized in that
regression matrices for the respective displacement parameters are determined based on the reference parameter vector.
11. Control device (3) for a camera system (2) of a motor vehicle (1), which is adapted for performing a method according to any one of the preceding claims.
12. Camera system (2) for a motor vehicle (1) comprising a control device (3) according to claim 11 and at least one camera (4).
13. Motor vehicle (1) comprising a camera system (2) according to claim 12.
PCT/EP2018/077592 2017-10-11 2018-10-10 Method for determining a region of interest in an image captured by a camera of a motor vehicle, control system, camera system as well as motor vehicle WO2019072911A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
DE102017123582.5A DE102017123582A1 (en) 2017-10-11 2017-10-11 Method for determining a region of interest in an image taken by a camera of a motor vehicle, control device, camera system and motor vehicle
DE102017123582.5 2017-10-11

Publications (1)

Publication Number Publication Date
WO2019072911A1 (en) 2019-04-18

Family

ID=63857909

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2018/077592 WO2019072911A1 (en) 2017-10-11 2018-10-10 Method for determining a region of interest in an image captured by a camera of a motor vehicle, control system, camera system as well as motor vehicle

Country Status (2)

Country Link
DE (1) DE102017123582A1 (en)
WO (1) WO2019072911A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102022119751A1 (en) 2022-08-05 2024-02-08 Connaught Electronics Ltd. Determining an area of interest from camera images
WO2024028242A1 (en) 2022-08-05 2024-02-08 Connaught Electronics Ltd. Determining a region of interest from camera images

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090128309A1 * 2007-11-16 2009-05-21 Valeo Vision Method of detecting a visibility interference phenomenon for a vehicle
US20150206015A1 * 2014-01-23 2015-07-23 Mitsubishi Electric Research Laboratories, Inc. Method for Estimating Free Space using a Camera System
US20170091565A1 * 2015-09-30 2017-03-30 Denso Corporation Object detection apparatus, object detection method, and program

Also Published As

Publication number Publication date
DE102017123582A1 (en) 2019-04-11

Similar Documents

Publication Publication Date Title
CN107577988B (en) Method, device, storage medium and program product for realizing side vehicle positioning
US9117122B2 (en) Apparatus and method for matching parking-lot outline
US9113049B2 (en) Apparatus and method of setting parking position based on AV image
Yu et al. Lane boundary detection using a multiresolution hough transform
CN106952308B (en) Method and system for determining position of moving object
EP2237988B1 (en) Object detection and recognition system
US9082020B2 (en) Apparatus and method for calculating and displaying the height of an object detected in an image on a display
JP4943034B2 (en) Stereo image processing device
DE102017120112A1 (en) DEPTH CARE VALUATION WITH STEREO IMAGES
EP2757527B1 (en) System and method for distorted camera image correction
CN111539484B (en) Method and device for training neural network
US20160314357A1 (en) Method and Device for Monitoring an External Dimension of a Vehicle
US20140112542A1 (en) System and method for recognizing parking space line markings for vehicle
CN103786644B (en) Apparatus and method for following the trail of peripheral vehicle location
CN109703465B (en) Control method and device for vehicle-mounted image sensor
DE102013226476B4 (en) IMAGE PROCESSING METHOD AND SYSTEM OF AN ALL-ROUND SURVEILLANCE SYSTEM
CN113205447A (en) Road picture marking method and device for lane line identification
CN113269163B (en) Stereo parking space detection method and device based on fisheye image
KR101694837B1 (en) Apparatus and Method of Detecting Vehicle Information in Image taken on Moving Vehicle
EP3690724A1 (en) Estimating passenger statuses in 2 dimensional images captured using a fisheye lens
EP3029602A1 (en) Method and apparatus for detecting a free driving space
WO2019072911A1 (en) Method for determining a region of interest in an image captured by a camera of a motor vehicle, control system, camera system as well as motor vehicle
CN110827337B (en) Method and device for determining posture of vehicle-mounted camera and electronic equipment
CN109923586B (en) Parking frame recognition device
Du et al. Validation of vehicle detection and distance measurement method using virtual vehicle approach

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18786273

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18786273

Country of ref document: EP

Kind code of ref document: A1