WO2024061697A1 - Displaying image data in a vehicle depending on sensor data - Google Patents

Displaying image data in a vehicle depending on sensor data

Info

Publication number
WO2024061697A1
Authority
WO
WIPO (PCT)
Prior art keywords
vehicle
projection surface
image
depending
data
Prior art date
Application number
PCT/EP2023/075076
Other languages
English (en)
French (fr)
Inventor
Huanqing Guo
Original Assignee
Connaught Electronics Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Connaught Electronics Ltd. filed Critical Connaught Electronics Ltd.
Publication of WO2024061697A1 publication Critical patent/WO2024061697A1/en


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00: Geometric image transformations in the plane of the image
    • G06T 3/40: Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T 3/4038: Image mosaicing, e.g. composing plane images from plane sub-images
    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B60: VEHICLES IN GENERAL
    • B60R: VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R 1/00: Optical viewing arrangements; Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • B60R 1/20: Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • B60R 1/22: Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles for viewing an area outside the vehicle, e.g. the exterior of the vehicle
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00: Geometric image transformations in the plane of the image
    • G06T 3/08: Projecting images onto non-planar surfaces, e.g. geodetic screens
    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B60: VEHICLES IN GENERAL
    • B60R: VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R 2300/00: Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle
    • B60R 2300/30: Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the type of image processing

Definitions

  • the present invention is directed to a method for displaying image data in a vehicle, wherein at least one image depicting an environment of the vehicle is generated by a camera system of the vehicle and a predefined projection surface is provided.
  • the invention is further directed to a corresponding electronic vehicle guidance system and to a computer program product.
  • Camera systems of vehicles, which may comprise one or more cameras arranged at different positions at the vehicle, for example vehicle surround view systems, may be used for driver assistance functions or other functions for autonomous or semi-autonomous driving.
  • a camera image or more than one camera image from different cameras, which are stitched together into a combined view, may be projected onto a predefined projection surface that is a two-dimensional manifold in three-dimensional space, such as a bowl shape or the like.
  • the projected image or the projected images can be transformed according to respective viewing parameters of a virtual observer, also denoted as virtual camera, such that the images appear as if they had been observed by the virtual observer or captured by the virtual camera, respectively.
  • the position and/or orientation of the virtual observer can, for example, be set or modified by a user or automatically by the vehicle.
  • Document DE 10 2015 105 529 A1 relates to a method for transforming an image, which represents an area surrounding a motor vehicle from the perspective of a virtual camera.
  • the image is represented by the transformation from a plurality of real images, which are generated by means of a plurality of real cameras of the motor vehicle.
  • Document US 2021/0125401 A1 relates to a method for representing an environmental region of a motor vehicle in an image, wherein real images of the environmental region are captured by real cameras of the motor vehicle and the image is generated from these real images.
  • the image is represented from a perspective of a virtual camera in the environmental region, and the image is generated as a bowl shape.
  • the invention is based on the idea to determine object data for one or more objects in the environment of the vehicle based on sensor data of an environmental sensor system, wherein the object data depends on a distance of the object from the vehicle.
  • the projection surface is adjusted depending on the object data.
  • a method for displaying image data in a vehicle is provided.
  • a predefined projection surface is provided, in particular in a computer-readable manner, for example to at least one computing unit of the vehicle.
  • Sensor data representing at least one object in the environment is generated by an environment sensor system of the vehicle.
  • for each of the at least one object, respective object data, which depends on a distance of the respective object from the vehicle, is determined depending on the sensor data, in particular by the at least one computing unit.
  • the projection surface is adjusted by deforming the projection surface depending on the object data of the at least one object, in particular by the at least one computing unit.
  • the at least one image is projected onto the adjusted projection surface, in particular by the at least one computing unit.
  • Image data depending on the projected at least one image is displayed on a display device of the vehicle.
  • the at least one image being projected onto the adjusted projection surface may directly be displayed on the display device or may be further processed by the at least one computing unit and then displayed on the display device.
  • a camera can be understood as a monocular camera here and in the following.
  • the camera system comprises one or more cameras mounted to the vehicle.
  • each of the one or more cameras generates one of the at least one image, in particular exactly one.
  • the number of the at least one image may be identical to the number of cameras of the camera system.
  • each of the cameras may generate a video stream comprising a plurality of respective images corresponding to subsequent frames.
  • the different video streams may be synchronized amongst each other or a temporal relation between the individual frames of different video streams may be known.
  • a frame rate is identical for all of the cameras of the camera system. Consequently, for each frame period, exactly one image of each of the cameras is generated.
  • the images of the at least one image correspond to the same frame in this case. In other words, all images of the at least one image represent or depict different parts of the environment of the vehicle at the same time or approximately the same time.
  • the predefined projection surface and/or the adjusted projection surface may, for example, be provided in terms of a respective mathematical description, a look-up table or in another computer-readable manner.
  • an environmental sensor system can be understood as a sensor system, which is able to generate sensor data or sensor signals, which depict, represent or image an environment of the environmental sensor system.
  • the ability to capture or detect electromagnetic or other signals from the environment cannot be considered a sufficient condition for qualifying a sensor system as an environmental sensor system.
  • cameras, lidar systems, radar systems or ultrasonic sensor systems may be considered as environmental sensor systems.
  • the environmental sensor system may contain the camera system or partially contain the camera system or, in other words, contain one or more or all of the cameras of the camera system. In other implementations, the environmental sensor system does not comprise any of the cameras of the camera system.
  • the environmental sensor system may comprise a radar system, a lidar system and/or an ultrasonic sensor system.
  • the environmental sensor system may in some implementations also comprise one or more further cameras, which are not contained by the camera system.
  • the one or more further cameras may, in particular, contain one or more stereoscopic cameras.
  • the sensor data indicate the distance of the at least one object directly or indirectly.
  • the sensor data may contain information regarding a time-of-flight of the respective radio signals, light signals or ultrasonic waves. This time-of-flight may be translated into a distance.
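  • Purely as an illustration of this relation (the function and constant names below are assumptions for the example and are not taken from the description), a round-trip time-of-flight can be converted into a distance as follows:

```python
def tof_to_distance(round_trip_time_s: float, propagation_speed_m_s: float) -> float:
    """Distance = speed * time / 2, since the signal travels to the object and back."""
    return propagation_speed_m_s * round_trip_time_s / 2.0

SPEED_OF_LIGHT = 299_792_458.0   # m/s, relevant for radar and lidar signals
SPEED_OF_SOUND = 343.0           # m/s in air at about 20 degrees C, for ultrasonic waves

# Example: an ultrasonic echo received after 11.6 ms corresponds to roughly 2 m.
print(tof_to_distance(11.6e-3, SPEED_OF_SOUND))  # ~1.99 m
```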
  • other approaches may also be employed for these kinds of systems, for example approaches based on evaluating phase shifts of emitted and detected portions of the waves.
  • the distance information may be encoded in the respective image data.
  • the distance of objects depicted in the respective images may be estimated by computer vision algorithms, for example, depth estimation algorithms.
  • Such algorithms, which are, for example, trained by means of machine learning and may be based on artificial neural networks, are well known.
  • the object data may, in some cases, directly contain the respective distance of the object. However, in other implementations, the object data may contain a distance range, wherein a distance of the object is estimated to lie within the distance range. This also includes implementations, wherein the object data comprises merely the information that an object is present within a predefined detection range of the environmental sensor system or a part of the environmental sensor system.
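  • One possible way to hold such object data in software is a small container that stores whichever distance information is available; the structure below is a hypothetical sketch and not part of the patent:

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class ObjectData:
    """Per-object data derived from the sensor data. Depending on the sensor,
    the distance may be known exactly, only as a range, or merely as the fact
    that something is present within the detection range."""
    distance_m: Optional[float] = None                       # exact distance, if available
    distance_range_m: Optional[Tuple[float, float]] = None   # (min, max) estimate
    detected_in_range: bool = True                            # presence within the detection range

    def effective_distance(self, fallback_m: float) -> float:
        """Single distance value used when adjusting the projection surface."""
        if self.distance_m is not None:
            return self.distance_m
        if self.distance_range_m is not None:
            return sum(self.distance_range_m) / 2.0  # midpoint of the estimated range
        return fallback_m  # only presence is known; fall back to a conservative default
```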
  • the environmental sensor system may in some cases include more than one individual sensor, for example more than one lidar sensor, more than one ultrasonic transceiver or radar transceiver, et cetera.
  • the respective distance from the vehicle may be defined differently for different objects of the at least one object.
  • the distance may be defined with respect to the detecting sensor or with respect to a common reference point et cetera.
  • An object of the at least one object may also correspond to a cluster of two or more smaller objects, which are, for example, not resolvable individually by the environmental sensor system, be it due to the capabilities of the environmental sensor system or the arrangement of the individual objects.
  • clusters of individual objects can be treated equally to single objects.
  • processing steps may, for example, comprise filtering steps or processing steps for stitching the images of different cameras together, transforming the images according to the viewing position and/or orientation or other viewing parameters of a virtual observer et cetera.
  • the projection surface can, for example, be considered as a two-dimensional manifold in three-dimensional space.
  • the projection surface is, however, not necessarily describable by means of a single function, for example a function assigning a height above the ground plane to each point of the ground plane.
  • the projection surface may, in some implementations, also be defined in a piece-wise manner with different functions for different parts of the three-dimensional space.
  • the projection surface is, in particular, defined in a vehicle coordinate system, wherein the position and orientation of the vehicle is known, in particular fixed. For example, a specified reference point of the vehicle may be located in the origin of the coordinate system.
  • distortions in the displayed image data may be reduced, because the apparent distance of the objects depicted in the displayed image data, which is given by the distance of the adjusted projection surface from the vehicle, better matches the actual distance of the objects from the vehicle in the real world.
  • the adjustment of the projection surface can be carried out such that the deviation of the distance of the adjusted projection surface from the vehicle and the actual distance in the real world between the object and the vehicle is reduced compared to the original predefined projection surface.
  • the projected at least one image is transformed according to predefined viewing parameters of a virtual observer, in particular by the at least one computing unit.
  • the image data comprises or depends on the transformed projected at least one image.
  • the viewing parameters may, for example, comprise a position of the virtual observer and/or an orientation of the virtual observer with respect to the projection surface or in the coordinate system used for defining the projection surface and the adjusted projection surface.
  • the viewing parameters may also include or define a field of view of the virtual observer, which is, in particular, also denoted as a virtual camera.
  • the viewing parameters may also include virtual mapping parameters describing the mapping function of the virtual camera.
  • the virtual observer can be located at arbitrary positions in the environment of the vehicle. Depending on the position and/or further viewing parameters of the virtual observer, the size of the portion of the adjusted projection surface being displayed may vary. In particular, the image data displayed on the display device appear as if they would have been captured by the virtual camera or viewed by the virtual observer, respectively.
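  • For illustration, the following sketch maps points of the (adjusted) projection surface into a virtual pinhole camera described by such viewing parameters; the camera model, the names and the values are assumptions for the example, not requirements of the patent:

```python
import numpy as np

def project_to_virtual_camera(points_xyz, cam_position, cam_rotation, focal_px, principal_point):
    """Project 3D surface points (vehicle coordinates) into a virtual pinhole camera.

    cam_position    : (3,) position of the virtual observer
    cam_rotation    : (3, 3) rotation from vehicle coordinates to camera coordinates
    focal_px        : focal length in pixels
    principal_point : (cx, cy) image centre in pixels
    Returns (N, 2) pixel coordinates and a mask of points in front of the camera.
    """
    pts_cam = (np.asarray(points_xyz, dtype=float) - cam_position) @ cam_rotation.T
    in_front = pts_cam[:, 2] > 1e-6                 # the camera looks along its +Z axis
    z = np.where(in_front, pts_cam[:, 2], np.nan)
    u = focal_px * pts_cam[:, 0] / z + principal_point[0]
    v = focal_px * pts_cam[:, 1] / z + principal_point[1]
    return np.stack([u, v], axis=1), in_front

# Example: a top-down virtual observer 10 m above the vehicle origin.
R_top_down = np.array([[1.0, 0.0, 0.0],
                       [0.0, -1.0, 0.0],
                       [0.0, 0.0, -1.0]])           # looking straight down
surface_points = np.array([[1.0, 2.0, 0.0], [-3.0, 0.5, 0.4]])
pixels, visible = project_to_virtual_camera(surface_points, np.array([0.0, 0.0, 10.0]),
                                            R_top_down, 800.0, (640.0, 360.0))
```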
  • an ultrasonic sensor system and/or a lidar system and/or a radar system is used as the environmental sensor system.
  • the environmental sensor system comprises an ultrasonic sensor system and/or a lidar system and/or a radar system.
  • the environmental sensor system comprises a camera of the camera system or a further camera of the vehicle.
  • the sensor data comprises at least one further image depicting the environment.
  • the at least one further image is generated, in particular, by the camera or further camera comprised by the environmental sensor system.
  • the further camera may, for example, be a stereoscopic camera.
  • the distance or distance range of the at least one object can be extracted from the corresponding stereoscopic further image.
  • the camera or further camera is a monocular camera.
  • the object data, in particular information concerning the distance of the at least one object from the vehicle, may be obtained by using computer vision algorithms, image processing algorithms or image analysis algorithms that estimate the corresponding distances.
  • the object data comprises the distance of the respective object from the vehicle, wherein the distance is determined by using a depth estimation algorithm depending on the at least one further image, in particular by the at least one computing unit.
  • determining the distance may also involve using one or more additional algorithms, such as an object detection algorithm, a corner detection algorithm et cetera.
  • the depth estimation algorithm may be applied to the at least one further image or the at least one further image is preprocessed, and the depth estimation algorithm is applied to the at least one preprocessed further image.
  • a characteristic point in the at least one further image is determined using a corner detection algorithm and/or an object detection algorithm.
  • the distance is determined by computing the depth of the characteristic point by using the depth estimation algorithm.
  • the characteristic point corresponds to a corner in the at least one further image.
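  • Where only a monocular camera is available, a very simple geometric estimate under a flat-ground assumption can serve as an illustrative stand-in for a learned depth estimation algorithm applied to such a characteristic point; the parameters below are assumptions for the example:

```python
def ground_distance_from_pixel(v_px: float, cam_height_m: float, focal_px: float, horizon_v_px: float) -> float:
    """Estimate the distance of a ground contact point seen at image row v_px for a
    monocular pinhole camera mounted at height cam_height_m with its optical axis
    parallel to the ground. A ground point at distance d projects to
    v = horizon_v + f * h / d, hence d = f * h / (v - horizon_v).
    Only valid for pixels below the horizon row."""
    dv = v_px - horizon_v_px
    if dv <= 0:
        raise ValueError("pixel lies on or above the horizon; no ground intersection")
    return focal_px * cam_height_m / dv

# Example: characteristic point at image row 620 px, camera 1.2 m above ground,
# focal length 800 px, horizon at row 360 px  ->  roughly 3.7 m distance.
distance_m = ground_distance_from_pixel(620.0, cam_height_m=1.2, focal_px=800.0, horizon_v_px=360.0)
```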
  • an object detection algorithm may be understood as a computer algorithm, which is able to identify and localize one or more objects within a provided input dataset, for example input image, by specifying respective bounding boxes or regions of interest, ROI, and, in particular, assigning a respective object class to each of the bounding boxes, wherein the object classes may be selected from a predefined set of object classes.
  • assigning an object class to a bounding box may be understood such that a corresponding confidence value or probability for the object identified within the bounding box being of the corresponding object class is provided.
  • the algorithm may provide such a confidence value or probability for each of the object classes for a given bounding box. Assigning the object class may for example include selecting or providing the object class with the largest confidence value or probability.
  • the algorithm can specify only the bounding boxes without assigning a corresponding object class.
  • the characteristic point may, for example, correspond to a corner of the respective bounding box or a center of the bounding box et cetera.
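  • The two steps described above, deriving a characteristic point from a bounding box and assigning the object class with the largest confidence value, could look as follows in a simplified sketch (the detector output format is a hypothetical assumption):

```python
import numpy as np

def characteristic_point_from_box(box_xyxy, mode="bottom_center"):
    """Derive a characteristic point from a bounding box (x_min, y_min, x_max, y_max).
    Which point is used (a corner, the center, the bottom center, ...) is a design choice."""
    x_min, y_min, x_max, y_max = box_xyxy
    if mode == "center":
        return ((x_min + x_max) / 2.0, (y_min + y_max) / 2.0)
    if mode == "bottom_center":            # often close to the ground contact point of the object
        return ((x_min + x_max) / 2.0, y_max)
    return (x_min, y_min)                  # a corner of the bounding box

def assign_class(confidences_per_class):
    """Assign the object class with the largest confidence value or probability."""
    class_id = int(np.argmax(confidences_per_class))
    return class_id, float(confidences_per_class[class_id])

# Example usage with hypothetical detector output.
point = characteristic_point_from_box((120, 80, 240, 300))
cls, conf = assign_class(np.array([0.05, 0.85, 0.10]))   # -> class 1 with confidence 0.85
```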
  • the projection surface is given by points fulfilling an equation relating Z to X and Y, wherein X, Y and Z denote Cartesian coordinates of the respective point on the projection surface, Z corresponds to a height above a predefined ground plane, on which the vehicle is located, n is an even integer equal to or greater than four, and a_i, b_i and c are predefined real coefficients.
  • Adjusting the projection surface comprises adjusting at least one of the coefficients a_i and/or at least one of the coefficients b_i.
  • the projection surface as well as the adjusted projection surface are given by respective polynomials of n-th degree in the two variables X, Y.
  • n may, for example, be equal to 4.
  • the shape of the projection surface and the adjusted projection surface can be denoted as a bowl, which has a relatively flat portion in the area where the vehicle is located and rises relatively steeply farther away from the vehicle.
  • the vehicle is centered at the origin of the coordinate system and the X-Y-plane corresponds to the ground plane. This represents a suitable approximation to the distances of the depicted objects.
  • By adjusting the projection surface depending on the object data, the approximation may be improved and, consequently, the distortions may be reduced.
  • Adjusting the projection surface may then comprise, for example, adjusting one or more of the parameters A, B, C and W, wherein the offsets X_s and Y_s remain fixed.
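  • A minimal sketch of such a bowl-shaped polynomial surface is given below; the concrete form Z = a_1*X + ... + a_n*X^n + b_1*Y + ... + b_n*Y^n + c is an illustrative reading of the description, not the literal patent equation, and the coefficient values are arbitrary examples:

```python
import numpy as np

def bowl_height(x, y, a, b, c):
    """Height Z of a bowl-shaped projection surface at ground coordinates (x, y),
    assuming Z = sum_i a_i * X**i + sum_i b_i * Y**i + c with an even degree n >= 4,
    so that the surface is flat near the vehicle and rises steeply farther away."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    z = np.full_like(x, float(c))
    for i, (a_i, b_i) in enumerate(zip(a, b), start=1):
        z = z + a_i * x ** i + b_i * y ** i
    return z

# Example: n = 4 with only the quartic terms non-zero gives a symmetric bowl that is
# almost flat within roughly 2 m of the origin and rises quickly beyond that.
xs, ys = np.meshgrid(np.linspace(-5.0, 5.0, 11), np.linspace(-5.0, 5.0, 11))
zs = bowl_height(xs, ys, a=[0.0, 0.0, 0.0, 0.02], b=[0.0, 0.0, 0.0, 0.02], c=0.0)
```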
  • the projection surface comprises a base portion, which is given by a part of a predefined ground plane, on which the vehicle is located, wherein the vehicle is located within the base portion.
  • the projection surface comprises a raised portion that adjoins the base portion at an outer boundary of the base portion.
  • Adjusting the projection surface comprises adjusting the outer boundary, in particular a shape of the outer boundary, of the base portion depending on the object data of the at least one object.
  • the base portion is a convex base portion or, in other words, the base portion or the outer boundary is a convex geometric figure.
  • points on the raised portion have a non-zero height above the ground plane.
  • the adjustment of the projection surface may reflect the real circumstances in the environment of the vehicle in terms of the distances of the at least one object from the vehicle more precisely.
  • the base portion of the projection surface is given by a convex figure, wherein the outer boundary of the base portion is given by a plurality of anchor points and respective curves connecting pairs of neighboring anchor points of the plurality of anchor points.
  • Adjusting the projection surface comprises adjusting a number of the plurality of anchor points depending on the object data and/or adjusting a position of at least one of the plurality of anchor points depending on the object data of the at least one object.
  • the adjustment of the outer boundary is particularly straightforward.
  • the expression “curve” also includes the case of zero curvature or, in other words, the curves can also be straight lines, which means that the outer boundary has the shape of a polygon.
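  • A minimal sketch of the convexity adjustment is given below; it uses a standard monotone-chain convex hull as one possible way of making the outer boundary convex, which is an illustrative choice rather than the specific adjustment strategy of the patent:

```python
def convex_boundary(anchor_points):
    """Return the anchor points that form a convex outer boundary (monotone-chain
    convex hull), i.e. interior anchor points are dropped so that the base portion
    becomes a convex figure. Input: iterable of (x, y) tuples in the ground plane.
    Output: hull vertices in counter-clockwise order."""
    pts = sorted(set(map(tuple, anchor_points)))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):  # z-component of the cross product (a - o) x (b - o)
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:                              # build the lower hull
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):                    # build the upper hull
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

# Example: anchor points derived from detected object positions; the interior point
# (0.5, 0.5) is dropped so that the outer boundary of the base portion stays convex.
boundary = convex_boundary([(2.0, 0.0), (0.0, 2.0), (-2.0, 0.0), (0.0, -2.0), (0.5, 0.5)])
```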
  • the projection surface is given by points fulfilling the equation mentioned above, wherein X, Y and Z denote Cartesian coordinates of the respective point on the projection surface, Z corresponds to a height above the predefined ground plane, on which the vehicle is located, n is an even integer equal to or greater than four, and a_i, b_i and c are predefined real coefficients.
  • the adjusted projection surface comprises a base portion, which is given by a part of the ground plane, wherein the vehicle is located within the base portion, and a raised portion that adjoins the base portion at an outer boundary of the base portion, wherein a shape of the outer boundary of the base portion depends on the object data of the at least one object.
  • the base portion can be given by the convex figure, whose outer boundary is given by the anchor points and the respective curves connecting them, as described above.
  • the projection surface is adjusted by deforming the projection surface depending on the object data of the at least one object in a number of steps N, including N-1 intermediate steps and one final step, wherein N is equal to or greater than two, within a single frame period of the camera system.
  • a respective intermediate projection surface is generated, the at least one image is projected onto the intermediate projection surface, and intermediate image data depending on the at least one image projected on the intermediate projection surface is displayed on the display device.
  • the adjustment of the projection surface is finalized and then the image data depending on the at least one image projected on the adjusted projection surface is displayed as described.
  • the number of steps N is equal to or greater than three and may be equal to or smaller than ten.
  • a refresh rate of the display device may be equal to or greater than N times a frame rate of the camera system.
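  • One straightforward realization of the N-step deformation is to interpolate between the current and the adjusted surface; the linear blend in the following sketch is an illustrative assumption, as the description does not prescribe a particular interpolation:

```python
import numpy as np

def intermediate_surfaces(z_current, z_target, n_steps):
    """Yield N surfaces that gradually morph the current projection surface into the
    adjusted one, so that the displayed view changes smoothly instead of jumping.
    z_current, z_target: surface heights sampled on the same (X, Y) grid; n_steps: N >= 2."""
    z_current = np.asarray(z_current, dtype=float)
    z_target = np.asarray(z_target, dtype=float)
    for step in range(1, n_steps + 1):
        t = step / n_steps                      # linear blend; other easings are possible
        yield (1.0 - t) * z_current + t * z_target

# Example: N = 8 steps, which a 240 Hz display can show within a single 30 fps camera
# frame period (240 / 30 = 8 display refreshes per camera frame).
for z in intermediate_surfaces(np.zeros((4, 4)), np.ones((4, 4)), n_steps=8):
    pass  # project the camera images onto z and display the intermediate view
```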
  • the environment of the vehicle is divided into a plurality of detection zones and, depending on the sensor data, each of the detection zones is identified as an active detection zone, if one or more of the at least one object is located in the respective detection zone, and as a non-active detection zone otherwise, in particular by the at least one computing unit.
  • the projection surface is adjusted depending on the number of identified active detection zones.
  • the projection surface may be adjusted only if a sufficient change in the number of active detection zones is determined. In this way, strong changes in the appearance of the displayed image data may be avoided in situations where the distortion in the displayed images is expected to be rather low.
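  • A simplified sketch of this gating logic is shown below; the detection range and the minimum change threshold are illustrative values, not taken from the description:

```python
def count_active_zones(zone_distances, detection_range_m=2.0):
    """A detection zone is active if at least one object was detected within its
    detection range (for example roughly 2 m for ultrasonic sensors)."""
    return sum(
        1 for distances in zone_distances
        if any(d <= detection_range_m for d in distances)
    )

def should_adjust(previous_active, current_active, min_change=2):
    """Trigger a projection surface adjustment only when the number of active zones
    changed sufficiently, to avoid visible re-shaping when distortion is expected to be low."""
    return abs(current_active - previous_active) >= min_change

# Example with hypothetical per-zone distance measurements in metres:
zones = [[1.2, 3.5], [], [0.8], [4.0], [], []]
active = count_active_zones(zones)                                  # -> 2 active zones
adjust = should_adjust(previous_active=0, current_active=active)    # -> True
```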
  • For use cases or use situations which may arise in the method and which are not explicitly described here, it may be provided that an error message and/or a prompt for user feedback is output and/or a default setting and/or a predetermined initial state is set.
  • an electronic vehicle guidance system for a vehicle comprises a camera system for the vehicle, which is configured to generate at least one image depicting an environment of the vehicle, in particular when the camera system is mounted to the vehicle.
  • the electronic vehicle guidance system comprises an environmental sensor system for the vehicle, which is configured to generate sensor data representing at least one object in the environment of the vehicle, in particular when the environmental sensor system is mounted to the vehicle.
  • the electronic vehicle guidance system comprises at least one computing unit for the vehicle, which is configured to determine, for each of the at least one object, respective object data, which depends on a distance of the respective object from the vehicle, depending on the sensor data.
  • the at least one computing unit is configured to adjust the projection surface by deforming the projection surface depending on the object data of the at least one object and to project the at least one image onto the adjusted projection surface.
  • the electronic vehicle guidance system comprises a display device for the vehicle, wherein the at least one computing unit is configured to control the display device to display image data depending on the projected at least one image.
  • An electronic vehicle guidance system may be understood as an electronic system, configured to guide a vehicle in a fully automated or a fully autonomous manner and, in particular, without a manual intervention or control by a driver or user of the vehicle being necessary.
  • the vehicle carries out all required functions, such as steering maneuvers, deceleration maneuvers and/or acceleration maneuvers as well as monitoring and recording the road traffic and corresponding reactions automatically.
  • the electronic vehicle guidance system may implement a fully automatic or fully autonomous driving mode according to level 5 of the SAE J3016 classification.
  • An electronic vehicle guidance system may also be implemented as an advanced driver assistance system, ADAS, assisting a driver for partially automatic or partially autonomous driving.
  • the electronic vehicle guidance system may implement a partly automatic or partly autonomous driving mode according to levels 1 to 4 of the SAE J3016 classification.
  • SAE J3016 refers to the respective standard dated June 2018.
  • Guiding the vehicle at least in part automatically may therefore comprise guiding the vehicle according to a fully automatic or fully autonomous driving mode according to level 5 of the SAE J3016 classification. Guiding the vehicle at least in part automatically may also comprise guiding the vehicle according to a partly automatic or partly autonomous driving mode according to levels 1 to 4 of the SAE J3016 classification.
  • a computing unit may in particular be understood as a data processing device, which comprises processing circuitry.
  • the computing unit can therefore in particular process data to perform computing operations. This may also include operations to perform indexed accesses to a data structure, for example a look-up table, LUT.
  • the computing unit may include one or more computers, one or more microcontrollers, and/or one or more integrated circuits, for example, one or more application-specific integrated circuits, ASIC, one or more field-programmable gate arrays, FPGA, and/or one or more systems on a chip, SoC.
  • the computing unit may also include one or more processors, for example one or more microprocessors, one or more central processing units, CPU, one or more graphics processing units, GPU, and/or one or more signal processors, in particular one or more digital signal processors, DSP.
  • the computing unit may also include a physical or a virtual cluster of computers or other of said units.
  • the computing unit includes one or more hardware and/or software interfaces and/or one or more memory units.
  • a memory unit may be implemented as a volatile data memory, for example a dynamic random access memory, DRAM, or a static random access memory, SRAM, or as a non- volatile data memory, for example a read-only memory, ROM, a programmable read-only memory, PROM, an erasable programmable read-only memory, EPROM, an electrically erasable programmable read-only memory, EEPROM, a flash memory or flash EEPROM, a ferroelectric random access memory, FRAM, a magnetoresistive random access memory, MRAM, or a phase-change random access memory, PCRAM.
  • the at least one computing unit is configured to adjust the projection surface by deforming the projection surface depending on the object data of the at least one object in a number of steps N including N-1 intermediate steps and one final step, wherein N is equal to or greater than two, within a single frame period of the camera system.
  • a refresh rate of the display device is equal to or greater than N times a frame rate of the camera system.
  • the at least one computing unit is configured to generate, for each of the intermediate steps, a respective intermediate projection surface, project the at least one image onto the intermediate projection surface and control the display device to display intermediate image data depending on the at least one image projected on the intermediate projection surface.
  • the electronic vehicle guidance system according to the invention is designed or programmed to carry out the method according to the invention.
  • the electronic vehicle guidance system according to the invention carries out the method according to the invention.
  • a computer program product comprising instructions is provided.
  • When the instructions are executed by an electronic vehicle guidance system according to the invention, in particular by the at least one computing unit of the electronic vehicle guidance system, the instructions cause the electronic vehicle guidance system to carry out a method according to the invention.
  • a computer-readable storage medium is provided, which stores a computer program according to the invention.
  • the computer program and the computer-readable storage medium may be considered as respective computer program products comprising the instructions.
  • Fig. 1 shows schematically a vehicle with an exemplary implementation of an electronic vehicle guidance system according to the invention
  • Fig. 2 shows a flow diagram of an exemplary implementation of a method according to the invention
  • Fig. 3 shows a flow diagram of a part of a further exemplary implementation of a method according to the invention
  • Fig. 4 shows a flow diagram of a further exemplary implementation of a method according to the invention
  • Fig. 5 shows a base portion of a projection surface according to a further exemplary implementation of a method according to the invention
  • Fig. 6 shows a vehicle and detection zones of an environmental sensor system according to a further exemplary implementation of a method according to the invention.
  • Fig. 7 illustrates a use case of a further exemplary implementation of a method according to the invention.
  • Fig. 1 shows a vehicle 1, which comprises an exemplary implementation of an electronic vehicle guidance system 2 for the vehicle 1 according to the invention.
  • the electronic vehicle guidance system 2 comprises a camera system 4 with at least one camera 4 arranged at the vehicle 1.
  • more than one camera 4, in particular four cameras 4, are arranged at different positions of the vehicle 1, thereby forming a surround view camera system.
  • the electronic vehicle guidance system 2 further comprises an environmental sensor system 5 comprising at least one environmental sensor unit 5, for example, a further camera, a lidar system, an ultrasonic transceiver or a radar unit. Also combinations of different types of environmental sensor units 5 are possible.
  • the environmental sensor system 5 may also be identical to the camera system 4 or comprised by the camera system 4 or vice versa.
  • the camera system 4 is configured to generate at least one image depicting an environment of the vehicle 1 and the environmental sensor system 5 is configured to generate sensor data representing at least one object in the environment of the vehicle 1.
  • the electronic vehicle guidance system 2 also comprises a computing unit 3 which may, for example, represent one or more computing units of the vehicle 1.
  • the computing unit 3 is configured to determine, for each of the at least one object, respective object data, which depends on a distance of the respective object from the vehicle 1, depending on the sensor data.
  • the computing unit 3 is configured to adjust the projection surface by deforming the projection surface depending on the object data of the at least one object and to project the at least one image onto the adjusted projection surface.
  • the electronic vehicle guidance system 2 further comprises a display device 6 for the vehicle 1, and the computing unit 3 is configured to control the display device 6 to display image data depending on the projected at least one image.
  • the electronic vehicle guidance system 2 is able to carry out a method for displaying image data in the vehicle 1 according to the invention.
  • Fig. 2 shows schematically a flow diagram of a further exemplary implementation of a method according to the invention.
  • the vehicle 1 may start an operation, where the image data shall be displayed on the display device 6, for example an automatic or manual parking operation.
  • An example of a parking operation is shown schematically in Fig. 7, wherein the vehicle 1 drives backwards from position P1 to position P2, then forward to position P3 and then backwards to the final position P4 in a parking space 11 between two further vehicles 12.
  • In step 210, the sensor data are generated by the environmental sensor system 5 to detect the at least one object, including for example the further vehicles 12.
  • In step 220, the computing unit 3 may determine the respective object data. For example, only objects with a distance below a certain threshold may be considered.
  • Fig. 6 shows a plurality of sectors 10, also denoted as detection zones, for an exemplary implementation of the environmental sensor system 5 as ultrasonic transceivers arranged around the vehicle 1.
  • Each sector 10 may, for example, extend approximately 2 m from the vehicle 1.
  • the object data may then, for example, contain the information in which sector 10 an object is located and what its distance from the vehicle 1 is.
  • In step 230, it may, for example, be determined how many sectors 10 are active. If the number of active sectors 10 is less than a predefined minimum number, the method may continue again with step 210. Otherwise, an adjusted projection surface may be computed in step 240 depending on the object data, in particular the positions and/or distances of the objects. If step 230 is not present, step 220 is also followed by step 240.
  • Fig. 3 shows a flow diagram of steps that may be involved in step 240 in a further exemplary implementation.
  • the adjusted projection surface may be constructed to comprise a base portion 7, which is a part of the ground plane, and a raised portion (not shown) adjoining an outer boundary 8 of the base portion 7.
  • a set of anchor points 9 is determined based on the object data and the outer boundary 8 is formed by the anchor points 9 and respective curves connecting them. If necessary, the positions of the anchor points 9 are adjusted in step 320 in order to achieve a convex shape for the outer boundary 8.
  • Subsequently, the outer boundary 8 may be smoothed and the raised portion may be adapted to the outer boundary 8.
  • the images may be projected onto the adjusted or partially adjusted projection surfaces and displayed on the display device 6 in step 250 of Fig. 2.
  • the adjustment of the projection surface may be divided into a plurality of steps to avoid discontinuities in the displayed image data. For example, 6 to 15 steps may be made to transform the projection surface into the adjusted projection surface. Then, steps 210 to 250 are repeated until the parking operation is completed in step 260.
  • Fig. 4 shows a flow diagram for a further exemplary implementation of a method according to the invention, wherein the adjustment of the projection surface is divided into a plurality of steps as mentioned above.
  • a frame rate of the camera system 4 may be 30 fps and a frame capture and pre-processing may take about 33 ms.
  • the display device 6 may have a high refresh rate, for example in the range of 120 Hz to 360 Hz. Assuming, for example, a frame rate of 30 fps and a refresh rate of 240 Hz, the required 6 to 15 steps may be carried out during only 3 to 8 frames. Thus, artifacts in the displayed image data may be reduced.
  • In step 400, the frame buffer may be read and in step 410 an adjustment step is carried out as described.
  • In step 420, the respective intermediate image data are displayed.
  • In step 430, it is checked whether the frame buffer contains updated data. If this is the case, the method proceeds with step 400, otherwise with step 410.
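  • Read as pseudocode, the loop of Fig. 4 might be organized as in the following sketch; the frame buffer interface and the callback names are assumptions made for this illustration:

```python
def display_loop(frame_buffer, compute_adjustment_step, render_and_display, operation_active):
    """Simplified rendering loop following the structure of Fig. 4: read a camera frame,
    apply one projection surface adjustment step, display the result, and fetch a new
    frame only when the frame buffer has been updated."""
    frame = frame_buffer.read()                       # step 400: read the frame buffer
    while operation_active():                         # e.g. until the parking operation ends
        surface = compute_adjustment_step(frame)      # step 410: one adjustment step
        render_and_display(frame, surface)            # step 420: display intermediate image data
        if frame_buffer.has_update():                 # step 430: updated data in the frame buffer?
            frame = frame_buffer.read()               # yes: continue with step 400 again
        # otherwise: continue with the next adjustment step (step 410) on the same frame
```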
  • In this way, distortions in image data displayed in a vehicle may be reduced.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Mechanical Engineering (AREA)
  • Traffic Control Systems (AREA)
PCT/EP2023/075076 2022-09-20 2023-09-13 Displaying image data in a vehicle depending on sensor data WO2024061697A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
DE102022124085.1A DE102022124085A1 (de) 2022-09-20 2022-09-20 Darstellung von Bilddaten in einem Fahrzeug abhängig von Sensordaten
DE102022124085.1 2022-09-20

Publications (1)

Publication Number Publication Date
WO2024061697A1 true WO2024061697A1 (en) 2024-03-28

Family

ID=88068549

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2023/075076 WO2024061697A1 (en) 2022-09-20 2023-09-13 Displaying image data in a vehicle depending on sensor data

Country Status (2)

Country Link
DE (1) DE102022124085A1 (de)
WO (1) WO2024061697A1 (de)


Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102014208664A1 (de) 2014-05-08 2015-11-12 Method and device for the distortion-free display of the surroundings of a vehicle
DE102014226448B4 (de) 2014-12-18 2021-11-04 Vehicle for computing a view of the vehicle's surroundings
DE102015206477A1 (de) 2015-04-10 2016-10-13 Method for displaying the surroundings of a vehicle
US10262466B2 (en) 2015-10-14 2019-04-16 Qualcomm Incorporated Systems and methods for adjusting a combined image visualization based on depth information

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102015105529A1 (de) 2015-04-10 2016-10-13 Method for transforming an image of a virtual camera, computer program product, display system and motor vehicle
US20210125401A1 (en) 2018-01-30 2021-04-29 Connaught Electronics Ltd. Method for representing an environmental region of a motor vehicle with virtual, elongated distance markers in an image, computer program product as well as display system
US20220203895A1 (en) * 2020-12-25 2022-06-30 Denso Corporation Image forming device and image forming method

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
360° WRAP-AROUND VIDEO IMAGING TECHNOLOGY READY FOR INTEGRATION WITH FUJITSU GRAPHICS SOCS, 12 September 2022 (2022-09-12), Retrieved from the Internet <URL:https://www.fujitsu.com/us/imagesgig5/360_OmniView_AppNote.pdf>
KUMAR VARUN RAVI ET AL: "Surround-View Fisheye Camera Perception for Automated Driving: Overview, Survey & Challenges", IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, 26 May 2022 (2022-05-26), pages 3638 - 3659, XP093109519, Retrieved from the Internet <URL:https://arxiv.org/pdf/2205.13281v1.pdf> [retrieved on 20231206], DOI: 10.1109/TITS.2023.3235057 *
YEH YEN-TING ET AL: "Driver Assistance System Providing an Intuitive Perspective View of Vehicle Surrounding", 11 April 2015, 20150411, PAGE(S) 403 - 417, XP047647352 *

Also Published As

Publication number Publication date
DE102022124085A1 (de) 2024-03-21

Similar Documents

Publication Publication Date Title
US11417017B2 (en) Camera-only-localization in sparse 3D mapped environments
US20170203692A1 (en) Method and device for the distortion-free display of an area surrounding a vehicle
US20230110116A1 (en) Advanced driver assist system, method of calibrating the same, and method of detecting object in the same
CN106168988B (zh) Method and device for generating masking rules and for masking image information of a camera
US20220044032A1 (en) Dynamic adjustment of augmented reality image
CN114170826B (zh) Automatic driving control method and apparatus, electronic device and storage medium
US11673506B2 (en) Image system for a vehicle
US20160288711A1 (en) Distance and direction estimation of a target point from a vehicle using monocular video camera
KR102566583B1 (ko) Method and apparatus for stabilization of a surround view image
WO2021110497A1 (en) Estimating a three-dimensional position of an object
US20220319145A1 (en) Image processing device, image processing method, moving device, and storage medium
WO2024061697A1 (en) Displaying image data in a vehicle depending on sensor data
KR102071720B1 (ko) Method for matching targets between a vehicle radar target list and a vision image
US20230294729A1 (en) Method and system for assessment of sensor performance
JP2021051348A (ja) Object distance estimation device and object distance estimation method
CN110796084A (zh) Lane line recognition method, apparatus, device and computer-readable storage medium
US20240037791A1 (en) Apparatus and method for generating depth map by using volumetric feature
CN115063772B (zh) Method for detecting a following vehicle in a vehicle platoon, terminal device and storage medium
US20230191998A1 (en) Method for displaying a virtual view of the area surrounding a vehicle, computer program, control unit, and vehicle
WO2024111324A1 (ja) Display control device
CN113537161B (zh) Obstacle recognition method, system and device
US20230306748A1 (en) Sensor processing method, apparatus, computer program product, and automotive sensor system
EP3957548A1 (de) System and method for vision-based collision avoidance
CN117376547A (zh) Method and device for correcting vehicle-mounted camera parameters
KR20230110949A (ko) Vehicle, electronic device and control method thereof

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23771829

Country of ref document: EP

Kind code of ref document: A1