WO2017067495A1 - Method and apparatus for generating image of area under vehicle, and vehicle - Google Patents


Info

Publication number
WO2017067495A1
WO2017067495A1 (PCT/CN2016/102825; CN2016102825W)
Authority
WO
WIPO (PCT)
Prior art keywords
vehicle
current state
panoramic image
wheel angle
image
Prior art date
Application number
PCT/CN2016/102825
Other languages
French (fr)
Inventor
Wei Xiong
Original Assignee
Byd Company Limited
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Byd Company Limited filed Critical Byd Company Limited
Publication of WO2017067495A1 publication Critical patent/WO2017067495A1/en

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B60: VEHICLES IN GENERAL
    • B60R: VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R1/00: Optical viewing arrangements; Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/50: Context or environment of the image
    • G06V20/56: Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00: Television systems
    • H04N7/18: Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/181: Closed-circuit television [CCTV] systems for receiving images from a plurality of remote sources
    • B60R2300/00: Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle
    • B60R2300/10: Details of such viewing arrangements characterised by the type of camera system used
    • B60R2300/105: Details of such viewing arrangements using multiple cameras
    • B60R2300/30: Details of such viewing arrangements characterised by the type of image processing
    • B60R2300/303: Details of such viewing arrangements using joined images, e.g. multiple camera images
    • B60R2300/80: Details of such viewing arrangements characterised by the intended use of the viewing arrangement
    • B60R2300/802: Details of such viewing arrangements for monitoring and displaying vehicle exterior blind spot views

Definitions

  • the present disclosure relates to the field of vehicle technologies, and in particular, to a method and an apparatus for generating an image of an area under a vehicle, and a vehicle.
  • an objective of the present disclosure is to provide a method for generating an image of an area under a vehicle.
  • a range displayed through panoramic image stitching is extended, so that image information can also be displayed for an area under a vehicle body that is invisible to a camera, which improves safety during driving, enriches panoramic displaying functions, and improves user experience.
  • a second objective of the present disclosure is to provide an apparatus for generating an image of an area under a vehicle.
  • a third objective of the present disclosure is to provide a vehicle.
  • a method for generating an image of an area under a vehicle in embodiments according to a first aspect of the present disclosure includes: acquiring a speed and a steering wheel angle in a current state of the vehicle; acquiring a history panoramic image in a previous state of the vehicle; obtaining a position mapping relationship between the history panoramic image and a panoramic image in the current state according to the speed and the steering wheel angle; and generating the image of the area under the vehicle in the panoramic image in the current state of the vehicle according to the position mapping relationship and the history panoramic image.
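The four steps above can be sketched as a small, hypothetical Python loop. The panorama is modelled as a dictionary mapping grid positions to pixel values, and the geometric position mapping is abstracted into a caller-supplied function; all names are illustrative and not taken from the patent.

```python
from collections import deque


class UnderVehicleImager:
    """Sketch of the claimed four-step pipeline (hypothetical structure)."""

    def __init__(self):
        # Keep only the previous-state stitched panorama as "history".
        self.history = deque(maxlen=1)

    def step(self, panorama, under_mask, mapping):
        """One update.

        panorama:   {(x, y): pixel} visible around the vehicle body now.
        under_mask: set of (x, y) points under the vehicle, invisible to cameras.
        mapping:    function (x, y) -> corresponding (x, y) in the previous state.
        """
        filled = dict(panorama)
        if self.history:
            prev = self.history[0]
            for p in under_mask:
                q = mapping(p)            # position in the previous state
                if q in prev:             # pad from the history panorama
                    filled[p] = prev[q]
        self.history.append(filled)       # becomes history for the next frame
        return filled
```

With an identity mapping (vehicle not moving between frames), a point that was visible in the previous frame pads the under-vehicle area in the current frame.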
  • the speed and the steering wheel angle in the current state of the vehicle are acquired; the history panoramic image in the previous state of the vehicle is acquired; the position mapping relationship between the history panoramic image and the panoramic image in the current state is obtained according to the speed and the steering wheel angle; and the image of the area under the vehicle in the panoramic image in the current state of the vehicle is generated according to the position mapping relationship and the history panoramic image.
  • an apparatus for generating an image of an area under a vehicle in embodiments according to a second aspect of the present disclosure includes: a traveling information acquisition module, configured to acquire a speed and a steering wheel angle in a current state of the vehicle; a history information acquisition module, configured to acquire a history panoramic image in a previous state of the vehicle; a mapping relationship acquisition module, configured to obtain a position mapping relationship between the history panoramic image and a panoramic image in the current state according to the speed and the steering wheel angle; and a generation module, configured to generate the image of the area under the vehicle in the panoramic image in the current state of the vehicle according to the position mapping relationship and the history panoramic image.
  • the speed and the steering wheel angle in the current state of the vehicle are acquired by the traveling information acquisition module; the history panoramic image in the previous state of the vehicle is acquired by the history information acquisition module; the position mapping relationship between the history panoramic image and the panoramic image in the current state is obtained by the mapping relationship acquisition module according to the speed and the steering wheel angle; and the image of the area under the vehicle in the panoramic image in the current state of the vehicle is generated by the generation module according to the position mapping relationship and the history panoramic image.
  • a range displayed through panoramic image stitching is extended, so that image information can also be displayed for the area under the vehicle body that is invisible to a camera, which improves safety during driving, enriches panoramic displaying functions, and improves user experience.
  • Because the vehicle in this embodiment of the present disclosure is equipped with the apparatus for generating an image of an area under a vehicle, a range displayed through panoramic image stitching is extended, so that image information can also be displayed for the area under the vehicle body that is invisible to a camera, which improves safety during driving, enriches panoramic displaying functions, and improves user experience.
  • a vehicle in embodiments according to a third aspect of the present disclosure includes the apparatus for generating an image of an area under a vehicle in the embodiments according to the second aspect of the present disclosure.
  • an electronic device in embodiments according to a fourth aspect of the present disclosure includes: a shell; a processor; a memory; a circuit board; and a power supply circuit, in which the circuit board is located in a space formed by the shell, and the processor and the memory are arranged on the circuit board; the power supply circuit is configured to supply power for each circuit or component in the electronic device; the memory is configured to store executable program codes; the processor is configured to execute a program corresponding to the executable program codes by reading the executable program codes stored in the memory, so as to perform the method according to embodiments of the first aspect of the present disclosure.
  • a storage medium in embodiments according to a fifth aspect of the present disclosure has one or more modules stored therein, and the one or more modules, when executed, cause the method according to embodiments of the first aspect of the present disclosure to be performed.
  • an application program in embodiments according to a sixth aspect of the present disclosure is configured to perform the method according to embodiments of the first aspect of the present disclosure when executed.
  • FIG. 1 is a flowchart of a method for generating an image of an area under a vehicle according to an embodiment of the present disclosure
  • FIG. 2 shows schematic diagrams of a current state B and a previous state A of a vehicle when the vehicle is moving according to an embodiment of the present disclosure
  • FIG. 3 is a schematic diagram of panoramic image displaying in the related art
  • FIG. 4 is a schematic diagram of panoramic image displaying according to an embodiment of the present disclosure.
  • FIG. 5 is a detailed diagram of movement states of a vehicle according to an embodiment of the present disclosure.
  • FIG. 6 is a schematic diagram of panoramic image displaying for a specific scenario in related technologies
  • FIG. 7 is a schematic diagram of panoramic image displaying according to a specific embodiment of the present disclosure.
  • FIG. 8 is a schematic block diagram of an apparatus for generating an image of an area under a vehicle according to an embodiment of the present disclosure.
  • FIG. 1 is a flowchart of a method for generating an image of an area under a vehicle according to an embodiment of the present disclosure. As shown in FIG. 1, the method for generating an image of an area under a vehicle according to this embodiment of the present disclosure includes the following steps.
  • S1, a speed and a steering wheel angle in a current state of the vehicle are acquired. For example, a controller of a vehicle panoramic image system may acquire message information about the speed and the steering wheel angle of the vehicle from a CAN network in a vehicle body.
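This acquisition step might be sketched as decoding raw CAN frame payloads into physical values. The byte layouts, scale factors, and signal placements below are invented for illustration; a real vehicle's CAN database (DBC file) defines the actual message IDs and encodings.

```python
def decode_speed_kph(data: bytes) -> float:
    """Hypothetical layout: vehicle speed in 0.01 km/h units,
    big-endian unsigned, in payload bytes 0-1."""
    return int.from_bytes(data[0:2], "big") * 0.01


def decode_steering_angle_deg(data: bytes) -> float:
    """Hypothetical layout: steering wheel angle in 0.1 degree units,
    big-endian signed, in payload bytes 0-1 (negative = right turn, say)."""
    return int.from_bytes(data[0:2], "big", signed=True) * 0.1
```

In a real system these decoders would be fed by frames read from the bus (for example via a SocketCAN interface); only the decoding logic is shown here so the sketch stays self-contained.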
  • S2, a history panoramic image in a previous state of the vehicle is acquired.
  • S3, a position mapping relationship between the history panoramic image and a panoramic image in the current state is obtained according to the speed and the steering wheel angle in the current state.
  • In the current state B shown in FIG. 2, the shaded area M represents a part of the area under the vehicle. This area cannot be captured by a camera, so, in theory, no image data can be obtained for it.
  • In the previous state A, the same area M lies around the vehicle body, and the camera can acquire an image of this area.
  • the history panoramic image of the previous state of the vehicle may be used to pad the image for the area under the vehicle in the current state of the vehicle.
  • a position mapping relationship between the history panoramic image and the panoramic image in the current state needs to be acquired.
  • the position mapping relationship between panoramic images of the vehicle in different states may be calculated according to the speed and the steering wheel angle acquired from the CAN network in the vehicle body.
  • positions of points in the area under the vehicle in the current state that correspond to the image of the area around the vehicle body in the previous state can be acquired, and thereby the image of the area under the vehicle in the current state can be generated according to the image of the area around the vehicle body of the vehicle in the previous state.
  • the image of the area under the vehicle in the current state and the image of the area around the vehicle body in the current state are stitched to obtain the panoramic image in the current state for a user to view, and the image shows a fuller area, which greatly improves user experience.
  • obtaining a position mapping relationship between the history panoramic image and a panoramic image in the current state according to the speed and the steering wheel angle in the current state includes: obtaining a vehicle wheel angle in the current state according to the steering wheel angle in the current state; obtaining, according to the vehicle wheel angle and the speed, a central angle by which the vehicle turns from the previous state to the current state; creating a coordinate system in the current state of the vehicle according to the vehicle wheel angle; acquiring first coordinates of at least three vehicle wheel positions in the coordinate system in the current state of the vehicle, and obtaining second coordinates of the at least three vehicle wheel positions in the coordinate system in the previous state of the vehicle according to the central angle; and calculating the position mapping relationship according to the first coordinates and the second coordinates.
  • As shown in FIG. 3, for example, four cameras, C1, C2, C3, and C4, are installed around the vehicle.
  • a visible area for panoramic image stitching is the shaded area, and limited by areas captured by the cameras, an area under the vehicle is invisible.
  • An effect intended to be realized by the method for generating an image of an area under a vehicle in this embodiment of the present disclosure is shown in FIG. 4: an image of the area under the vehicle can also be displayed, so as to eliminate the blind area under the vehicle.
  • a movement locus of wheels is a circle.
  • a block including A, B, C, and D represents a previous state of the vehicle
  • a block including A', B', C', and D' represents a current state of the vehicle
  • A, B, C, and D, and A', B', C', and D' respectively represent four wheels in the two states.
  • AB represents a wheel tread
  • AC represents a wheel base.
  • a vector V represents speed information collected from the CAN network in the vehicle body
  • a tangent vector V_L of a circle passing through C represents the direction in which the left front wheel is moving
  • an angle α formed between the tangent vector V_L and the vehicle body represents the angle by which the left front wheel turns (which is obtained through calculation according to steering wheel angle information from the CAN network in the vehicle body)
  • An angle β represents the angle, in radians, by which the entire vehicle body turns about the origin O when the vehicle moves from the previous state to the current state.
  • the vehicle moves in a circular motion with the center O as an origin.
  • a position of the center O constantly changes with a vehicle wheel angle.
  • a manner of determining the center O is as follows: if the vehicle turns left, as shown in FIG. 5, the center O is the point at which the perpendiculars to the speed directions (arc tangents) of the left front wheel (the point C) and the left back wheel (the point A) intersect; if the vehicle turns right, the center is on the right of the vehicle (that is, FIG. 5 is horizontally mirrored).
  • the vehicle wheel angle α is calculated according to formula (1) when the vehicle turns right, or according to formula (2) when it turns left:
  • α_r = -0.21765 - 0.05796*θ + 9.62064*10^-6*θ^2 - 1.63785*10^-8*θ^3 (1)
  • where α_r is the vehicle wheel angle of a right wheel relative to the vehicle body when the vehicle turns right,
  • α_l, given by formula (2), is the vehicle wheel angle of a left wheel relative to the vehicle body when the vehicle turns left, and
  • θ is the steering wheel angle.
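Formula (1) can be evaluated directly. The coefficients below are reproduced exactly as printed above; units follow the source, and the left-turn counterpart (formula (2)) is not reproduced here.

```python
def right_wheel_angle(theta: float) -> float:
    """Formula (1): right-wheel angle as a cubic polynomial of the
    steering wheel angle theta (coefficients as printed in the text)."""
    return (-0.21765
            - 0.05796 * theta
            + 9.62064e-6 * theta ** 2
            - 1.63785e-8 * theta ** 3)
```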
  • the left back wheel has a minimum turning radius, where the minimum turning radius R_min may be calculated according to a formula of
  • R_min = AC*cot(α) (3)
  • a rectangular coordinate system is created with the center O as an origin, the direction of R_min as an X axis, and the line that passes through the point O and is upward perpendicular to the X axis as a Y axis.
  • the coordinates of the points A, B, and C in the XY coordinate system are (R_min, 0), (R_min+AB, 0), and (R_min, AC), respectively.
  • the radius R_mid corresponding to the movement locus of the middle point between the front wheels of the vehicle is calculated by a formula of
  • R_mid = sqrt((R_min + AB/2)^2 + AC^2) (4), where
  • AC is the wheel base of the vehicle,
  • AB is the wheel tread of the vehicle, and
  • R_min is the minimum turning radius of the vehicle.
  • the central angle β by which the vehicle turns from the previous state to the current state is calculated by a formula of
  • β = V*T/R_mid (5), where
  • R_mid is the radius corresponding to the movement locus of the middle point between the front wheels of the vehicle,
  • V is the speed of the vehicle, and
  • T is the period of time taken by the vehicle from the previous state to the current state.
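Formulas (3) to (5) chain together as below. The body of formula (4) is reconstructed from the XY coordinates given in the text (the middle point between the front wheels sits at (R_min + AB/2, AC), so its distance from O follows by Pythagoras); treat this as a sketch under that assumption, not the patent's literal expression.

```python
import math


def min_turning_radius(ac: float, alpha: float) -> float:
    """Formula (3): R_min = AC * cot(alpha), radius of the left back wheel A."""
    return ac / math.tan(alpha)


def mid_front_radius(ac: float, ab: float, r_min: float) -> float:
    """Reconstructed formula (4): distance from O to the middle point
    between the front wheels, located at (R_min + AB/2, AC)."""
    return math.hypot(r_min + ab / 2, ac)


def central_angle(v: float, t: float, r_mid: float) -> float:
    """Formula (5): beta = arc length V*T divided by radius R_mid."""
    return v * t / r_mid
```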
  • an X'Y' rectangular coordinate system is created with OA' as an X' axis, and the direction that is upward perpendicular to OA' as a Y' axis. It can be known that the coordinates of the points A', B', and C' in the X'Y' rectangular coordinate system are A' (R_min, 0), B' (R_min+A'B', 0), and C' (R_min, A'C') respectively.
  • a perpendicular line to OA' is drawn through the point A, and it can be known that the position of the point A in the X'Y' coordinate system is A (R_min*cos β, -R_min*sin β).
  • the coordinates of B and C in the X'Y' coordinate system may be obtained according to the coordinates of the point A and the central angle β by which the vehicle turns, as follows:
  • B (A.x + AB*cos β, A.y - AB*sin β)
  • C (A.x + AC*sin β, A.y + AC*cos β)
  • where A.x = R_min*cos β and A.y = -R_min*sin β.
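The current-state and previous-state wheel coordinates can be generated as three point pairs; below is a sketch with illustrative parameter names (r_min, ab, ac, and beta correspond to R_min, AB, AC, and the central angle).

```python
import math


def wheel_point_pairs(r_min, ab, ac, beta):
    """Return ([A', B', C'], [A, B, C]) in the X'Y' system, per the text."""
    current = [(r_min, 0.0), (r_min + ab, 0.0), (r_min, ac)]   # A', B', C'
    ax, ay = r_min * math.cos(beta), -r_min * math.sin(beta)   # point A
    previous = [
        (ax, ay),                                              # A
        (ax + ab * math.cos(beta), ay - ab * math.sin(beta)),  # B
        (ax + ac * math.sin(beta), ay + ac * math.cos(beta)),  # C
    ]
    return current, previous
```

With beta = 0 (no turn between frames) the two states coincide, as expected.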
  • the position mapping relationship is calculated according to the first coordinates and the second coordinates in an affine transformation manner, a perspective transformation manner, a four-point bilinear interpolation manner, or the like.
  • An affine transformation manner is used as an example for description below.
  • Because the coordinates of the three points A, B, and C in the X'Y' coordinate system in the previous state of the vehicle and the corresponding coordinates of A', B', and C' in the current state are known, six coefficients in an affine transformation relational expression may be obtained, where the affine transformation relational expression is as follows:
  • x' = a1*x + b1*y + c1 (6), and
  • y' = a2*x + b2*y + c2 (7).
  • Values of a1, b1, c1, a2, b2, and c2 may be obtained by substituting the foregoing coordinates of the three pairs of points into formulas (6) and (7). In this way, the position mapping relationship between the history panoramic image in the previous state and the panoramic image in the current state of the vehicle is obtained.
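One way to obtain the six coefficients of formulas (6) and (7) from the three point pairs is to solve the two 3x3 linear systems directly; the sketch below uses Cramer's rule in plain Python (a real implementation might instead use a linear-algebra routine).

```python
def affine_from_pairs(src, dst):
    """Solve x' = a1*x + b1*y + c1 and y' = a2*x + b2*y + c2
    from three point pairs src -> dst, by Cramer's rule."""
    (x1, y1), (x2, y2), (x3, y3) = src
    det = x1 * (y2 - y3) - y1 * (x2 - x3) + (x2 * y3 - x3 * y2)

    def solve(v1, v2, v3):
        # Coefficients for one output coordinate given its three target values.
        a = (v1 * (y2 - y3) - y1 * (v2 - v3) + (v2 * y3 - v3 * y2)) / det
        b = (x1 * (v2 - v3) - v1 * (x2 - x3) + (x2 * v3 - x3 * v2)) / det
        c = (x1 * (y2 * v3 - y3 * v2) - y1 * (x2 * v3 - x3 * v2)
             + v1 * (x2 * y3 - x3 * y2)) / det
        return a, b, c

    row_x = solve(dst[0][0], dst[1][0], dst[2][0])  # (a1, b1, c1)
    row_y = solve(dst[0][1], dst[1][1], dst[2][1])  # (a2, b2, c2)
    return row_x, row_y
```

For a pure translation of (+2, +3), the solver should recover a1 = b2 = 1, b1 = a2 = 0, c1 = 2, c2 = 3.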
  • generating the image of the area under the vehicle in the panoramic image in the current state of the vehicle according to the position mapping relationship and the history panoramic image includes: calculating, according to the position mapping relationship, positions of all points in the area under the vehicle in the current state that correspond to the previous state of the vehicle; and generating the image of the area under the vehicle in the panoramic image in the current state of the vehicle according to a history panoramic image of the positions that correspond to the previous state of the vehicle.
  • the affine transformation relational expression is still used as an example. After the six coefficients in the affine transformation relational expression are obtained, affine transformation is performed on all the points in the area under the vehicle according to the expressions shown in (6) and (7) , and coordinates of points in a history state (that is, the previous state) that correspond to all the points in the current state are obtained. Then, the points in the history state (that is, the previous state) that correspond to all the points in the current state are used to pad the points in the area in the current state of the vehicle, so as to complete a process of re-stitching and displaying.
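The padding step can be sketched as below, with each panorama modelled as a dictionary of pixels and nearest-neighbour sampling; coeffs is the ((a1, b1, c1), (a2, b2, c2)) pair from formulas (6) and (7). The data structures are illustrative, not the patent's.

```python
def pad_under_vehicle(current, history, under_points, coeffs):
    """Apply formulas (6) and (7) to every under-vehicle point and copy the
    corresponding history pixel into the current panorama."""
    (a1, b1, c1), (a2, b2, c2) = coeffs
    out = dict(current)
    for (x, y) in under_points:
        hx = round(a1 * x + b1 * y + c1)   # nearest-neighbour sampling
        hy = round(a2 * x + b2 * y + c2)
        if (hx, hy) in history:            # pad only where history has data
            out[(x, y)] = history[(hx, hy)]
    return out
```

Points whose mapped position falls outside the history image are simply left unpadded; as the vehicle keeps moving, later frames fill them in, matching the "gradually padded to be complete" behaviour described below.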
  • In the related art, the vehicle in a panoramic image is shown as an opaque logo icon, and information on the area under the vehicle cannot be obtained, as shown, for example, in FIG. 6.
  • In this embodiment, the opacity of the logo icon for the vehicle may be changed to show the image of the area under the vehicle, so as to display the blind area under the vehicle body. A displaying effect is shown, for example, in FIG. 7.
  • the shaded area M in the B state may be padded by the image of the area around the vehicle body in the A state. As the vehicle continues moving, the image of the area under the vehicle is gradually padded to be complete.
  • the speed and the steering wheel angle in the current state of the vehicle are acquired; the history panoramic image in the previous state of the vehicle is acquired; the position mapping relationship between the history panoramic image and the panoramic image in the current state is obtained according to the speed and the steering wheel angle; and the image of the area under the vehicle in the panoramic image in the current state of the vehicle is generated according to the position mapping relationship and the history panoramic image.
  • the present disclosure further provides an apparatus for generating an image of an area under a vehicle.
  • FIG. 8 is a schematic block diagram of an apparatus for generating an image of an area under a vehicle according to an embodiment of the present disclosure.
  • the apparatus for generating an image of an area under a vehicle in this embodiment of the present disclosure includes a traveling information acquisition module 10, a history information acquisition module 20, a mapping relationship acquisition module 30, and a generation module 40.
  • the traveling information acquisition module 10 is configured to acquire a speed and a steering wheel angle in a current state of the vehicle.
  • the traveling information acquisition module 10 may acquire message information about the speed and the steering wheel angle of the vehicle from a CAN network in a vehicle body.
  • the history information acquisition module 20 is configured to acquire a history panoramic image in a previous state of the vehicle.
  • the mapping relationship acquisition module 30 is configured to obtain a position mapping relationship between the history panoramic image and a panoramic image in the current state according to the speed and the steering wheel angle.
  • In the current state B shown in FIG. 2, the shaded area M represents a part of the area under the vehicle. This area cannot be captured by a camera, so, in theory, no image data can be obtained for it.
  • In the previous state A, the same area M lies around the vehicle body, and a camera can acquire an image of this area.
  • the history panoramic image of the previous state of the vehicle may be used to pad the image for the area under the vehicle in the current state of the vehicle.
  • a position mapping relationship between the history panoramic image and the panoramic image in the current state needs to be acquired.
  • the mapping relationship acquisition module 30 may calculate the position mapping relationship between panoramic images of the vehicle in different states according to the speed and the steering wheel angle acquired from the CAN network in the vehicle body.
  • the generation module 40 is configured to generate the image of the area under the vehicle in the panoramic image in the current state of the vehicle according to the position mapping relationship and the history panoramic image in the previous state of the vehicle.
  • positions of points in the area under the vehicle in the current state that correspond to the image of the area around the vehicle body in the previous state can be acquired, and thereby the image of the area under the vehicle in the current state can be generated according to the image of the area around the vehicle body of the vehicle in the previous state.
  • the image of the area under the vehicle in the current state and the image of the area around the vehicle body in the current state are stitched to obtain the panoramic image in the current state for a user to view, and the image shows a fuller area, which greatly improves user experience.
  • the mapping relationship acquisition module 30 is configured to: calculate a vehicle wheel angle according to the steering wheel angle; acquire, according to the vehicle wheel angle, a radius corresponding to a movement locus of a middle point between front wheels of the vehicle; calculate, according to the radius and the speed, a central angle by which the vehicle turns from the previous state to the current state; create a coordinate system in the current state of the vehicle; acquire first coordinates of at least three vehicle wheel positions in the coordinate system in the current state of the vehicle, and acquire second coordinates of the at least three vehicle wheel positions in the coordinate system in the previous state of the vehicle according to the central angle; and calculate the position mapping relationship according to the first coordinates and the second coordinates.
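A minimal wiring of the four modules in FIG. 8 might look like the following sketch; each module is modelled as a caller-supplied callable, and all interfaces here are hypothetical rather than taken from the patent.

```python
class UnderVehicleApparatus:
    """Wires the four modules of the apparatus together (hypothetical API)."""

    def __init__(self, get_travel_info, get_history, get_mapping, generate):
        self.get_travel_info = get_travel_info  # module 10: speed, steering angle
        self.get_history = get_history          # module 20: history panorama
        self.get_mapping = get_mapping          # module 30: position mapping
        self.generate = generate                # module 40: under-vehicle image

    def run_once(self):
        speed, angle = self.get_travel_info()
        history = self.get_history()
        mapping = self.get_mapping(speed, angle)
        return self.generate(mapping, history)
```

The division of labour mirrors the text: module 30 consumes the travel information, and module 40 consumes module 30's mapping together with module 20's history image.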
  • the mapping relationship acquisition module 30 is configured to: acquire a minimum turning radius of the vehicle according to the vehicle wheel angle; and acquire, according to the minimum turning radius of the vehicle, a radius corresponding to a movement locus of a middle point between front wheels of the vehicle.
  • A process of obtaining the position mapping relationship by the mapping relationship acquisition module 30 is described in detail below.
  • As shown in FIG. 3, for example, four cameras, C1, C2, C3, and C4, are installed around the vehicle.
  • a visible area for panoramic image stitching is the shaded area, and limited by areas captured by the cameras, an area under the vehicle is not visible.
  • An effect intended to be realized by the apparatus for generating an image of an area under a vehicle in this embodiment of the present disclosure is shown in FIG. 4: an image of the area under the vehicle can also be displayed, so as to eliminate the blind area under the vehicle.
  • the mapping relationship acquisition module calculates the position mapping relationship between the history panoramic image and the panoramic image in the current state.
  • A specific implementation is as follows (the general case, that is, a turning case, is discussed herein; moving straight is a special case of turning).
  • a movement locus of wheels is a circle.
  • a block including A, B, C, and D represents a previous state of the vehicle
  • a block including A', B', C', and D' represents a current state of the vehicle
  • A, B, C, and D, and A', B', C', and D' respectively represent four wheels in the two states.
  • AB represents a wheel tread
  • AC represents a wheel base.
  • a vector V represents speed information collected from the CAN network in the vehicle body
  • a tangent vector V_L of a circle passing through C represents the direction in which the left front wheel is moving
  • an angle α formed between the tangent vector V_L and the vehicle body represents the angle by which the left front wheel turns (which is obtained through calculation according to steering wheel angle information from the CAN network in the vehicle body)
  • An angle β represents the angle, in radians, by which the entire vehicle body turns about the origin O when the vehicle moves from the previous state to the current state.
  • the vehicle moves in a circular motion with the center O as an origin.
  • a position of the center O constantly changes with a vehicle wheel angle.
  • a manner of determining the center O is as follows: if the vehicle turns left, as shown in FIG. 5, the center O is the point at which the perpendiculars to the speed directions (arc tangents) of the left front wheel (the point C) and the left back wheel (the point A) intersect; if the vehicle turns right, the center is on the right of the vehicle (that is, FIG. 5 is horizontally mirrored).
  • the mapping relationship acquisition module 30 calculates the vehicle wheel angle α according to formula (1) or (2).
  • the left back wheel has a minimum turning radius, where the minimum turning radius R_min may be calculated according to formula (3).
  • a rectangular coordinate system is created with the center O as an origin, the direction of R_min as an X axis, and the line that passes through the point O and is upward perpendicular to the X axis as a Y axis.
  • the coordinates of the points A, B, and C in the XY coordinate system are (R_min, 0), (R_min+AB, 0), and (R_min, AC), respectively.
  • the mapping relationship acquisition module 30 calculates the radius R_mid corresponding to the movement locus of the middle point between the front wheels of the vehicle according to formula (4).
  • a video processing speed of the panoramic image system of the vehicle reaches a real-time state, that is, 30 fps, so the interval between frames is about 33 milliseconds, which is denoted as T.
  • the arc length by which the middle point E between the front wheels moves in the V direction is V*T.
  • the central angle β by which E turns is as shown in formula (5).
  • an X'Y' rectangular coordinate system is created with OA' as an X' axis, and the direction that is upward perpendicular to OA' as a Y' axis. It can be known that the coordinates of the points A', B', and C' in the X'Y' rectangular coordinate system are A' (R_min, 0), B' (R_min+A'B', 0), and C' (R_min, A'C') respectively.
  • a perpendicular line to OA' is drawn through the point A, and it can be known that the position of the point A in the X'Y' coordinate system is A (R_min*cos β, -R_min*sin β).
  • the coordinates of B and C in the X'Y' coordinate system may be obtained according to the coordinates of the point A and the central angle β by which the vehicle turns, as follows:
  • B (A.x + AB*cos β, A.y - AB*sin β)
  • C (A.x + AC*sin β, A.y + AC*cos β)
  • where A.x = R_min*cos β and A.y = -R_min*sin β.
  • the mapping relationship acquisition module 30 calculates the position mapping relationship according to the first coordinates and the second coordinates in an affine transformation manner, a perspective transformation manner, or a four-point bilinear interpolation manner.
  • An affine transformation manner is used as an example for description below.
  • Because the coordinates of the three points A, B, and C in the X'Y' coordinate system in the previous state of the vehicle and the corresponding coordinates of A', B', and C' in the current state are known, six coefficients in the affine transformation relational expression may be obtained, where the affine transformation relational expression is as shown in formulas (6) and (7).
  • Values of a1, b1, c1, a2, b2, and c2 may be obtained by substituting the foregoing coordinates of the three pairs of points into formulas (6) and (7). In this way, the position mapping relationship between the history panoramic image in the previous state and the panoramic image in the current state of the vehicle is obtained.
  • the generation module 40 is configured to: calculate, according to the position mapping relationship, positions of all points in the area under the vehicle in the current state that correspond to the previous state of the vehicle; and generate the image of the area under the vehicle in the panoramic image in the current state of the vehicle according to a history panoramic image of the positions that correspond to the previous state of the vehicle.
  • the affine transformation relational expression is still used as an example.
  • the generation module 40 performs affine transformation on all the points in the area under the vehicle according to the expressions shown in (6) and (7) , and obtains coordinates of points in a history state (that is, the previous state) that correspond to all the points in the current state. Then, the generation module 40 uses the points in the history state (that is, the previous state) that correspond to all the points in the current state to pad the points in the area in the current state of the vehicle, so as to complete a process of re-stitching and displaying.
  • the vehicle shown in a panoramic image displayed in the vehicle is an opaque logo icon, and information on the area under the vehicle cannot be obtained, as shown in FIG. 6, for example.
  • opacity of the logo icon for the vehicle may be changed to show information of an image of the area under the vehicle, so as to achieve the purpose of displaying a blind area under the vehicle body. For example, a displaying effect is shown in FIG. 7.
  • the shaded area M in the B state may be padded by the image of the area around the vehicle body in the A state. As the vehicle continues moving, the image of the area under the vehicle is gradually padded to be complete.
  • the speed and the steering wheel angle in the current state of the vehicle are acquired by the traveling information acquisition module; the history panoramic image in the previous state of the vehicle is acquired by the history information acquisition module; the position mapping relationship between the history panoramic image and the panoramic image in the current state is obtained by the mapping relationship acquisition module according to the speed and the steering wheel angle; and the image of the area under the vehicle in the panoramic image in the current state of the vehicle is generated by the generation module according to the position mapping relationship and the history panoramic image.
  • a range displayed through panoramic image stitching is extended, so that image information can also be displayed for the area under the vehicle body that is invisible for a camera, which improves safety during driving, enriches panoramic displaying functions, and improves user experience.
  • the present disclosure further provides a vehicle.
  • the vehicle includes the apparatus for generating an image of an area under a vehicle in the embodiments of the present disclosure.
  • because the vehicle in this embodiment of the present disclosure is equipped with the apparatus for generating an image of an area under a vehicle, a range displayed through panoramic image stitching is extended, so that image information can also be displayed for the area under the vehicle body that is invisible for a camera, which improves safety during driving, enriches panoramic displaying functions, and improves user experience.
  • orientation or position relationships indicated by terms “center” , “longitudinal” , “lateral” , “length” , “width” , “thickness” , “upper” , “lower” , “front” , “back” , “left” , “right” , “vertical” , “horizontal” , “top” , “bottom” , “internal” , “external” , “clockwise” , “anticlockwise” , “axial” , “radial” , “circumferential” , and the like are orientation or position relationships shown in the accompanying drawings, and are for purpose of convenient and simplified description of the present disclosure, rather than for indicating or implying that indicated apparatuses or elements need to be in a particular orientation, or configured and operated in a particular orientation, and therefore should not be understood as limitation to the present disclosure.
  • “first” and “second” are merely for purpose of description, and should not be understood as indicating or implying relative importance or implicitly specifying a quantity of indicated technical features. Therefore, features limited by “first” and “second” may explicitly or implicitly include at least one feature. In the description of the present disclosure, unless explicitly or specifically specified otherwise, “multiple” means at least two, for example, two, three, or the like.
  • a connection may be a fixed connection, or may be a detachable connection, or may be integrated; the connection may be a mechanical connection, or may be an electrical connection; the connection may be a direct connection, or may be an indirect connection through an intermediate medium; and the connection may be an internal connection between two elements or an interactional relationship between two elements, unless explicitly specified otherwise.
  • a person of ordinary skill in the art can understand specific meanings of the foregoing terms in the present disclosure according to specific situations.
  • a first feature is “above” or “below” a second feature may indicate that the first feature and the second feature contact directly, or that the first feature and the second feature contact through an intermediate medium.
  • a first feature is “above” , “over” , or “on” a second feature may indicate that the first feature is right above or slantways above the second feature, or merely indicate that the first feature is higher than the second feature.
  • a first feature is “below” or “under” a second feature may indicate that the first feature is right below or slantways below the second feature, or merely indicate that the first feature is lower than the second feature.
  • reference terms “an embodiment” , “some embodiments” , “example” , “specific example” , “some examples” , and the like mean that specific characteristics, structures, materials, or features described with reference to the embodiment or example are included in at least one embodiment or example of the present disclosure.
  • references to the foregoing terms do not necessarily refer to the same embodiment or example.
  • the described specific characteristics, structures, materials, or features may be combined in an appropriate manner in any one embodiment or in multiple embodiments.
  • a person skilled in the art may join or combine different embodiments or examples or characteristics of different embodiments or examples described in this specification.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Signal Processing (AREA)
  • Mechanical Engineering (AREA)
  • Image Processing (AREA)
  • Closed-Circuit Television Systems (AREA)
  • Image Analysis (AREA)

Abstract

A method and an apparatus for generating an image of an area under a vehicle, and a vehicle are disclosed. The method includes: acquiring a speed and a steering wheel angle in a current state of the vehicle; acquiring a history panoramic image in a previous state of the vehicle; obtaining a position mapping relationship between the history panoramic image and a panoramic image in the current state according to the speed and the steering wheel angle; and generating the image of the area under the vehicle in the panoramic image in the current state of the vehicle according to the position mapping relationship and the history panoramic image. The method improves safety during driving, enriches panoramic displaying functions, and improves user experience.

Description

METHOD AND APPARATUS FOR GENERATING IMAGE OF AREA UNDER VEHICLE, AND VEHICLE
CROSS-REFERENCE TO RELATED APPLICATION
This application is based upon and claims a priority to Chinese Patent Application Serial No. 201510690780.1, filed with the State Intellectual Property Office of P. R. China on October 22, 2015, the entire contents of which are incorporated herein by reference.
FIELD
The present disclosure relates to the field of vehicle technologies, and in particular, to a method and an apparatus for generating an image of an area under a vehicle, and a vehicle.
BACKGROUND
With rapid development of electronic product technologies, users have increasingly high requirements for experience of electronic products. For example, for a conventional panoramic image displaying system of a vehicle, only a visible range captured by cameras around the vehicle body can be displayed by image stitching, and requirements cannot be satisfied when the user intends to learn information within a displayed area more fully.
For example, when the vehicle is traveling, because cameras cannot capture an area under the vehicle body, a real time image cannot be generated for the user to view. Therefore, environmental information of the area under the vehicle body cannot be obtained, which leads to poor user experience.
SUMMARY
The present disclosure seeks to resolve at least one of the technical problems in the related art to at least some extent. In view of this, an objective of the present disclosure is to provide a method for generating an image of an area under a vehicle. With the method, a range displayed through panoramic image stitching is extended, so that image information can also be displayed for an area under a vehicle body that is invisible for a camera, which improves safety during driving, enriches panoramic displaying functions, and improves user experience.
A second objective of the present disclosure is to provide an apparatus for generating an  image of an area under a vehicle.
A third objective of the present disclosure is to provide a vehicle.
To achieve the foregoing objectives, a method for generating an image of an area under a vehicle in embodiments according to a first aspect of the present disclosure includes: acquiring a speed and a steering wheel angle in a current state of the vehicle; acquiring a history panoramic image in a previous state of the vehicle; obtaining a position mapping relationship between the history panoramic image and a panoramic image in the current state according to the speed and the steering wheel angle; and generating the image of the area under the vehicle in the panoramic image in the current state of the vehicle according to the position mapping relationship and the history panoramic image.
According to the method for generating an image of an area under a vehicle in the embodiments of the present disclosure, the speed and the steering wheel angle in the current state of the vehicle are acquired; the history panoramic image in the previous state of the vehicle is acquired; the position mapping relationship between the history panoramic image and the panoramic image in the current state is obtained according to the speed and the steering wheel angle; and the image of the area under the vehicle in the panoramic image in the current state of the vehicle is generated according to the position mapping relationship and the history panoramic image. By the method, a range displayed through panoramic image stitching is extended, so that image information can also be displayed for the area under the vehicle body that is invisible for a camera, which improves safety during driving, enriches panoramic displaying functions, and improves user experience.
To achieve the foregoing objectives, an apparatus for generating an image of an area under a vehicle in embodiments according to a second aspect of the present disclosure includes: a traveling information acquisition module, configured to acquire a speed and a steering wheel angle in a current state of the vehicle; a history information acquisition module, configured to acquire a history panoramic image in a previous state of the vehicle; a mapping relationship acquisition module, configured to obtain a position mapping relationship between the history panoramic image and a panoramic image in the current state according to the speed and the steering wheel angle; and a generation module, configured to generate the image of the area under the vehicle in the panoramic image in the current state of the vehicle according to the position mapping relationship and the history panoramic image.
According to the apparatus for generating an image of an area under a vehicle in the embodiments of the present disclosure, the speed and the steering wheel angle in the current state of the vehicle are acquired by the traveling information acquisition module; the history panoramic image in the previous state of the vehicle is acquired by the history information acquisition module; the position mapping relationship between the history panoramic image and the panoramic image in the current state is obtained by the mapping relationship acquisition module according to the speed and the steering wheel angle; and the image of the area under the vehicle in the panoramic image in the current state of the vehicle is generated by the generation module according to the position mapping relationship and the history panoramic image. By the apparatus, a range displayed through panoramic image stitching is extended, so that image information can also be displayed for the area under the vehicle body that is invisible for a camera, which improves safety during driving, enriches panoramic displaying functions, and improves user experience.
Because the vehicle in this embodiment of the present disclosure is equipped with the apparatus for generating an image of an area under a vehicle, a range displayed through panoramic image stitching is extended, so that image information can also be displayed for the area under the vehicle body that is invisible for a camera, which improves safety during driving, enriches panoramic displaying functions, and improves user experience.
To achieve the foregoing objectives, a vehicle in embodiments according to a third aspect of the present disclosure includes the apparatus for generating an image of an area under a vehicle in the embodiments according to the second aspect of the present disclosure.
To achieve the foregoing objectives, an electronic device in embodiments according to a fourth aspect of the present disclosure includes: a shell; a processor; a memory; a circuit board; and a power supply circuit, in which the circuit board is located in a space formed by the shell, and the processor and the memory are arranged on the circuit board; the power supply circuit is configured to supply power for each circuit or component in the electronic device; the memory is configured to store executable program codes; and the processor is configured to execute a program corresponding to the executable program codes by reading the executable program codes stored in the memory, so as to perform the method according to embodiments of the first aspect of the present disclosure.
To achieve the foregoing objectives, a storage medium in embodiments according to a fifth aspect of the present disclosure has one or more modules stored therein, which, when executed, cause the method according to embodiments of the first aspect of the present disclosure to be performed.
To achieve the foregoing objectives, an application program in embodiments according to a sixth aspect of the present disclosure is configured to perform the method according to embodiments of the first aspect of the present disclosure when executed.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a flowchart of a method for generating an image of an area under a vehicle according to an embodiment of the present disclosure;
FIG. 2 shows schematic diagrams of a current state B and a previous state A of a vehicle when the vehicle is moving according to an embodiment of the present disclosure;
FIG. 3 is a schematic diagram of panoramic image displaying in the related art;
FIG. 4 is a schematic diagram of panoramic image displaying according to an embodiment of the present disclosure;
FIG. 5 is a detailed diagram of movement states of a vehicle according to an embodiment of the present disclosure;
FIG. 6 is a schematic diagram of panoramic image displaying for a specific scenario in related technologies;
FIG. 7 is a schematic diagram of panoramic image displaying according to a specific embodiment of the present disclosure; and
FIG. 8 is a schematic block diagram of an apparatus for generating an image of an area under a vehicle according to an embodiment of the present disclosure.
DETAILED DESCRIPTION
Embodiments of the present disclosure are described in detail below. Examples of the embodiments are shown in the accompanying drawings, where same or similar reference numbers always represent same or similar elements or elements having same or similar functions. Embodiments described below with reference to the accompanying drawings are exemplary, and are intended to explain the present disclosure, rather than to limit the present disclosure.
FIG. 1 is a flowchart of a method for generating an image of an area under a vehicle according to an embodiment of the present disclosure. As shown in FIG. 1, the method for generating an image of an area under a vehicle according to this embodiment of the present disclosure includes the following steps.
S1: a speed and a steering wheel angle in a current state of the vehicle are acquired.
In at least one embodiment of the present disclosure, a controller of a vehicle panoramic image system may acquire message information about the speed and the steering wheel angle of the vehicle from a CAN network in a vehicle body.
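As a concrete illustration of this step, the sketch below decodes a speed and a steering wheel angle from raw CAN frames. The message IDs (`SPEED_MSG_ID`, `STEERING_MSG_ID`) and the byte layouts are hypothetical placeholders: real layouts are defined by the vehicle maker's CAN database and differ per model.

```python
# Sketch of decoding speed and steering wheel angle from raw CAN frames.
# The arbitration IDs and byte layouts below are HYPOTHETICAL -- real
# layouts come from the vehicle maker's CAN database (DBC file).
import struct

SPEED_MSG_ID = 0x1F0      # hypothetical ID carrying vehicle speed
STEERING_MSG_ID = 0x1F5   # hypothetical ID carrying steering wheel angle

def decode_frame(arbitration_id, data):
    """Return ('speed', km/h) or ('steering', degrees) for known frames."""
    if arbitration_id == SPEED_MSG_ID:
        # assume speed is a big-endian unsigned 16-bit value in 0.01 km/h units
        raw, = struct.unpack_from('>H', data, 0)
        return 'speed', raw * 0.01
    if arbitration_id == STEERING_MSG_ID:
        # assume angle is a big-endian signed 16-bit value in 0.1 degree
        # units, positive meaning a left turn
        raw, = struct.unpack_from('>h', data, 0)
        return 'steering', raw * 0.1
    return None, None
```

In practice the controller would subscribe to these two messages on the in-vehicle CAN bus and feed the decoded values to the mapping calculation described below.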
S2: a history panoramic image in a previous state of the vehicle is acquired.
S3: a position mapping relationship between the history panoramic image and a panoramic image in the current state is obtained according to the speed and the steering wheel angle in the current state.
For example, as shown in FIG. 2, when the vehicle moves from an A state to a B state, there are two meanings for an area indicated by a shaded area M.
(1) For the current state B of the vehicle, the shaded area M represents a part of an area under the vehicle. This area cannot be captured by a camera, and theoretically, image data cannot be obtained.
(2) For the previous state (that is, a history state) A of the vehicle, the shaded area M represents an image of an area around a vehicle body, and the camera can acquire an image of this area.
It can be known based on the foregoing analysis that, for the vehicle, the history panoramic image of the previous state of the vehicle may be used to pad the image for the area under the vehicle in the current state of the vehicle.
In an embodiment, to ensure that the history panoramic image in the previous state of the vehicle can be accurately padded into the image of the area under the vehicle in the current state, a position mapping relationship between the history panoramic image and the panoramic image in the current state needs to be acquired.
In another embodiment, the position mapping relationship between panoramic images of the vehicle in different states may be calculated according to the speed and the steering wheel angle acquired from the CAN network in the vehicle body.
S4: the image of the area under the vehicle in the panoramic image in the current state of the vehicle is generated according to the position mapping relationship and the history panoramic image in the previous state of the vehicle.
In an embodiment, after the position mapping relationship between the panoramic images of  the vehicle in the previous state and the current state is acquired, positions of points in the area under the vehicle in the current state that correspond to the image of the area around the vehicle body in the previous state can be acquired, and thereby the image of the area under the vehicle in the current state can be generated according to the image of the area around the vehicle body of the vehicle in the previous state.
Furthermore, the image of the area under the vehicle in the current state and the image of the area around the vehicle body in the current state are stitched to obtain the panoramic image in the current state for a user to view, and the displayed area is more complete, which greatly improves user experience.
In an embodiment of the present disclosure, obtaining a position mapping relationship between the history panoramic image and a panoramic image in the current state according to the speed and the steering wheel angle in the current state includes: obtaining a vehicle wheel angle in the current state according to the steering wheel angle in the current state; obtaining, according to the vehicle wheel angle and the speed, a central angle by which the vehicle turns from the previous state to the current state; creating a coordinate system in the current state of the vehicle according to the vehicle wheel angle; acquiring first coordinates of at least three vehicle wheel positions in the coordinate system in the current state of the vehicle, and obtaining second coordinates of the at least three vehicle wheel positions in the coordinate system in the previous state of the vehicle according to the central angle; and calculating the position mapping relationship according to the first coordinates and the second coordinates.
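The enumerated steps can be sketched end to end as follows. This is a minimal sketch under stated assumptions: the vehicle is turning left, the steering wheel angle is in degrees with a sign convention matching the cubic fit quoted later in the text, and the frame interval `dt` is fixed. Because the vehicle body rotates as a rigid body about the turning center O, the resulting affine map in the X'Y' system is a pure rotation by the central angle, so the constant terms vanish.

```python
import math

def mapping_from_motion(speed, steering_angle_deg, wheel_base, wheel_tread,
                        dt=0.033):
    """End-to-end sketch for a left turn: steering wheel angle -> wheel
    angle -> turning radii -> central angle theta -> affine coefficients
    (a1, b1, c1, a2, b2, c2) of formulas (6) and (7).
    For rigid rotation about O the map is a rotation by -theta, so
    c1 = c2 = 0."""
    w = steering_angle_deg
    # cubic fit for the left-turn wheel angle (formula (2)); units assumed
    # to be degrees for both input and output
    alpha_deg = 0.22268 - 0.05814 * w - 9.89364e-6 * w**2 - 1.76545e-8 * w**3
    alpha = math.radians(alpha_deg)
    r_min = wheel_base / math.tan(alpha)                     # formula (3)
    r_mid = math.hypot(r_min + wheel_tread / 2, wheel_base)  # formula (4)
    theta = speed * dt / r_mid                               # formula (5)
    cos_t, sin_t = math.cos(theta), math.sin(theta)
    # rotation by -theta expressed in the affine form of (6) and (7)
    return (cos_t, sin_t, 0.0, -sin_t, cos_t, 0.0)
```

The individual steps are detailed, with the formulas they implement, in the remainder of this section.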
A process of obtaining the position mapping relationship in S3 is described in detail below.
In the related art, as shown in FIG. 3, for example, four cameras, C1, C2, C3, and C4, are installed around the vehicle. Normally, a visible area for panoramic image stitching is the shaded area, and limited by areas captured by the cameras, an area under the vehicle is invisible. An effect intended to be realized by the method for generating an image of an area under a vehicle in this embodiment of the present disclosure is shown in FIG. 4, that is, an image for the area under the vehicle can also be displayed, so as to eliminate the blind area under the vehicle.
To ensure that the history panoramic image in the previous state of the vehicle can be accurately padded into the image of the area under the vehicle in the current state, current speed information and steering wheel angle information of the vehicle need to be collected from the CAN network in the vehicle body. By using these two pieces of information, the position mapping relationship between the history panoramic image and the panoramic image in the current state is calculated. A specific implementation is as follows (the general turning case is discussed herein; straight-line traveling is a special case of it) .
Premises: It has been proved that, when a vehicle is turning, at a certain moment, a movement locus of the wheels is a circle. As shown in FIG. 5, a block including A, B, C, and D represents a previous state of the vehicle, and a block including A', B', C', and D' represents a current state of the vehicle, where A, B, C, and D, and A', B', C', and D' respectively represent the four wheels in the two states. AB represents a wheel tread, and AC represents a wheel base. A vector V represents speed information collected from the CAN network in the vehicle body, a vector tangent VL of a circle passing through C represents a vector direction in which the left front wheel is moving, and an angle α formed between the vector tangent VL and the vehicle body represents an angle by which the left front wheel turns (which is obtained through calculation according to steering wheel angle information from the CAN network in the vehicle body) . An angle θ represents a radian by which the entire vehicle body turns about the origin O when the vehicle moves from the previous state to the current state. The vehicle moves in a circular motion with the center O as an origin. In addition, a position of the center O constantly changes with the vehicle wheel angle. A manner of determining the center O is as follows: if the vehicle turns left, as shown in FIG. 5, a circular coordinate system is created with the point at which the perpendiculars to the speed directions (arc tangents) of the left front wheel (the point C) and the left back wheel (the point A) intersect as the center O, and if the vehicle turns right, the center is on the right of the vehicle (that is, a horizontal mirror of FIG. 5) .
In an embodiment of the present disclosure, the vehicle wheel angle α is calculated according to formulas of
θr = -0.21765 - 0.05796*ω + 9.62064*10^-6*ω^2 - 1.63785*10^-8*ω^3        (1) , and
θl = 0.22268 - 0.05814*ω - 9.89364*10^-6*ω^2 - 1.76545*10^-8*ω^3        (2) ,
where θr is a vehicle wheel angle of a right wheel relative to a vehicle body when the vehicle turns right, θl is a vehicle wheel angle of a left wheel relative to the vehicle body when the vehicle turns left, and ω is the steering wheel angle. When the vehicle turns right, α = θr, and when the vehicle turns left, α = θl.
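A minimal sketch of formulas (1) and (2), treating the quoted coefficients as vehicle-specific calibration values and assuming both ω and the returned wheel angle are in degrees (the text does not state the units):

```python
def wheel_angle(omega, turning_left):
    """Front-wheel angle alpha for a steering wheel angle omega.
    Cubic calibration fits quoted in the text; units assumed degrees."""
    if turning_left:
        # formula (2): theta_l for a left turn
        return (0.22268 - 0.05814 * omega
                - 9.89364e-6 * omega ** 2 - 1.76545e-8 * omega ** 3)
    # formula (1): theta_r for a right turn
    return (-0.21765 - 0.05796 * omega
            + 9.62064e-6 * omega ** 2 - 1.63785e-8 * omega ** 3)
```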
It can be seen from FIG. 5 that, when the vehicle turns left, the left back wheel has a minimum turning radius, where the minimum turning radius Rmin may be calculated according to a formula of
Rmin = AC*cotα        (3) ,
where AC is the wheel base of the vehicle, and α is the vehicle wheel angle.
A rectangular coordinate system is created with the center O as an origin, a direction of Rmin as an X axis, and a line that passes through the point O and is upward perpendicular to the X axis as a Y axis. In this way, it can be known that the coordinates of the points A, B, and C in the XY coordinate system are (Rmin, 0) , (Rmin+AB, 0) , and (Rmin, AC) respectively.
Further, in an embodiment of the present disclosure, the radius Rmid corresponding to the movement locus of the middle point between the front wheels of the vehicle is calculated by a formula of
Rmid = sqrt ( (Rmin + AB/2) ^2 + AC^2)        (4) ,
where AC is the wheel base of the vehicle, AB is the wheel tread of the vehicle, and Rmin is the minimum turning radius of the vehicle.
It is assumed that a video processing speed of the panoramic image system of the vehicle reaches a real-time state, that is, 30 fps, so an interval between frames is about 33 milliseconds, which is denoted as T. During T, assuming that the vehicle moves from the block including A, B, C, and D to the block including A', B', C', and D' in FIG. 5, then for a point E, an arc length by which E moves in the V direction is V*T. According to the arc length formula, a central angle by which E turns is denoted by a formula of
β = V*T/Rmid        (5) ,
where Rmid is the radius corresponding to the movement locus of the middle point between the front wheels of the vehicle, V is the speed of the vehicle, and T is the period of time taken by the vehicle from the previous state to the current state.
The central angle is also a central angle by which all points on the vehicle turn from the previous state to the current state of the vehicle, that is, θ=β.
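Formulas (3) to (5) can be sketched together. This assumes the wheel angle α has already been converted to radians and that the speed is in units consistent with the radii (e.g. metres and metres per second):

```python
import math

def turning_geometry(wheel_base, wheel_tread, alpha, speed, dt):
    """Formulas (3)-(5): minimum turning radius, radius of the front-axle
    midpoint, and the central angle turned during one frame interval dt.
    alpha is the wheel angle in radians."""
    r_min = wheel_base / math.tan(alpha)                      # (3) Rmin = AC*cot(alpha)
    r_mid = math.hypot(r_min + wheel_tread / 2, wheel_base)   # (4)
    beta = speed * dt / r_mid                                 # (5) arc = radius * angle
    return r_min, r_mid, beta
```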
Further, an X'Y' rectangular coordinate system is created with OA' as an X' axis, and a direction that is upward perpendicular to OA' as a Y' axis. It can be known that coordinates of the points A', B', and C' in the X'Y' rectangular coordinate system are A' (Rmin, 0) , B' (Rmin+A'B', 0) , and C' (Rmin, A'C') respectively.
Even further, a perpendicular line of OA' is drawn through the point A, and it can be known that a position of the point A in the X'Y' coordinate system is A (Rmin*cosθ, -Rmin*sinθ) . Then coordinates of B and C in the X'Y' coordinate system may be obtained according to the coordinates of the point A and the central angle θ (that is, β) by which the vehicle turns, as follows:
B: (A.x + AB*cosθ, A.y - AB*sinθ) , and C: (A.x + AC*sinθ, A.y + AC*cosθ) , where A.x = Rmin*cosθ, and A.y = -Rmin*sinθ.
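The coordinate derivation above amounts to rotating the current-state wheel points about O by the central angle θ; a sketch for the left-turn case:

```python
import math

def wheel_points_previous_state(r_min, wheel_tread, wheel_base, theta):
    """Previous-state wheel points A, B, C expressed in the current-state
    X'Y' system, following the text's left-turn derivation."""
    ax = r_min * math.cos(theta)
    ay = -r_min * math.sin(theta)
    a = (ax, ay)
    b = (ax + wheel_tread * math.cos(theta), ay - wheel_tread * math.sin(theta))
    c = (ax + wheel_base * math.sin(theta), ay + wheel_base * math.cos(theta))
    return a, b, c
```

At θ = 0 the previous-state points coincide with A' (Rmin, 0), B' (Rmin+AB, 0), and C' (Rmin, AC), as expected when the vehicle has not yet turned.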
In an embodiment of the present disclosure, the position mapping relationship is calculated according to the first coordinates and the second coordinates in an affine transformation manner, a perspective transformation manner, a four-point bilinear interpolation manner, or the like.
An affine transformation manner is used as an example for description below.
In an embodiment, when the coordinates of the three points A, B, and C in the X'Y' coordinate system in the previous state of the vehicle and the corresponding coordinates of A', B', and C' in the current state are known, the six coefficients of the affine transformation relational expression may be obtained, where the affine transformation relational expression is as follows:
x' = a1*x + b1*y + c1        (6) , and
y' = a2*x + b2*y + c2        (7) .
Values of a1, b1, c1, a2, b2, and c2 may be obtained by substituting the foregoing coordinates of the three pairs of points into the formulas (6) and (7) . In this way, the position mapping relationship between the history panoramic image in the previous state and the panoramic image in the current state of the vehicle is obtained.
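A dependency-free sketch of this solve: the three current-state points A', B', C' and their previous-state counterparts A, B, C give two 3×3 linear systems sharing the same coefficient matrix, solved here with Cramer's rule.

```python
def solve_affine(src, dst):
    """Solve formulas (6) and (7) for (a1, b1, c1, a2, b2, c2) given three
    point pairs: src are the current-state points, dst the previous-state
    points they map to."""
    (x1, y1), (x2, y2), (x3, y3) = src

    def det3(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
                - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
                + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

    base = [[x1, y1, 1.0], [x2, y2, 1.0], [x3, y3, 1.0]]
    d = det3(base)  # nonzero when the three points are not collinear

    def solve_row(rhs):
        # Cramer's rule: replace one column at a time with the right-hand side
        coeffs = []
        for col in range(3):
            m = [row[:] for row in base]
            for r in range(3):
                m[r][col] = rhs[r]
            coeffs.append(det3(m) / d)
        return coeffs

    a1, b1, c1 = solve_row([p[0] for p in dst])
    a2, b2, c2 = solve_row([p[1] for p in dst])
    return a1, b1, c1, a2, b2, c2
```

A production system would more likely call an existing routine (e.g. OpenCV's `getAffineTransform`), but the arithmetic is the same.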
In an embodiment of the present disclosure, generating the image of the area under the vehicle in the panoramic image in the current state of the vehicle according to the position mapping relationship and the history panoramic image includes: calculating, according to the position mapping relationship, positions of all points in the area under the vehicle in the current state that correspond to the previous state of the vehicle; and generating the image of the area under the vehicle in the panoramic image in the current state of the vehicle according to a history panoramic image of the positions that correspond to the previous state of the vehicle.
In an embodiment, the affine transformation relational expression is still used as an example. After the six coefficients in the affine transformation relational expression are obtained, affine transformation is performed on all the points in the area under the vehicle according to the expressions shown in (6) and (7) , and coordinates of points in a history state (that is, the previous state) that correspond to all the points in the current state are obtained. Then, the points in the  history state (that is, the previous state) that correspond to all the points in the current state are used to pad the points in the area in the current state of the vehicle, so as to complete a process of re-stitching and displaying.
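A minimal sketch of this padding loop, with images modelled as `{(x, y): value}` dicts so the example stays dependency-free (a real implementation would operate on pixel buffers and interpolate rather than round):

```python
def pad_under_vehicle(history_image, region, coeffs):
    """Map each point of the current-state under-vehicle region through
    formulas (6) and (7) into the history panoramic image and copy the
    pixel found there."""
    a1, b1, c1, a2, b2, c2 = coeffs
    patched = {}
    for (x, y) in region:
        # position of this point in the previous-state (history) image
        hx = round(a1 * x + b1 * y + c1)
        hy = round(a2 * x + b2 * y + c2)
        if (hx, hy) in history_image:   # pad only where history data exists
            patched[(x, y)] = history_image[(hx, hy)]
    return patched
```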
In the related art, when the vehicle is traveling, the vehicle shown in a panoramic image displayed in the vehicle is an opaque logo icon, and information on the area under the vehicle cannot be obtained, as shown in FIG. 6, for example. When the method for generating an image of an area under a vehicle in this embodiment of the present disclosure is used, opacity of the logo icon for the vehicle may be changed to show the image of the area under the vehicle, so as to achieve the purpose of displaying the blind area under the vehicle body. For example, a displaying effect is shown in FIG. 7.
In the foregoing description of the embodiments, a case in which the vehicle body moves forward and turns left is used as an example, and principles in cases in which the vehicle body moves forward and turns right, moves backward and turns left, and moves backward and turns right are the same as the foregoing principle, and are not described herein.
In addition, it is to be noted that, as shown in FIG. 2, when the vehicle moves from the A state to the B state, the shaded area M in the B state may be padded by the image of the area around the vehicle body in the A state. As the vehicle continues moving, the image of the area under the vehicle is gradually padded to be complete.
According to the method for generating an image of an area under a vehicle in this embodiment of the present disclosure, the speed and the steering wheel angle in the current state of the vehicle are acquired; the history panoramic image in the previous state of the vehicle is acquired; the position mapping relationship between the history panoramic image and the panoramic image in the current state is obtained according to the speed and the steering wheel angle; and the image of the area under the vehicle in the panoramic image in the current state of the vehicle is generated according to the position mapping relationship and the history panoramic image. By the method, a range displayed through panoramic image stitching is extended, so that image information can also be displayed for the area under the vehicle body that is invisible for a camera, which improves safety during driving, enriches panoramic displaying functions, and improves user experience.
To implement the foregoing embodiment, the present disclosure further provides an apparatus for generating an image of an area under a vehicle.
FIG. 8 is a schematic block diagram of an apparatus for generating an image of an area under a vehicle according to an embodiment of the present disclosure. As shown in FIG. 8, the apparatus for generating an image of an area under a vehicle in this embodiment of the present disclosure includes a traveling information acquisition module 10, a history information acquisition module 20, a mapping relationship acquisition module 30, and a generation module 40.
The traveling information acquisition module 10 is configured to acquire a speed and a steering wheel angle in a current state of the vehicle.
In an embodiment, the traveling information acquisition module 10 may acquire message information about the speed and the steering wheel angle of the vehicle from a CAN network in a vehicle body.
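As a concrete illustration of this acquisition step, a decoding sketch is given below. The arbitration IDs, byte layouts, and scale factors are invented for illustration only (a real vehicle defines them in its CAN database file), so only the overall shape of the CAN message handling is meaningful.

```python
import struct

# Hypothetical sketch of decoding speed and steering-wheel-angle messages
# received from the body CAN network. The arbitration IDs, byte layouts,
# and scale factors below are assumptions for illustration; a real
# vehicle defines them in its CAN database (DBC) file.

SPEED_ID = 0x1F0   # assumed ID of the vehicle-speed message
ANGLE_ID = 0x1F5   # assumed ID of the steering-wheel-angle message

def decode_frame(can_id, data):
    """Return ('speed', value in m/s), ('angle', value in degrees),
    or None for messages this sketch does not handle."""
    if can_id == SPEED_ID:
        raw, = struct.unpack_from('>H', data, 0)   # unsigned 16-bit, big-endian
        return 'speed', raw * 0.01                 # assumed 0.01 m/s per bit
    if can_id == ANGLE_ID:
        raw, = struct.unpack_from('>h', data, 0)   # signed 16-bit, big-endian
        return 'angle', raw * 0.1                  # assumed 0.1 degree per bit
    return None
```

In a deployed system the frames would arrive through a CAN driver (for example SocketCAN on Linux); the decoder is kept pure here so the scaling logic can be examined in isolation.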
The history information acquisition module 20 is configured to acquire a history panoramic image in a previous state of the vehicle.
The mapping relationship acquisition module 30 is configured to obtain a position mapping relationship between the history panoramic image and a panoramic image in the current state according to the speed and the steering wheel angle.
For example, as shown in FIG. 2, when the vehicle moves from an A state to a B state, there are two meanings for an area indicated by a shaded area M.
(1) For the current state B of the vehicle, the shaded area M represents a part of an area under the vehicle. This area cannot be captured by a camera, and theoretically, image data cannot be obtained.
(2) For the previous state (that is, a history state) A of the vehicle, the shaded area M represents an image of an area around a vehicle body, and a camera can acquire an image of this area.
It can be known based on the foregoing analysis that, for the vehicle, the history panoramic image of the previous state of the vehicle may be used to pad the image for the area under the vehicle in the current state of the vehicle.
In an embodiment, to ensure that the history panoramic image in the previous state of the vehicle can be accurately padded into the image of the area under the vehicle in the current state, a position mapping relationship between the history panoramic image and the panoramic image in the current state needs to be acquired.
In another embodiment, the mapping relationship acquisition module 30 may calculate the position mapping relationship between panoramic images of the vehicle in different states according to the speed and the steering wheel angle acquired from the CAN network in the vehicle body.
The generation module 40 is configured to generate the image of the area under the vehicle in the panoramic image in the current state of the vehicle according to the position mapping relationship and the history panoramic image in the previous state of the vehicle.
In an embodiment, after the position mapping relationship between the panoramic images of the vehicle in the previous state and the current state is acquired, positions of points in the area under the vehicle in the current state that correspond to the image of the area around the vehicle body in the previous state can be acquired, and thereby the image of the area under the vehicle in the current state can be generated according to the image of the area around the vehicle body of the vehicle in the previous state.
Further, the image of the area under the vehicle in the current state and the image of the area around the vehicle body in the current state are stitched to obtain the panoramic image in the current state for a user to view. The stitched image covers a more complete area, which greatly improves user experience.
In an embodiment of the present disclosure, the mapping relationship acquisition module 30 is configured to: calculate a vehicle wheel angle according to the steering wheel angle; acquire, according to the vehicle wheel angle, a radius corresponding to a movement locus of a middle point between front wheels of the vehicle; calculate, according to the radius and the speed, a central angle by which the vehicle turns from the previous state to the current state; create a coordinate system in the current state of the vehicle; acquire first coordinates of at least three vehicle wheel positions in the coordinate system in the current state of the vehicle, and acquire second coordinates of the at least three vehicle wheel positions in the coordinate system in the previous state of the vehicle according to the central angle; and calculate the position mapping relationship according to the first coordinates and the second coordinates.
In an embodiment of the present disclosure, the mapping relationship acquisition module 30 is configured to: acquire a minimum turning radius of the vehicle according to the vehicle wheel angle; and acquire, according to the minimum turning radius of the vehicle, a radius corresponding to a movement locus of a middle point between front wheels of the vehicle.
A process of obtaining the position mapping relationship by the mapping relationship acquisition module 30 is described in detail below.
In the related art, as shown in FIG. 3, for example, four cameras, C1, C2, C3, and C4, are installed around the vehicle. Normally, the visible area for panoramic image stitching is the shaded area; limited by the areas captured by the cameras, the area under the vehicle is not visible. An effect intended to be realized by the apparatus for generating an image of an area under a vehicle in this embodiment of the present disclosure is shown in FIG. 4, that is, an image of the area under the vehicle can also be displayed, so as to eliminate the blind area under the vehicle.
To ensure that the history panoramic image in the previous state of the vehicle can be accurately padded into the image of the area under the vehicle in the current state, current speed information and steering wheel angle information of the vehicle need to be collected by the traveling information acquisition module 10 from the CAN network in the vehicle body. By using these two pieces of information, the mapping relationship acquisition module 30 calculates the position mapping relationship between the history panoramic image and the panoramic image in the current state. A specific implementation is as follows (the general turning case is discussed herein; straight-line travel is merely a special case of it) .
Premises: It has been proved that, while a vehicle is turning, at any given moment the movement locus of each wheel is a circular arc. As shown in FIG. 5, a block including A, B, C, and D represents a previous state of the vehicle, and a block including A', B', C', and D' represents a current state of the vehicle, where A, B, C, and D, and A', B', C', and D' respectively represent the four wheels in the two states. AB represents a wheel tread, and AC represents a wheel base. A vector V represents the speed information collected from the CAN network in the vehicle body; a tangent vector VL of the circle passing through C represents the direction in which the left front wheel is moving; and an angle α formed between the tangent vector VL and the vehicle body represents the angle by which the left front wheel turns (which is obtained through calculation according to the steering wheel angle information from the CAN network in the vehicle body) . An angle θ represents the radian by which the entire vehicle body turns relative to the origin O when the vehicle moves from the previous state to the current state. The vehicle moves in a circular motion with the center O as the origin. In addition, the position of the center O constantly changes with the vehicle wheel angle. The center O is determined as follows: if the vehicle turns left, as shown in FIG. 5, the center is the point where the perpendiculars to the speed directions (arc tangents) of the left front wheel (the point C) and the left back wheel (the point A) intersect, and a circular coordinate system is created at this point; if the vehicle turns right, the center is on the right of the vehicle (that is, FIG. 5 is mirrored horizontally) .
In an embodiment of the present disclosure, the mapping relationship acquisition module 30 calculates the vehicle wheel angle α according to formula (1) or (2) .
It can be seen from FIG. 5 that, when the vehicle turns left, the left back wheel has the minimum turning radius, where the minimum turning radius Rmin may be calculated according to formula (3) .
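The wheel-angle polynomial and the Rmin = AC*cotα step can be sketched as follows. The polynomial coefficients are those of the formulas reproduced in the claims; the angle is treated as being in degrees (an assumption, since the disclosure does not state the unit), and the wheel base value in the test is an invented example.

```python
import math

# Sketch of formulas (2) and (3): left-front-wheel angle from the
# steering wheel angle omega, then the minimum turning radius
# Rmin = AC*cot(alpha). Treating the angles as degrees is an assumption.

def left_wheel_angle(omega):
    """Vehicle wheel angle of the left wheel for a left turn (formula (2))."""
    return (0.22268 - 0.05814 * omega
            - 9.89364e-6 * omega ** 2
            - 1.76545e-8 * omega ** 3)

def min_turning_radius(wheel_base_ac, alpha_deg):
    """Rmin = AC * cot(alpha), with alpha converted from degrees here."""
    return wheel_base_ac / math.tan(math.radians(alpha_deg))
```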
A rectangular coordinate system is created with the center O as an origin, a direction of Rmin as an X axis, and a line that passes through the point O and is upward perpendicular to the X axis as a Y axis. In this way, the coordinates of the points A, B, and C in the XY coordinate system are (Rmin, 0) , (Rmin+AB, 0) , and (Rmin, AC) respectively.
Further, in an embodiment of the present disclosure, the mapping relationship acquisition module 30 calculates the radius Rmid corresponding to the movement locus of the middle point between the front wheels of the vehicle according to formula (4) .
It is assumed that the video processing speed of the panoramic image system of the vehicle reaches a real-time state, that is, 30 fps, so the interval between frames is about 33 milliseconds, which is denoted as T. During T, assuming that the vehicle moves from the block including A, B, C, and D to the block including A', B', C', and D' in FIG. 5, then for the point E (the middle point between the front wheels) , the arc length by which E moves in the V direction is V*T. According to the arc length formula, the central angle β by which E turns is as shown in formula (5) . The central angle is also the central angle by which all points on the vehicle turn from the previous state to the current state of the vehicle, that is, θ=β.
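Under the geometry of FIG. 5, the radius Rmid of the locus of the middle point between the front wheels and the central angle β from the arc-length formula can be sketched as follows. Reconstructing Rmid from Rmin, the wheel tread AB, and the wheel base AC is this sketch's reading of FIG. 5, and the numeric inputs in the test are invented examples.

```python
import math

# Sketch of formulas (4) and (5). In the XY system of FIG. 5 the front
# wheels sit at (Rmin, AC) and (Rmin + AB, AC), so their middle point is
# (Rmin + AB/2, AC) and its turning radius follows by Pythagoras
# (this reconstruction is an assumption based on FIG. 5).

def mid_front_radius(r_min, wheel_tread_ab, wheel_base_ac):
    """Rmid: radius of the movement locus of the middle point E."""
    return math.hypot(r_min + wheel_tread_ab / 2.0, wheel_base_ac)

def central_angle(speed_v, period_t, r_mid):
    """beta = V*T / Rmid, from the arc-length formula (formula (5))."""
    return speed_v * period_t / r_mid
```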
Furthermore, an X'Y' rectangular coordinate system is created with OA' as an X' axis, and a direction that is upward perpendicular to OA' as a Y' axis. It can be known that the coordinates of the points A', B', and C' in the X'Y' rectangular coordinate system are A' (Rmin, 0) , B' (Rmin+A'B', 0) , and C' (Rmin, A'C') respectively.
Even further, a perpendicular line of OA' is drawn through the point A, and it can be known that a position of the point A in the X'Y' coordinate system is A (Rmin*cosθ, -Rmin*sinθ) . Then coordinates of B and C in the X'Y' coordinate system may be obtained according to the coordinates of the point A and the central angle θ (that is, β) by which the vehicle turns, as the following:
B: (A.x + AB*cosθ, A.y - AB*sinθ) , and C: (A.x + AC*sinθ, A.y + AC*cosθ) , where A.x = Rmin*cosθ, and A.y = -Rmin*sinθ.
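The coordinate expressions above can be checked with a short sketch; θ = 0 must reproduce the current-state positions (Rmin, 0), (Rmin+AB, 0), and (Rmin, AC), which makes a useful sanity test. All numeric inputs are example values.

```python
import math

# Sketch of the previous-state wheel positions A, B, C expressed in the
# current-state X'Y' frame, per the expressions above; the vehicle has
# turned by the central angle theta between the two states.

def previous_state_points(r_min, ab, ac, theta):
    s, c = math.sin(theta), math.cos(theta)
    a = (r_min * c, -r_min * s)
    b = (a[0] + ab * c, a[1] - ab * s)   # B = A shifted along the old X axis
    cc = (a[0] + ac * s, a[1] + ac * c)  # C = A shifted along the old Y axis
    return a, b, cc
```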
In an embodiment of the present disclosure, the mapping relationship acquisition module 30 calculates the position mapping relationship according to the first coordinates and the second coordinates in an affine transformation manner, a perspective transformation manner, or a four-point bilinear interpolation manner.
An affine transformation manner is used as an example for description below. When the coordinates of the three points A, B, and C in the X'Y' coordinate system in the previous state of the vehicle and the corresponding coordinates of A', B', and C' in the current state are known, the six coefficients in the affine transformation relational expression may be obtained, where the affine transformation relational expression is as shown in formulas (6) and (7) . Values of a1, b1, c1, a2, b2, and c2 may be obtained by substituting the foregoing coordinates of the three pairs of points into formulas (6) and (7) . In this way, the position mapping relationship between the history panoramic image in the previous state and the panoramic image in the current state of the vehicle is obtained.
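The six affine coefficients can be solved from three point correspondences with plain linear algebra; the sketch below applies Cramer's rule to the 3×3 system that each output coordinate of formulas (6) and (7) induces. The convention chosen here (mapping current-state points to previous-state points) is one possible reading of the text.

```python
# Minimal sketch of solving the six coefficients a1, b1, c1, a2, b2, c2
# of an affine mapping x = a1*x' + b1*y' + c1, y = a2*x' + b2*y' + c2
# from three point correspondences, using Cramer's rule on the 3x3
# system [x' y' 1] * [a b c]^T = target for each output coordinate.

def solve_affine(src, dst):
    """src: three (x', y') points in the current state;
    dst: the corresponding (x, y) points in the previous state.
    Returns (a1, b1, c1, a2, b2, c2)."""
    (x0, y0), (x1, y1), (x2, y2) = src
    det = x0 * (y1 - y2) - y0 * (x1 - x2) + (x1 * y2 - x2 * y1)

    def solve_row(t0, t1, t2):
        # Cramer's rule: replace one column of the system matrix with the
        # target vector (t0, t1, t2) and divide determinants.
        a = (t0 * (y1 - y2) - y0 * (t1 - t2) + (t1 * y2 - t2 * y1)) / det
        b = (x0 * (t1 - t2) - t0 * (x1 - x2) + (x1 * t2 - x2 * t1)) / det
        c = (x0 * (y1 * t2 - y2 * t1) - y0 * (x1 * t2 - x2 * t1)
             + t0 * (x1 * y2 - x2 * y1)) / det
        return a, b, c

    a1, b1, c1 = solve_row(dst[0][0], dst[1][0], dst[2][0])
    a2, b2, c2 = solve_row(dst[0][1], dst[1][1], dst[2][1])
    return a1, b1, c1, a2, b2, c2
```

A perspective transformation would instead need four correspondences and an 8-unknown system; the affine case above suffices when the ground plane is viewed top-down, as in a stitched panoramic image.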
In an embodiment of the present disclosure, the generation module 40 is configured to: calculate, according to the position mapping relationship, positions of all points in the area under the vehicle in the current state that correspond to the previous state of the vehicle; and generate the image of the area under the vehicle in the panoramic image in the current state of the vehicle according to a history panoramic image of the positions that correspond to the previous state of the vehicle.
In an embodiment, the affine transformation relational expression is still used as an example. After the mapping relationship acquisition module 30 obtains the six coefficients in the affine transformation relational expression, the generation module 40 performs affine transformation on all the points in the area under the vehicle according to the expressions shown in (6) and (7) , and obtains coordinates of points in a history state (that is, the previous state) that correspond to all the points in the current state. Then, the generation module 40 uses the points in the history state (that is, the previous state) that correspond to all the points in the current state to pad the points in the area in the current state of the vehicle, so as to complete a process of re-stitching and displaying.
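The re-stitching step can be sketched as follows. Pixel storage is modelled as a dictionary keyed by integer coordinates purely for illustration (a real implementation would operate on image buffers), and `coeffs` stands for the six affine coefficients whose computation is described above.

```python
# Sketch of the padding performed by the generation module: every point
# of the under-vehicle area in the current frame is mapped through the
# affine relationship into the previous frame, and the history pixel
# found there is copied in. Images are modelled as dicts of
# (x, y) -> pixel value for illustration only.

def pad_under_vehicle(under_region, history_image, coeffs):
    a1, b1, c1, a2, b2, c2 = coeffs
    padded = {}
    for x, y in under_region:
        hx = int(round(a1 * x + b1 * y + c1))   # position in previous frame
        hy = int(round(a2 * x + b2 * y + c2))
        if (hx, hy) in history_image:           # was visible around the body
            padded[(x, y)] = history_image[(hx, hy)]
    return padded
```

Points whose pre-image falls outside the history panorama are simply skipped, which matches the gradual padding behaviour noted later: the under-vehicle image completes itself as the vehicle keeps moving.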
In the related art, when the vehicle is traveling, the vehicle shown in the panoramic image displayed in the vehicle is an opaque logo icon, and no information on the area under the vehicle can be obtained, as shown in FIG. 6 for example. When the apparatus for generating an image of an area under a vehicle in this embodiment of the present disclosure is used, the opacity of the logo icon for the vehicle may be changed to show the image of the area under the vehicle, so as to display the blind area under the vehicle body. A displaying effect is shown in FIG. 7 for example.
In the foregoing description of the embodiments, a case in which the vehicle body moves forward and turns left is used as an example, and principles in cases in which the vehicle body moves forward and turns right, moves backward and turns left, and moves backward and turns right are the same as the foregoing principle, and are not described herein.
In addition, it is to be noted that, as shown in FIG. 2, when the vehicle moves from the A state to the B state, the shaded area M in the B state may be padded by the image of the area around the vehicle body in the A state. As the vehicle continues moving, the image of the area under the vehicle is gradually padded to be complete.
According to the apparatus for generating an image of an area under a vehicle in the embodiments of the present disclosure, the speed and the steering wheel angle in the current state of the vehicle are acquired by the traveling information acquisition module; the history panoramic image in the previous state of the vehicle is acquired by the history information acquisition module; the position mapping relationship between the history panoramic image and the panoramic image in the current state is obtained by the mapping relationship acquisition module according to the speed and the steering wheel angle; and the image of the area under the vehicle in the panoramic image in the current state of the vehicle is generated by the generation module according to the position mapping relationship and the history panoramic image. By the apparatus, a range displayed through panoramic image stitching is extended, so that image information can also be displayed for the area under the vehicle body that is invisible for a camera, which improves safety during driving, enriches panoramic displaying functions, and improves user experience.
To implement the foregoing embodiments, the present disclosure further provides a vehicle. The vehicle includes the apparatus for generating an image of an area under a vehicle in the embodiments of the present disclosure.
Because the vehicle in this embodiment of the present disclosure is equipped with the apparatus for generating an image of an area under a vehicle, a range displayed through panoramic image stitching is extended, so that image information can also be displayed for the area under the vehicle body that is invisible for a camera, which improves safety during driving, enriches panoramic displaying functions, and improves user experience.
In the description of the present disclosure, it is to be understood that, orientation or position  relationships indicated by terms "center" , "longitudinal" , "lateral" , "length" , "width" , "thickness" , "upper" , "lower" , "front" , "back" , "left" , "right" , "vertical" , "horizontal" , "top" , "bottom" , "internal" , "external" , "clockwise" , "anticlockwise" , "axial" , "radial" , "circumferential" , and the like are orientation or position relationships shown in the accompanying drawings, and are for purpose of convenient and simplified description of the present disclosure, rather than for indicating or implying that indicated apparatuses or elements need to be in a particular orientation, or configured and operated in a particular orientation, and therefore should not be understood as limitation to the present disclosure.
In addition, terms "first" and "second" are merely for purpose of description, and should not be understood as indicating or implying relative importance or implicitly specifying a quantity of indicated technical features. Therefore, features limited by "first" and "second" may explicitly or implicitly include at least one feature. In the description of the present disclosure, unless explicitly or specifically specified otherwise, meaning of "multiple" is at least two, for example, two, three, or the like.
In the present disclosure, unless explicitly specified or limited otherwise, terms "mount" , "connected" , "connect" , "fix" , and the like should be understood broadly. For example, a connection may be a fixed connection, or may be a detachable connection, or may be integrated; the connection may be a mechanical connection, or may be an electrical connection; the connection may be a direct connection, or may be an indirect connection through an intermediate medium; and the connection may be an internal connection between two elements or an interactional relationship between two elements, unless explicitly specified otherwise. A person of ordinary skill in the art can understand specific meanings of the foregoing terms in the present disclosure according to specific situations.
In the present disclosure, unless explicitly specified or limited otherwise, a first feature is "above" or "below" a second feature may indicate that the first feature and the second feature contact directly, or that the first feature and the second feature contact through an intermediate medium. Moreover, a first feature is "above" , "over" , or "on" a second feature may indicate that the first feature is right above or slantways above the second feature, or merely indicate that the first feature is higher than the second feature. Moreover, a first feature is "below" or "under" a second feature may indicate that the first feature is right below or slantways below the second feature, or merely indicate that the first feature is lower than the second feature.
In the description of the present disclosure, reference terms "an embodiment" , "some embodiments" , "example" , "specific example" , "some examples" , and the like mean that specific characteristics, structures, materials, or "features" described with reference to the embodiment or example are included in at least one embodiment or example of the present disclosure. In this specification, referring expressions for the foregoing terms do not necessarily mean a same embodiment or example. Moreover, the described specific characteristics, structures, materials, or "features" may be combined in an appropriate manner in any one embodiment or multiple embodiments. In addition, without contradictions, a person skilled in the art may join or combine different embodiments or examples or characteristics of different embodiments or examples described in this specification.
Although the embodiments of the present disclosure have been shown and described above, it can be understood that, the foregoing embodiments are exemplary, and should not be understood as limitation to the present disclosure. A person of ordinary skill in the art may make changes, modifications, replacements, and variations to the foregoing embodiments within the scope of the present disclosure.

Claims (23)

  1. A method for generating an image of an area under a vehicle, comprising:
    acquiring a speed and a steering wheel angle in a current state of the vehicle;
    acquiring a history panoramic image in a previous state of the vehicle;
    obtaining a position mapping relationship between the history panoramic image and a panoramic image in the current state according to the speed and the steering wheel angle in the current state; and
    generating the image of the area under the vehicle in the panoramic image in the current state of the vehicle according to the position mapping relationship and the history panoramic image.
  2. The method according to claim 1, wherein obtaining a position mapping relationship between the history panoramic image and a panoramic image in the current state according to the speed and the steering wheel angle in the current state comprises:
    obtaining a vehicle wheel angle in the current state according to the steering wheel angle in the current state;
    obtaining a central angle by which the vehicle turns from the previous state to the current state according to the vehicle wheel angle and the speed;
    creating a coordinate system in the current state of the vehicle according to the vehicle wheel angle;
    acquiring first coordinates of at least three vehicle wheel positions in the coordinate system in the current state of the vehicle, and obtaining second coordinates of the at least three vehicle wheel positions in the coordinate system in the previous state of the vehicle according to the central angle; and
    calculating the position mapping relationship according to the first coordinates and the second coordinates.
  3. The method according to claim 2, wherein obtaining a central angle by which the vehicle turns from the previous state to the current state according to the vehicle wheel angle and the speed comprises:
    acquiring a minimum turning radius of the vehicle according to the vehicle wheel angle;
    acquiring a radius corresponding to a movement locus of a middle point between front wheels of the vehicle according to the minimum turning radius of the vehicle; and
    obtaining the central angle by which the vehicle turns from the previous state to the current  state according to the radius and the speed.
  4. The method according to claim 3, wherein the coordinate system is a rectangular coordinate system, an origin of the rectangular coordinate system is obtained according to the vehicle wheel angle, an X axis is in a direction of the minimum turning radius, and a Y axis passes through the origin and is upward perpendicular to the X axis.
  5. The method according to any one of claims 2 to 4, wherein the vehicle wheel angle is calculated according to formulas of
    θr = -0.21765 - 0.05796ω + 9.62064*10^(-6)ω^2 - 1.63785*10^(-8)ω^3, and
    θl = 0.22268 - 0.05814ω - 9.89364*10^(-6)ω^2 - 1.76545*10^(-8)ω^3,
    where θr is a vehicle wheel angle of a right wheel relative to a vehicle body when the vehicle turns right, θl is a vehicle wheel angle of a left wheel relative to the vehicle body when the vehicle turns left, and ω is the steering wheel angle.
  6. The method according to claim 5, wherein the minimum turning radius Rmin of the vehicle is calculated according to a formula of
    Rmin = AC*cotα,
    where AC is a wheel base of the vehicle, and α is the vehicle wheel angle; wherein when the vehicle turns right, α = θr, and when the vehicle turns left, α = θl.
  7. The method according to claim 6, wherein the radius Rmid corresponding to the movement locus of the middle point between the front wheels of the vehicle is calculated according to a formula of
    Rmid = sqrt((Rmin + AB/2)^2 + AC^2),
    where AC is the wheel base of the vehicle, AB is a wheel tread of the vehicle, and Rmin is the minimum turning radius of the vehicle.
  8. The method according to claim 7, wherein the central angle β by which the vehicle turns from the previous state to the current state is calculated according to a formula of
    β = V*T/Rmid,
    where Rmid is the radius corresponding to the movement locus of the middle point between the front wheels of the vehicle, V is the speed of the vehicle, and T is a period of time taken by the vehicle from the previous state to the current state.
  9. The method according to any one of claims 1 to 8, wherein the position mapping  relationship is calculated according to the first coordinates and the second coordinates in an affine transformation manner, a perspective transformation manner, or a four-point bilinear interpolation manner.
  10. The method according to any one of claims 1 to 9, wherein generating the image of the area under the vehicle in the panoramic image in the current state of the vehicle according to the position mapping relationship and the history panoramic image comprises:
    calculating positions of all points in the area under the vehicle in the current state that correspond to the previous state of the vehicle according to the position mapping relationship; and
    generating the image of the area under the vehicle in the panoramic image in the current state of the vehicle according to a history panoramic image of the positions that correspond to the previous state of the vehicle.
  11. An apparatus for generating an image of an area under a vehicle, comprising:
    a traveling information acquisition module, configured to acquire a speed and a steering wheel angle in a current state of the vehicle;
    a history information acquisition module, configured to acquire a history panoramic image in a previous state of the vehicle;
    a mapping relationship acquisition module, configured to obtain a position mapping relationship between the history panoramic image and a panoramic image in the current state according to the speed and the steering wheel angle; and
    a generation module, configured to generate the image of the area under the vehicle in the panoramic image in the current state of the vehicle according to the position mapping relationship and the history panoramic image.
  12. The apparatus according to claim 11, wherein the mapping relationship acquisition module is further configured to:
    obtain a vehicle wheel angle in the current state according to the steering wheel angle in the current state;
    obtain a central angle by which the vehicle turns from the previous state to the current state according to the vehicle wheel angle and the speed;
    create a coordinate system in the current state of the vehicle according to the vehicle wheel angle;
    acquire first coordinates of at least three vehicle wheel positions in the coordinate system in  the current state of the vehicle, and obtain second coordinates of the at least three vehicle wheel positions in the coordinate system in the previous state of the vehicle according to the central angle; and
    calculate the position mapping relationship according to the first coordinates and the second coordinates.
  13. The apparatus according to claim 12, wherein the mapping relationship acquisition module is further configured to:
    acquire a minimum turning radius of the vehicle according to the vehicle wheel angle;
    acquire a radius corresponding to a movement locus of a middle point between front wheels of the vehicle according to the minimum turning radius of the vehicle; and
    obtain the central angle by which the vehicle turns from the previous state to the current state according to the radius and the speed.
  14. The apparatus according to claim 13, wherein the coordinate system is a rectangular coordinate system, an origin of the rectangular coordinate system is obtained according to the vehicle wheel angle, and an X axis is in a direction of the minimum turning radius and a Y axis passes through the origin and is upward perpendicular to the X axis.
  15. The apparatus according to any one of claims 11 to 14, wherein the mapping relationship acquisition module is configured to calculate the vehicle wheel angle according to formulas of
    θr = -0.21765 - 0.05796ω + 9.62064*10^(-6)ω^2 - 1.63785*10^(-8)ω^3, and
    θl = 0.22268 - 0.05814ω - 9.89364*10^(-6)ω^2 - 1.76545*10^(-8)ω^3,
    where θr is a vehicle wheel angle of a right wheel relative to a vehicle body when the vehicle turns right, θl is a vehicle wheel angle of a left wheel relative to the vehicle body when the vehicle turns left, and ω is the steering wheel angle.
  16. The apparatus according to claim 15, wherein the mapping relationship acquisition module is configured to calculate the minimum turning radius Rmin of the vehicle according to a formula of
    Rmin = AC*cotα,
    where AC is a wheel base of the vehicle, and α is the vehicle wheel angle, wherein when the vehicle turns right, α = θr, and when the vehicle turns left, α = θl.
  17. The apparatus according to claim 16, wherein the mapping relationship acquisition module is configured to calculate the radius Rmid corresponding to the movement locus of the  middle point between the front wheels of the vehicle according to a formula of
    Rmid = sqrt((Rmin + AB/2)^2 + AC^2),
    where AC is the wheel base of the vehicle, AB is a wheel tread of the vehicle, and Rmin is the minimum turning radius of the vehicle.
  18. The apparatus according to claim 17, wherein the mapping relationship acquisition module is configured to calculate the central angle β by which the vehicle turns from the previous state to the current state according to a formula of
    β = V*T/Rmid,
    where Rmid is the radius corresponding to the movement locus of the middle point between the front wheels of the vehicle, V is the speed of the vehicle, and T is a period of time taken by the vehicle from the previous state to the current state.
  19. The apparatus according to any one of claims 11 to 18, wherein the generation module is further configured to:
    calculate positions of all points in the area under the vehicle in the current state that correspond to the previous state of the vehicle according to the position mapping relationship; and
    generate the image of the area under the vehicle in the panoramic image in the current state of the vehicle according to a history panoramic image of the positions that correspond to the previous state of the vehicle.
  20. A vehicle, comprising the apparatus for generating an image of an area under a vehicle according to any one of claims 11 to 19.
  21. An electronic device, comprising:
    a shell;
    a processor;
    a memory;
    a circuit board; and
    a power supply circuit, wherein the circuit board is located in a space formed by the shell, and the processor and the memory are arranged on the circuit board; the power supply circuit is configured to supply power for each circuit or component of the electronic device; the memory is configured to store executable program codes; and the processor is configured to execute a program corresponding to the executable program codes by reading the executable program codes stored in the memory, so as to perform the method according to any one of claims 1 to 10.
  22. A storage medium having one or more modules stored therein, wherein when the one or more modules are executed, the method according to any one of claims 1 to 10 is performed.
  23. An application program configured to perform the method according to any one of claims 1 to 10 when executed.
PCT/CN2016/102825 2015-10-22 2016-10-21 Method and apparatus for generating image of area under vehicle, and vehicle WO2017067495A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201510690780.1A CN106608220B (en) 2015-10-22 2015-10-22 Generation method, device and the vehicle of vehicle bottom image
CN201510690780.1 2015-10-22

Publications (1)

Publication Number Publication Date
WO2017067495A1 true WO2017067495A1 (en) 2017-04-27

Family

ID=58556682

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/102825 WO2017067495A1 (en) 2015-10-22 2016-10-21 Method and apparatus for generating image of area under vehicle, and vehicle

Country Status (2)

Country Link
CN (1) CN106608220B (en)
WO (1) WO2017067495A1 (en)


Families Citing this family (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107396057B * 2017-08-22 2019-12-20 纵目科技(厦门)有限公司 Method for stitching three-dimensional panoramic images based on five vehicle-mounted camera viewing angles
CN109532714B (en) * 2017-09-21 2020-10-23 比亚迪股份有限公司 Method and system for acquiring vehicle bottom image and vehicle
CN109552174B (en) * 2017-09-26 2024-08-16 纵目科技(上海)股份有限公司 Full-view camera host control unit
CN109552173A (en) * 2017-09-26 2019-04-02 纵目科技(上海)股份有限公司 Full visual field camera engine control system
CN108312966A * 2018-02-26 2018-07-24 江苏裕兰信息科技有限公司 Panoramic surround-view system including a vehicle underbody image and implementation method thereof
CN110246358A * 2018-03-08 2019-09-17 比亚迪股份有限公司 Method, vehicle and system for locating the parking space where a vehicle is located
CN110246359A * 2018-03-08 2019-09-17 比亚迪股份有限公司 Method, vehicle and system for locating the parking space where a vehicle is located
CN108909625B (en) * 2018-06-22 2021-09-17 河海大学常州校区 Vehicle bottom ground display method based on panoramic all-round viewing system
CN108810417A * 2018-07-04 2018-11-13 深圳市歌美迪电子技术发展有限公司 Image processing method, mechanism and rearview mirror
CN110969574A (en) * 2018-09-29 2020-04-07 广州汽车集团股份有限公司 Vehicle-mounted panoramic map creation method and device
CN112215747A (en) * 2019-07-12 2021-01-12 杭州海康威视数字技术股份有限公司 Method and device for generating vehicle-mounted panoramic picture without vehicle bottom blind area and storage medium
CN110458884A * 2019-08-16 2019-11-15 北京茵沃汽车科技有限公司 Method, apparatus and medium for generating a vehicle operation state trajectory line in a panoramic view
CN110503660A * 2019-08-21 2019-11-26 东软睿驰汽车技术(沈阳)有限公司 Vehicle orientation recognition method, device, simulator and unmanned driving simulation method
CN111959397B (en) * 2020-08-24 2023-03-31 北京茵沃汽车科技有限公司 Method, system, device and medium for displaying vehicle bottom image in panoramic image
CN112488995B (en) * 2020-11-18 2023-12-12 成都主导软件技术有限公司 Intelligent damage judging method and system for automatic maintenance of train
CN113850881A (en) * 2021-08-31 2021-12-28 湖北亿咖通科技有限公司 Image generation method, device, equipment and readable storage medium
CN114162048A (en) * 2021-12-08 2022-03-11 上海寅家电子科技股份有限公司 System and method for ensuring safe driving of vehicle

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004026115A * 2002-06-28 2004-01-29 Nissan Motor Co Ltd Blind spot monitoring device for vehicle
CN1473433A * 2001-06-13 2004-02-04 株式会社电装 Peripheral image processor of vehicle and recording medium
JP2004064441A (en) * 2002-07-29 2004-02-26 Sumitomo Electric Ind Ltd Onboard image processor and ambient monitor system
CN1629930A (en) * 2003-12-17 2005-06-22 株式会社电装 Vehicle information display system
CN101204957A * 2006-12-20 2008-06-25 财团法人工业技术研究院 Lane departure warning method and device
CN104335576A (en) * 2012-05-31 2015-02-04 罗伯特·博世有限公司 Device and method for recording images of a vehicle underbody

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2001002181A (en) * 1999-06-28 2001-01-09 Vantec Corp Side wall face frame structure for container box
JP4021662B2 (en) * 2001-12-28 2007-12-12 松下電器産業株式会社 Driving support device, image output device, and rod with camera
CN101945257B (en) * 2010-08-27 2012-03-28 南京大学 Synthesis method for extracting chassis image of vehicle based on monitoring video content
DE102012211791B4 (en) * 2012-07-06 2017-10-12 Robert Bosch Gmbh Method and arrangement for testing a vehicle underbody of a motor vehicle
CN103072528A (en) * 2013-01-30 2013-05-01 深圳市汉华安道科技有限责任公司 Vehicle and panoramic parking method and system thereof
CN103661599B * 2013-12-04 2016-01-06 奇瑞汽车股份有限公司 Vehicle turning trajectory prediction system and method


Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107888894A * 2017-10-12 2018-04-06 浙江零跑科技有限公司 Stereoscopic vehicle-mounted surround-view method, system and vehicle-mounted central control device
CN107888894B * 2017-10-12 2019-11-05 浙江零跑科技有限公司 Stereoscopic vehicle-mounted surround-view method, system and vehicle-mounted central control device
DE102020209513A1 (en) 2020-07-29 2022-02-03 Volkswagen Aktiengesellschaft Automatic inspection of an area under a motor vehicle
CN114312577A * 2022-02-17 2022-04-12 镁佳(北京)科技有限公司 Vehicle chassis see-through method and device, and electronic device
CN114312577B * 2022-02-17 2022-11-29 镁佳(北京)科技有限公司 Vehicle chassis see-through method and device, and electronic device

Also Published As

Publication number Publication date
CN106608220A (en) 2017-05-03
CN106608220B (en) 2019-06-25

Similar Documents

Publication Publication Date Title
WO2017067495A1 (en) Method and apparatus for generating image of area under vehicle, and vehicle
JP7448921B2 (en) Rear stitched view panorama for rear view visualization
US9367964B2 (en) Image processing device, image processing method, and program for display of a menu on a ground surface for selection with a user's foot
US8933966B2 (en) Image processing device, image processing method and program
CN103841332B (en) Panorama scene is shot and the mobile device of browsing, system and method
CN109313799B (en) Image processing method and apparatus
CN111353930B (en) Data processing method and device, electronic equipment and storage medium
KR20220092928A (en) Point cloud labeling methods, devices, electronic devices, storage media and program products
US8699749B2 (en) Computer-readable storage medium, image processing apparatus, image processing system, and image processing method
US20190043160A1 (en) Equatorial stitching of hemispherical images in a spherical image capture system
WO2018132231A1 (en) Apparatus and methods for the storage of overlapping regions of imaging data for the generation of optimized stitched images
US20150077591A1 (en) Information processing device and information processing method
EP2492873B1 (en) Image processing program, image processing apparatus, image processing system, and image processing method
JPWO2006093250A1 (en) Motion measuring device, motion measuring system, vehicle-mounted device, motion measuring method, motion measuring program, and computer-readable recording medium
CN106997579B (en) Image splicing method and device
CN110599593B (en) Data synthesis method, device, equipment and storage medium
US9852494B2 (en) Overhead image generation apparatus
Zhang et al. A novel absolute localization estimation of a target with monocular vision
CN107404615A (en) Picture recording method and electronic equipment
CN114881863B (en) Image splicing method, electronic equipment and computer readable storage medium
da Silveira et al. Omnidirectional visual computing: Foundations, challenges, and applications
Chew et al. Panorama stitching using overlap area weighted image plane projection and dynamic programming for visual localization
CN103489165B (en) A kind of decimal towards video-splicing searches table generating method
CN111179341A (en) Registration method of augmented reality equipment and mobile robot
CN106295570B (en) Filtration system and method are blocked in interaction

Legal Events

Date Code Title Description
121 EP: The EPO has been informed by WIPO that EP was designated in this application

Ref document number: 16856926

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 EP: PCT application non-entry in the European phase

Ref document number: 16856926

Country of ref document: EP

Kind code of ref document: A1