WO2020133172A1 - Image processing method, apparatus and computer-readable storage medium

Image processing method, apparatus and computer-readable storage medium

Info

Publication number
WO2020133172A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
matrix
head
rotation
target
Prior art date
Application number
PCT/CN2018/124726
Other languages
English (en)
Chinese (zh)
Inventor
崔健
Original Assignee
深圳市大疆创新科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市大疆创新科技有限公司 filed Critical 深圳市大疆创新科技有限公司
Priority to CN201880068957.6A priority Critical patent/CN111279354A/zh
Priority to PCT/CN2018/124726 priority patent/WO2020133172A1/fr
Publication of WO2020133172A1 publication Critical patent/WO2020133172A1/fr

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/588 Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/18 Image warping, e.g. rearranging pixels individually
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06V20/582 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of traffic signs

Definitions

  • Embodiments of the present invention relate to the field of image processing technology, and in particular, to an image processing method, device, and computer-readable storage medium.
  • In driving assistance systems, lane line algorithms play an important role.
  • The accuracy of the lane line algorithm directly affects the performance and reliability of the system.
  • The lane line algorithm is an important prerequisite for automatic driving control of the car.
  • The lane line algorithm is divided into two levels: one is the detection of the lane line, and the other is the positioning of the lane line, that is, calculating the actual positional relationship between the lane line and the car.
  • The traditional lane line detection algorithm collects a head-up image through the shooting device and uses the head-up image to detect the lane line.
  • The traditional lane line positioning algorithm likewise collects a head-up image through the shooting device and uses the head-up image to locate the lane line.
  • However, the detection result is inaccurate: the size and shape of the lane line in the head-up image are subject to perspective projection, which makes near objects appear large and far objects appear small, so distant road surface markers are distorted in shape and cannot be detected correctly.
  • The positioning result is also inaccurate: the shape and size of road surface markers in the head-up image are coupled with the camera's internal parameters and the positional relationship between the camera and the road surface, so the actual position of the lane line cannot be read directly from its position in the image.
  • The invention provides an image processing method, device and computer-readable storage medium, which can improve the accuracy of lane line detection and accurately locate the actual positional relationship between the lane line and the vehicle.
  • A driving assistance device is provided, including at least one photographing device, a processor, and a memory; the driving assistance device is provided on a vehicle and communicates with the vehicle; the memory is configured to store computer instructions executable by the processor.
  • the photographing device is configured to collect a head-up image including a target object, and send the head-up image including the target object to the processor;
  • The processor is configured to read computer instructions from the memory to implement: determining a space plane corresponding to the target object; determining the relative posture of the space plane and the photographing device; and converting the head-up image into a top-down image according to the relative posture.
  • a vehicle equipped with a driving assistance system includes at least one camera, a processor, and a memory.
  • the memory is used to store computer instructions executable by the processor.
  • the shooting device is used to collect a head-up image containing a target object, and send the head-up image containing a target object to the processor;
  • The processor is configured to read computer instructions from the memory to implement: determining a space plane corresponding to the target object; determining the relative posture of the space plane and the camera; and converting the head-up image into a top-down image according to the relative posture.
  • an image processing method is provided, which is applied to a driving assistance system.
  • the driving assistance system includes at least one photographing device.
  • The method includes: obtaining a head-up image containing a target object through the photographing device; determining a space plane corresponding to the target object; determining the relative posture of the space plane and the photographing device; and converting the head-up image into a top-down image according to the relative posture.
  • a computer-readable storage medium is provided.
  • Computer instructions are stored on the computer-readable storage medium. When the computer instructions are executed, the above method is implemented.
  • the accuracy of the lane line detection can be improved, and the actual positional relationship between the lane line and the vehicle can be accurately located.
  • the head-up image can be converted into a bird's-eye view image, and the bird's-eye view image can be used to detect the lane line, thereby improving the accuracy of the lane line detection result.
  • the head-up image can be converted into a bird's-eye view image, and the bird's-eye view image is used to locate the lane line, thereby improving the accuracy of the lane line positioning result and accurately knowing the actual position of the lane line.
  • FIG. 1 is a schematic diagram of an example of an image processing method in an embodiment
  • FIG. 2 is a schematic diagram of an example of an image processing method in another embodiment
  • FIG. 3 is a schematic diagram of an example of an image processing method in another embodiment
  • FIG. 4A is a schematic diagram of a head-up image and a top-down image of an image processing method in an embodiment
  • FIG. 4B is a schematic diagram of the relationship between the target object, the space plane and the camera in an embodiment
  • FIG. 5 is a block diagram of an example of a driving assistance device in an embodiment.
  • Although the terms first, second, third, etc. may be used in the present invention to describe various information, the information should not be limited by these terms. These terms are only used to distinguish information of the same type from each other.
  • For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information.
  • Depending on the context, the word "if" may be interpreted as "when", "upon", or "in response to a determination".
  • An embodiment of the present invention proposes an image processing method, which can be applied to a driving assistance system, and the driving assistance system may include at least one photographing device.
  • The driving assistance system may be mounted on a mobile platform (such as an unmanned vehicle or an ordinary vehicle), or the driving assistance system may be mounted on driving assistance equipment (such as ADAS equipment), where the driving assistance equipment is installed on a mobile platform (such as an unmanned vehicle or an ordinary vehicle).
  • The above are only two examples of application scenarios; the driving assistance system may also be carried on other vehicles, which is not limited herein.
  • the method may include:
  • Step 101 Obtain a head-up image containing a target object through a camera.
  • The at least one shooting device is installed on the mobile platform, and a head-up image in at least one of the front, rear, left, or right directions of the mobile platform can be acquired through the shooting device; the head-up image contains the target object.
  • Alternatively, the at least one imaging device is provided in the driving assistance device, and a head-up image in at least one of the front, rear, left, or right directions of the driving assistance device can be acquired through the imaging device; the head-up image contains the target object.
  • Step 102 Determine a space plane corresponding to the target object.
  • the first posture information of the mobile platform (that is, the current posture information of the mobile platform) may be acquired, and the space plane may be determined according to the first posture information.
  • the space plane refers to the position plane of the target object (such as road surface or ground) in the world coordinate system, that is, the position of the space plane in the world coordinate system.
  • the second posture information of the driving assistance device (that is, the current posture information of the driving assistance device) may be acquired, and the space plane may be determined according to the second posture information.
  • the space plane refers to the position plane of the target object (such as road surface or ground) in the world coordinate system, that is, the position of the space plane in the world coordinate system.
  • Step 103 Determine the relative posture of the space plane and the shooting device.
  • The relative posture refers to the posture of the shooting device relative to the space plane (such as the road surface or ground), and can also be understood as the external parameter (that is, the positional relationship) of the shooting device relative to the space plane.
  • The relative posture may include, but is not limited to: the pitch angle of the camera relative to the space plane, the roll angle of the camera relative to the space plane, the yaw angle of the camera relative to the space plane, the height of the camera relative to the space plane, and the translation parameter of the camera relative to the space plane.
  • Step 104 Convert the head-up image to the top-down image according to the relative posture.
  • Specifically, the projection matrix corresponding to the head-up image can be obtained according to the relative posture; for example, the target rotation matrix can be determined according to the relative posture, the target rotation parameter can be obtained according to the target rotation matrix, and the projection matrix corresponding to the head-up image can be obtained according to the relative posture and the target rotation parameter. Then, the head-up image can be converted into a bird's-eye view image according to the projection matrix.
  • The relative posture includes the rotation angle of the camera on the pitch axis (that is, the pitch angle of the camera relative to the space plane), the rotation angle on the roll axis (that is, the roll angle of the camera relative to the space plane), and the rotation angle on the yaw axis (that is, the yaw angle of the camera relative to the space plane). Based on this, determining the target rotation matrix according to the relative posture may include, but is not limited to: determining the first rotation matrix according to the rotation angle of the camera on the pitch axis; determining the second rotation matrix according to the rotation angle of the camera on the roll axis; determining the third rotation matrix according to the rotation angle of the camera on the yaw axis; and determining the target rotation matrix according to the first rotation matrix, the second rotation matrix, and the third rotation matrix, as sketched below.
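  • As a minimal illustrative sketch (not the literal implementation of this application), the three axis rotation matrices and their composition might look as follows in Python with NumPy; the axis assignments follow the description above, while the composition order R = Rz·Ry·Rx is an assumption, since the text only states that R is determined from the three matrices.

```python
import numpy as np

def target_rotation_matrix(pitch: float, roll: float, yaw: float) -> np.ndarray:
    """Compose the target rotation matrix R from the camera's rotation
    angles (in radians) on the pitch, roll, and yaw axes."""
    # First rotation matrix: determined by the rotation angle on the pitch axis.
    rx = np.array([[1.0, 0.0, 0.0],
                   [0.0, np.cos(pitch), -np.sin(pitch)],
                   [0.0, np.sin(pitch), np.cos(pitch)]])
    # Second rotation matrix: determined by the rotation angle on the roll axis.
    ry = np.array([[np.cos(roll), 0.0, np.sin(roll)],
                   [0.0, 1.0, 0.0],
                   [-np.sin(roll), 0.0, np.cos(roll)]])
    # Third rotation matrix: determined by the rotation angle on the yaw axis.
    rz = np.array([[np.cos(yaw), -np.sin(yaw), 0.0],
                   [np.sin(yaw), np.cos(yaw), 0.0],
                   [0.0, 0.0, 1.0]])
    # Composition order is an assumption made for this sketch.
    return rz @ ry @ rx
```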
  • The target rotation matrix may include three column vectors, and obtaining the target rotation parameter according to the target rotation matrix may include, but is not limited to: determining the first column vector in the target rotation matrix as the first rotation parameter, and determining the second column vector in the target rotation matrix as the second rotation parameter; the first rotation parameter and the second rotation parameter are determined as the target rotation parameter.
  • The relative posture also includes the translation parameter of the shooting device relative to the space plane, and obtaining the projection matrix according to the relative posture and the target rotation parameter may include, but is not limited to: obtaining the projection matrix according to the target rotation parameter, the normalization coefficient, the internal parameter matrix of the shooting device, and the translation parameter of the shooting device relative to the space plane, as sketched below.
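  • Continuing the sketch above, the projection matrix could be assembled as follows; treating the normalization coefficient as a simple scalar divisor is an assumption made for illustration.

```python
def projection_matrix(r: np.ndarray, t: np.ndarray, m: np.ndarray,
                      s: float = 1.0) -> np.ndarray:
    """Build the projection matrix H from the target rotation parameters
    (the first two columns of the target rotation matrix R), the
    translation parameter t, the camera internal parameter matrix M,
    and the normalization coefficient s."""
    r1 = r[:, 0]  # first rotation parameter: first column vector of R
    r2 = r[:, 1]  # second rotation parameter: second column vector of R
    # H maps a plane point [X, Y, 1] to a head-up image pixel [u, v, 1]
    # up to scale.
    return (m @ np.column_stack((r1, r2, t))) / s
```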
  • Converting the head-up image into a bird's-eye view image according to the projection matrix may include, but is not limited to: for each first pixel in the head-up image, converting the position information of the first pixel into the position information of the second pixel in the overhead image according to the projection matrix; based on this, the overhead image can be obtained according to the position information of each second pixel.
  • Converting the position information of the first pixel into the position information of the second pixel in the bird's-eye view image according to the projection matrix may include, but is not limited to: obtaining the inverse matrix corresponding to the projection matrix, and converting the position information of the first pixel into the position information of the second pixel in the bird's-eye view image according to the inverse matrix; that is, each first pixel corresponds to a second pixel. A per-pixel sketch of this mapping follows.
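  • The following sketch walks the head-up image pixel by pixel and maps each first pixel through the inverse of the projection matrix into the top-down image, as described above; the plane-unit-to-pixel scale and the output origin used here are illustrative assumptions, not values from this application.

```python
def head_up_to_top_down(head_up: np.ndarray, h: np.ndarray,
                        out_h: int, out_w: int,
                        px_per_unit: float = 50.0) -> np.ndarray:
    """Map every first pixel (u, v) of the head-up image through the
    inverse of the projection matrix H to a second pixel in the
    top-down image."""
    h_inv = np.linalg.inv(h)
    top_down = np.zeros((out_h, out_w) + head_up.shape[2:], head_up.dtype)
    for v in range(head_up.shape[0]):
        for u in range(head_up.shape[1]):
            x, y, w = h_inv @ np.array([u, v, 1.0])
            if abs(w) < 1e-9:
                continue  # point maps to infinity (near the horizon); skip it
            # Plane coordinates to top-down pixel indices (assumed layout:
            # X grows to the right, Y grows away from the camera).
            col = int(x / w * px_per_unit + out_w / 2)
            row = int(out_h - 1 - y / w * px_per_unit)
            if 0 <= col < out_w and 0 <= row < out_h:
                top_down[row, col] = head_up[v, u]
    return top_down
```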
  • the lane line can be detected based on the bird's-eye image.
  • the lane line can be positioned according to the bird's-eye view image.
  • In this way, the lane line detection can be performed based on the top view image (rather than based on the head-up image) to improve the accuracy of the lane line detection.
  • Likewise, the lane line positioning is performed based on the top view image (rather than based on the head-up image) to improve the accuracy of the lane line positioning.
  • the accuracy of the lane line detection can be improved, and the actual positional relationship between the lane line and the vehicle can be accurately located.
  • the head-up image can be converted into a top-down image, and the top-down image can be used to detect the lane line, thereby improving the accuracy of the lane-line detection result.
  • the head-up image can be converted into a bird's-eye view image, and the bird's-eye view image is used to locate the lane line, thereby improving the accuracy of the lane line positioning result and accurately knowing the actual position of the lane line.
  • An embodiment of the present invention proposes an image processing method, which can be applied to a driving assistance system, and the driving assistance system may include at least one photographing device.
  • the driving assistance system can be mounted on a mobile platform (such as unmanned vehicles, ordinary vehicles, etc.).
  • Of course, the driving assistance system can also be mounted on other vehicles, which is not limited herein.
  • the method may include:
  • Step 201 Obtain a head-up image containing a target object through a camera.
  • the head-up image in at least one direction of the front, back, left, or right direction of the mobile platform may be acquired by the shooting device, and the head-up image includes a target object.
  • Step 202 Determine the space plane corresponding to the target object according to the first pose information of the mobile platform.
  • the first posture information of the mobile platform may be obtained, and the space plane may be determined according to the first posture information.
  • the space plane refers to the position plane of the target object (such as road surface or ground) in the world coordinate system, that is, the position of the space plane in the world coordinate system.
  • As for the process of acquiring the first posture information of the mobile platform: the mobile platform may include a posture sensor; the posture sensor collects the first posture information of the mobile platform and provides the first posture information to the driving assistance system, so that the driving assistance system acquires the first posture information of the mobile platform.
  • the first posture information of the mobile platform can also be obtained in other ways, which is not limited.
  • The attitude sensor is a high-performance three-dimensional motion attitude measurement system, which may include a three-axis gyroscope, a three-axis accelerometer (i.e., an IMU), a three-axis electronic compass, and other auxiliary motion sensors. It outputs calibrated sensor data such as angular velocity, acceleration, and magnetic data through an embedded processor; posture information can then be measured based on the sensor data. There is no restriction on the manner of acquiring posture information.
  • As for the process of determining the space plane corresponding to the target object according to the first posture information: after the first posture information of the mobile platform is obtained, the space plane can be determined according to the first posture information, which is not repeated here.
  • Step 203 Determine the relative posture of the space plane and the shooting device.
  • the relative posture refers to the relative posture of the camera relative to the space plane, and can also be understood as the external parameter (ie, positional relationship) of the camera relative to the space plane.
  • The relative posture may include, but is not limited to: the pitch angle of the camera relative to the space plane, the roll angle of the camera relative to the space plane, the yaw angle of the camera relative to the space plane, the height of the camera relative to the space plane, and the translation of the camera relative to the space plane.
  • Step 204 Acquire a projection matrix corresponding to the head-up image according to the relative posture.
  • a target rotation matrix may be determined according to the relative pose
  • a target rotation parameter may be obtained according to the target rotation matrix
  • a projection matrix corresponding to the head-up image may be obtained according to the relative pose and the target rotation parameter.
  • Step 205 Convert the head-up image to the top-down image according to the projection matrix.
  • For each first pixel in the head-up image, the position information of the first pixel is converted into the position information of the second pixel in the overhead image according to the projection matrix; based on this, the top view image can be obtained according to the position information of each second pixel.
  • Converting the position information of the first pixel into the position information of the second pixel in the bird's-eye view image according to the projection matrix may include, but is not limited to: obtaining the inverse matrix corresponding to the projection matrix, and converting the position information of the first pixel into the position information of the second pixel in the bird's-eye view image according to the inverse matrix; that is, each first pixel corresponds to a second pixel.
  • An embodiment of the present invention proposes an image processing method, which can be applied to a driving assistance system, and the driving assistance system may include at least one photographing device.
  • The driving assistance system can also be mounted on driving assistance equipment (such as ADAS equipment), where the driving assistance equipment is installed on a mobile platform (such as an unmanned vehicle or an ordinary vehicle). Of course, the above is only one application scenario of the present invention; the driving assistance system can also be mounted on other vehicles, and there is no restriction on this.
  • the method may include:
  • Step 301 Obtain a head-up image containing a target object through a camera.
  • the head-up image in at least one direction of the front, rear, left, or right direction of the driving assistance device may be acquired by the shooting device, and the head-up image includes a target object.
  • Step 302 Determine a space plane corresponding to the target object according to the second posture information of the driving assistance device.
  • The space plane refers to the position plane of the target object (that is, the road surface or the ground) in the world coordinate system.
  • the second posture information of the driving assistance device may be acquired, and the space plane may be determined according to the second posture information.
  • the driving assistance device may include a posture sensor, and this posture sensor is used to collect the second posture information of the driving assistance device and provide the second posture information to the driving assistance system, so that the driving assistance system acquires the second posture of the driving assistance device information.
  • Alternatively, the mobile platform may include an attitude sensor; the attitude sensor collects the first attitude information of the mobile platform and provides the first attitude information to the driving assistance system.
  • Since the driving assistance device is installed on the mobile platform, the driving assistance system may use the first attitude information of the mobile platform as the second posture information of the driving assistance device.
  • Of course, the second posture information can also be obtained in other ways, which is not limited.
  • Step 303 Determine the relative posture of the space plane and the shooting device.
  • the relative posture refers to the relative posture of the camera relative to the space plane, and can also be understood as the external parameter (ie, positional relationship) of the camera relative to the space plane.
  • The relative posture may include, but is not limited to: the pitch angle of the camera relative to the space plane, the roll angle of the camera relative to the space plane, the yaw angle of the camera relative to the space plane, the height of the camera relative to the space plane, and the translation of the camera relative to the space plane.
  • Step 304 Obtain a projection matrix corresponding to the head-up image according to the relative posture.
  • a target rotation matrix may be determined according to the relative pose
  • a target rotation parameter may be obtained according to the target rotation matrix
  • a projection matrix corresponding to the head-up image may be obtained according to the relative pose and the target rotation parameter.
  • Step 305 Convert the head-up image to the top-down image according to the projection matrix.
  • For each first pixel in the head-up image, the position information of the first pixel is converted into the position information of the second pixel in the overhead image according to the projection matrix; based on this, the top view image can be obtained according to the position information of each second pixel.
  • Converting the position information of the first pixel into the position information of the second pixel in the bird's-eye view image according to the projection matrix may include, but is not limited to: obtaining the inverse matrix corresponding to the projection matrix, and converting the position information of the first pixel into the position information of the second pixel in the bird's-eye view image according to the inverse matrix; that is, each first pixel corresponds to a second pixel.
  • Embodiment 4: The following description takes the mobile platform being a vehicle and the shooting device being a camera as an example.
  • the traditional lane line algorithm can collect the head-up image through the camera, and use the head-up image to detect and locate the lane line.
  • the image on the left is a schematic diagram of the head-up image.
  • In the head-up image, the road surface arrows and the lane lines are distorted, and their shape depends on the position of the vehicle.
  • Obviously, lane line detection and positioning cannot be performed correctly based on the left head-up image in FIG. 4A.
  • the head-up image is converted into a bird's-eye image, and the bird's-eye image is used to detect and locate the lane line.
  • the image on the right is a schematic diagram of a top-down image.
  • In the top-down image, the road surface marker arrows and the lane lines are restored to their true scale.
  • The positions of points on the road surface correspond directly to their real positions, and the positional relationship between any point and the vehicle can be obtained directly, which meets the requirements of ADAS functions and automatic driving functions. Obviously, lane line detection and positioning can be performed correctly based on the top view image on the right side of FIG. 4A.
  • the accuracy of road surface marker recognition can be improved, and a method for locating road surface markers (including lane lines) can be provided to assist in positioning.
  • In order to convert the head-up image into a top-down image, the conversion can be implemented based on the geometric knowledge of computer vision; that is, the head-up image is converted into a top-down image based on a homography.
  • The shape of the top-down image depends on the true shape of the space plane shown in the head-up image, the internal parameters of the camera, and the external parameters of the camera (that is, the positional relationship of the camera relative to the space plane). Therefore, the pixels in the head-up image can be directly mapped to the top-down image according to the internal and external parameters of the camera, so that the top-down image corresponds to the true scale of the space plane, improving the accuracy of lane line recognition and providing an accurate lane line positioning method.
  • FIG. 4B it is a schematic diagram of the relationship between the target object, the space plane and the camera.
  • the space plane is a plane including the target object, and the plane where the camera is located may be different from the space plane.
  • the target object may be a road (pavement or ground) containing lane lines as shown in the figure
  • The spatial plane may be the plane of the road surface where the target object is located.
  • The actual captured picture of the camera is shown in the lower right corner of FIG. 4B, which corresponds to the head-up image on the left side of FIG. 4A.
  • The homography can be expressed by the first formula below, where (u, v) is a pixel in the head-up image (that is, in the image coordinate system), s is the normalization coefficient, M is the camera internal parameter matrix, and [r1 r2 r3 t] is the external parameter (that is, the positional relationship) of the camera relative to the space plane: r1, r2, and r3 are 3×1 column vectors that together form a rotation matrix, and t is a 3×1 column vector representing the translation of the camera relative to the object plane; the rotation matrix formed by r1, r2, and r3, together with the translation t, constitutes the external parameters of the camera relative to the space plane. (X, Y) is the corresponding point in the space plane, which becomes a pixel in the top-down image.
  • A point in the space plane is in general (X, Y, Z), but since the target object lies in a plane, Z is 0; therefore, the product of r3 and Z is 0, which means that after rearranging the homography formula, r3 and Z can be eliminated, finally yielding the second formula below.
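  • Reconstructed from the definitions above (the grouping of the normalization coefficient s is an assumption), the referenced formulas read:

$$s\begin{bmatrix}u\\ v\\ 1\end{bmatrix}=M\,\begin{bmatrix}r_1 & r_2 & r_3 & t\end{bmatrix}\begin{bmatrix}X\\ Y\\ Z\\ 1\end{bmatrix}$$

and, after setting Z = 0 and eliminating r3 and Z,

$$s\begin{bmatrix}u\\ v\\ 1\end{bmatrix}=M\,\begin{bmatrix}r_1 & r_2 & t\end{bmatrix}\begin{bmatrix}X\\ Y\\ 1\end{bmatrix}=H\begin{bmatrix}X\\ Y\\ 1\end{bmatrix},\qquad H=M\,\begin{bmatrix}r_1 & r_2 & t\end{bmatrix}.$$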
  • the image processing method in the embodiment of the present invention may include:
  • Step a1 Obtain a head-up image containing a target object through a camera.
  • Each pixel in the head-up image is called a first pixel, and each first pixel can be the above (u, v).
  • Step a2 Determine the space plane corresponding to the target object.
  • the spatial plane refers to the position plane of the target object, that is, the road surface or ground on which it is located in the world coordinate system.
  • Step a3 Determine the relative posture of the space plane and the camera.
  • The relative posture can be the external parameter (that is, the positional relationship) of the camera relative to the space plane, such as the pitch angle of the camera relative to the space plane, the roll angle of the camera relative to the space plane, the yaw angle of the camera relative to the space plane, the height of the camera relative to the space plane, and the translation of the camera relative to the space plane.
  • Step a4 Determine the target rotation matrix according to the relative posture.
  • a pitch angle of the camera relative to the space plane, a roll angle of the camera relative to the space plane, and a yaw angle of the camera relative to the space plane can be determined.
  • Specifically, the first rotation matrix R_x can be determined from the rotation angle of the camera on the pitch axis; the second rotation matrix R_y can be determined from the rotation angle of the camera on the roll axis; and the third rotation matrix R_z can be determined from the rotation angle (yaw) of the camera on the yaw axis, each according to the formulas below.
  • After the first rotation matrix, the second rotation matrix, and the third rotation matrix are obtained, the target rotation matrix R may be determined from them according to the formula below.
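  • Assuming the standard axis-rotation forms, with pitch angle α, roll angle β, yaw angle γ, and the composition order R = R_z R_y R_x (the order is an assumption; the text only states that R is determined from the three matrices), the referenced formulas read:

$$R_x=\begin{bmatrix}1&0&0\\ 0&\cos\alpha&-\sin\alpha\\ 0&\sin\alpha&\cos\alpha\end{bmatrix},\quad R_y=\begin{bmatrix}\cos\beta&0&\sin\beta\\ 0&1&0\\ -\sin\beta&0&\cos\beta\end{bmatrix},\quad R_z=\begin{bmatrix}\cos\gamma&-\sin\gamma&0\\ \sin\gamma&\cos\gamma&0\\ 0&0&1\end{bmatrix}$$

$$R=R_z\,R_y\,R_x$$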
  • Step a5 Obtain the target rotation parameter according to the target rotation matrix.
  • the first column vector in the target rotation matrix R can be determined as the first rotation parameter
  • The second column vector in the target rotation matrix R can be determined as the second rotation parameter.
  • The first rotation parameter and the second rotation parameter are determined as the target rotation parameter.
  • The first rotation parameter is r1 in the above formula; r1 is a 3×1 column vector.
  • The second rotation parameter is r2 in the above formula; r2 is a 3×1 column vector.
  • Step a6 Obtain a projection matrix according to the target rotation parameters r1 and r2, the normalization coefficient, the camera's internal parameter matrix, and the translation parameter t.
  • the projection matrix may be H in the above formula.
  • the normalization coefficient can be s in the above formula
  • In this way, the projection matrix H can be determined.
  • The camera's internal parameter matrix M is shown below. In the internal parameter matrix M, f_x and f_y represent the focal length of the camera, and c_x and c_y represent the position where the optical axis of the camera lens passes through the imaging sensor; f_x, f_y, c_x, and c_y are known values, and there is no restriction on this.
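  • Assuming the standard pinhole intrinsic matrix with zero skew, the referenced matrix reads:

$$M=\begin{bmatrix}f_x&0&c_x\\ 0&f_y&c_y\\ 0&0&1\end{bmatrix}$$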
  • the head-up image can be converted into a bird's-eye view image according to the projection matrix.
  • For each first pixel (u, v) in the head-up image, the position information of the first pixel can be converted into the position information of the second pixel (X, Y) in the bird's-eye view image according to the projection matrix H, and the bird's-eye view image is obtained according to the position information of each second pixel (X, Y); that is, the second pixels constitute the bird's-eye view image.
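  • In practice, the per-pixel loop sketched earlier is often replaced by a single warp call. The snippet below is an illustrative sketch using OpenCV's cv2.warpPerspective, assuming head_up and h are defined as in the earlier sketches; the pixel-to-plane scale matrix s_px2units and the output size are illustrative assumptions, not values from this application.

```python
import cv2
import numpy as np

# Map top-down output pixels to plane coordinates; the scale and offsets
# here are illustrative assumptions.
s_px2units = np.array([[0.02, 0.0, -5.0],
                       [0.0, -0.02, 20.0],
                       [0.0, 0.0, 1.0]])
# H maps plane coordinates to head-up pixels, so H @ s_px2units maps
# top-down output pixels straight to head-up pixels; WARP_INVERSE_MAP
# tells OpenCV to apply that matrix directly when sampling.
top_down = cv2.warpPerspective(head_up, h @ s_px2units, (500, 1000),
                               flags=cv2.INTER_LINEAR | cv2.WARP_INVERSE_MAP)
```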
  • An embodiment of the present invention also provides a driving assistance device 50, which includes at least one photographing device 51, a processor 52, and a memory 53; the driving assistance device 50 is provided on the vehicle and communicates with the vehicle; the memory 53 is used to store computer instructions executable by the processor 52.
  • the shooting device 51 is configured to collect a head-up image including a target object, and send the head-up image including the target object to the processor 52;
  • The processor 52 is configured to read computer instructions from the memory 53 to implement: determining a space plane corresponding to the target object; determining the relative posture of the space plane and the photographing device; and converting the head-up image into a top-down image according to the relative posture.
  • the imaging device 51 is configured to acquire the head-up image in at least one direction of the front, rear, left, or right of the driving assistance device.
  • When the processor 52 determines the space plane corresponding to the target object, it is specifically configured to: acquire second posture information of the driving assistance device; and determine the spatial plane according to the second posture information.
  • When the processor 52 converts the head-up image into a top-down image according to the relative posture, it is specifically configured to: obtain a projection matrix corresponding to the head-up image according to the relative posture; and convert the head-up image into a top-down image according to the projection matrix.
  • When the processor 52 obtains the projection matrix corresponding to the head-up image according to the relative posture, it is specifically configured to: determine the target rotation matrix according to the relative posture; obtain a target rotation parameter according to the target rotation matrix; and obtain the projection matrix according to the relative posture and the target rotation parameter.
  • The relative posture includes the rotation angle of the shooting device on the pitch axis, the rotation angle on the roll axis, and the rotation angle on the yaw axis. When the processor 52 determines the target rotation matrix according to the relative posture, it is specifically configured to: determine the first rotation matrix according to the rotation angle of the shooting device on the pitch axis; determine the second rotation matrix according to the rotation angle of the shooting device on the roll axis; determine the third rotation matrix according to the rotation angle of the shooting device on the yaw axis; and determine the target rotation matrix according to the first rotation matrix, the second rotation matrix, and the third rotation matrix.
  • When the processor 52 obtains the target rotation parameter according to the target rotation matrix, it is specifically configured to: determine the first column vector in the target rotation matrix as the first rotation parameter; determine the second column vector in the target rotation matrix as the second rotation parameter; and determine the first rotation parameter and the second rotation parameter as the target rotation parameters.
  • The relative posture also includes the translation parameter of the shooting device relative to the spatial plane. When the processor 52 acquires the projection matrix according to the relative posture and the target rotation parameter, it is specifically configured to: obtain the projection matrix according to the target rotation parameter, the normalization coefficient, the internal parameter matrix of the photographing device, and the translation parameter of the photographing device relative to the spatial plane.
  • When the processor 52 converts the head-up image into a top-down image according to the projection matrix, it is specifically configured to: for each first pixel in the head-up image, convert the position information of the first pixel into the position information of the second pixel in the overhead image according to the projection matrix; and obtain the top view image according to the position information of each second pixel.
  • When the processor 52 converts the position information of the first pixel into the position information of the second pixel in the bird's-eye view image according to the projection matrix, it is specifically configured to: obtain the inverse matrix corresponding to the projection matrix, and convert the position information of the first pixel into the position information of the second pixel in the bird's-eye view image according to the inverse matrix.
  • an embodiment of the present invention also provides a vehicle equipped with a driving assistance system.
  • the vehicle includes at least one camera, a processor, and a memory.
  • The memory is used to store computer instructions executable by the processor; the shooting device is used to collect a head-up image containing a target object, and send the head-up image containing the target object to the processor.
  • The processor is configured to read computer instructions from the memory to implement: determining a space plane corresponding to the target object; determining the relative posture of the space plane and the camera; and converting the head-up image into a top-down image according to the relative posture.
  • the photographing device is configured to acquire the head-up image in at least one direction of the front, rear, left, or right of the vehicle.
  • When the processor determines the space plane corresponding to the target object, it is specifically configured to: obtain first posture information of the vehicle; and determine the space plane according to the first posture information.
  • When the processor converts the head-up image into a bird's-eye view image according to the relative posture, it is specifically configured to: obtain a projection matrix corresponding to the head-up image according to the relative posture; and convert the head-up image into a top-down image according to the projection matrix.
  • When the processor obtains the projection matrix corresponding to the head-up image according to the relative posture, it is specifically configured to: determine a target rotation matrix according to the relative posture; obtain a target rotation parameter according to the target rotation matrix; and obtain the projection matrix according to the relative posture and the target rotation parameter.
  • The relative posture includes the rotation angle of the shooting device on the pitch axis, the rotation angle on the roll axis, and the rotation angle on the yaw axis. When the processor determines the target rotation matrix according to the relative posture, it is specifically configured to: determine the first rotation matrix according to the rotation angle of the shooting device on the pitch axis; determine the second rotation matrix according to the rotation angle of the shooting device on the roll axis; determine the third rotation matrix according to the rotation angle of the shooting device on the yaw axis; and determine the target rotation matrix according to the first rotation matrix, the second rotation matrix, and the third rotation matrix.
  • When the processor obtains the target rotation parameter according to the target rotation matrix, it is specifically configured to: determine the first column vector in the target rotation matrix as the first rotation parameter; determine the second column vector in the target rotation matrix as the second rotation parameter; and determine the first rotation parameter and the second rotation parameter as the target rotation parameters.
  • The relative posture also includes the translation parameter of the shooting device relative to the spatial plane. When the processor acquires the projection matrix according to the relative posture and the target rotation parameter, it is specifically configured to: obtain the projection matrix according to the target rotation parameter, the normalization coefficient, the internal parameter matrix of the shooting device, and the translation parameter of the shooting device relative to the space plane.
  • When the processor converts the head-up image into a top-down image according to the projection matrix, it is specifically configured to: for each first pixel in the head-up image, convert the position information of the first pixel into the position information of the second pixel in the overhead image according to the projection matrix; and obtain the top view image according to the position information of each second pixel.
  • When the processor converts the position information of the first pixel into the position information of the second pixel in the bird's-eye view image according to the projection matrix, it is specifically configured to: obtain the inverse matrix corresponding to the projection matrix, and convert the position information of the first pixel into the position information of the second pixel in the bird's-eye view image according to the inverse matrix.
  • An embodiment of the present invention further provides a computer-readable storage medium, on which computer instructions are stored, and when the computer instructions are executed, the above image processing method is implemented.
  • the system, device, module or unit explained in the above embodiments may be implemented by a computer chip or entity, or by a product with a certain function.
  • A typical implementation device is a computer, and the specific form of the computer may be a personal computer, a laptop computer, a cellular phone, a camera phone, a smart phone, a personal digital assistant, a media player, a navigation device, an email transceiver, a game console, a tablet computer, a wearable device, or any combination of these devices.
  • Embodiments of the present invention may be provided as methods, systems, or computer program products. Therefore, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware. Moreover, embodiments of the present invention may take the form of computer program products implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) containing computer-usable program code.
  • each flow and/or block in the flowchart and/or block diagram and a combination of the flow and/or block in the flowchart and/or block diagram may be implemented by computer program instructions.
  • These computer program instructions can be provided to the processor of a general-purpose computer, special-purpose computer, embedded processor, or other programmable data processing device to produce a machine, so that the instructions executed by the processor of the computer or other programmable data processing device produce an apparatus for implementing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
  • These computer program instructions can also be stored in a computer-readable memory that can direct a computer or other programmable data processing device to work in a specific manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction device, where the instruction device implements the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
  • These computer program instructions can also be loaded onto a computer or other programmable data processing device, so that a series of operating steps are performed on the computer or other programmable device to produce computer-implemented processing; the instructions executed on the computer or other programmable device thus provide steps for implementing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to an image processing method, an apparatus, and a computer-readable storage medium. The method comprises: acquiring, by means of a photographing device, a head-up image containing a target object; determining a space plane corresponding to the target object; determining the relative posture of the space plane and the photographing device; and converting the head-up image into a top-down image according to the relative posture. Applying the embodiments of the present invention can improve the accuracy of lane line detection and accurately establish the actual positional relationship between the lane line and the vehicle.
PCT/CN2018/124726 2018-12-28 2018-12-28 Image processing method, apparatus and computer-readable storage medium WO2020133172A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201880068957.6A CN111279354A (zh) 2018-12-28 2018-12-28 Image processing method, device and computer-readable storage medium
PCT/CN2018/124726 WO2020133172A1 (fr) 2018-12-28 2018-12-28 Image processing method, apparatus and computer-readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2018/124726 WO2020133172A1 (fr) 2018-12-28 2018-12-28 Image processing method, apparatus and computer-readable storage medium

Publications (1)

Publication Number Publication Date
WO2020133172A1 true WO2020133172A1 (fr) 2020-07-02

Family

ID=70999738

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/124726 WO2020133172A1 (fr) 2018-12-28 2018-12-28 Image processing method, apparatus and computer-readable storage medium

Country Status (2)

Country Link
CN (1) CN111279354A (fr)
WO (1) WO2020133172A1 (fr)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113298868A (zh) * 2021-03-17 2021-08-24 阿里巴巴新加坡控股有限公司 Model building method and apparatus, electronic device, medium and program product
CN113450597A (zh) * 2021-06-09 2021-09-28 浙江兆晟科技股份有限公司 Deep learning-based ship assisted navigation method and system
CN113592940A (zh) * 2021-07-28 2021-11-02 北京地平线信息技术有限公司 Method and apparatus for determining the position of a target object based on an image
CN114531580A (zh) * 2020-11-23 2022-05-24 北京四维图新科技股份有限公司 Image processing method and apparatus
CN115063490A (zh) * 2022-06-30 2022-09-16 阿波罗智能技术(北京)有限公司 Vehicle camera extrinsic parameter calibration method and apparatus, electronic device and storage medium

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111959397B (zh) * 2020-08-24 2023-03-31 北京茵沃汽车科技有限公司 Method, system, apparatus and medium for displaying an under-vehicle image in a panoramic image
CN112489113B (zh) * 2020-11-25 2024-06-11 深圳地平线机器人科技有限公司 Camera extrinsic parameter calibration method and apparatus, and camera extrinsic parameter calibration system
CN112990099B (zh) * 2021-04-14 2021-11-30 北京三快在线科技有限公司 Lane line detection method and apparatus
CN116993637B (zh) * 2023-07-14 2024-03-12 禾多科技(北京)有限公司 Image data processing method, apparatus, device and medium for lane line detection

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101727756A (zh) * 2008-10-16 2010-06-09 财团法人工业技术研究院 Vehicle moving image auxiliary guidance method and system
US20170003134A1 (en) * 2015-06-30 2017-01-05 Lg Electronics Inc. Advanced Driver Assistance Apparatus, Display Apparatus For Vehicle And Vehicle

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104036275B (zh) * 2014-05-22 2017-11-28 东软集团股份有限公司 Method and apparatus for detecting target objects in a vehicle blind zone
CN105447850B (zh) * 2015-11-12 2018-02-09 浙江大学 Panorama stitching synthesis method based on multi-viewpoint images

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101727756A (zh) * 2008-10-16 2010-06-09 财团法人工业技术研究院 Vehicle moving image auxiliary guidance method and system
US20170003134A1 (en) * 2015-06-30 2017-01-05 Lg Electronics Inc. Advanced Driver Assistance Apparatus, Display Apparatus For Vehicle And Vehicle

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114531580A (zh) * 2020-11-23 2022-05-24 北京四维图新科技股份有限公司 Image processing method and apparatus
CN114531580B (zh) * 2020-11-23 2023-11-21 北京四维图新科技股份有限公司 Image processing method and apparatus
CN113298868A (zh) * 2021-03-17 2021-08-24 阿里巴巴新加坡控股有限公司 Model building method and apparatus, electronic device, medium and program product
CN113298868B (zh) * 2021-03-17 2024-04-05 阿里巴巴创新公司 Model building method and apparatus, electronic device, medium and program product
CN113450597A (zh) * 2021-06-09 2021-09-28 浙江兆晟科技股份有限公司 Deep learning-based ship assisted navigation method and system
CN113450597B (zh) * 2021-06-09 2022-11-29 浙江兆晟科技股份有限公司 Deep learning-based ship assisted navigation method and system
CN113592940A (zh) * 2021-07-28 2021-11-02 北京地平线信息技术有限公司 Method and apparatus for determining the position of a target object based on an image
CN115063490A (zh) * 2022-06-30 2022-09-16 阿波罗智能技术(北京)有限公司 Vehicle camera extrinsic parameter calibration method and apparatus, electronic device and storage medium

Also Published As

Publication number Publication date
CN111279354A (zh) 2020-06-12

Similar Documents

Publication Publication Date Title
WO2020133172A1 (fr) Image processing method, apparatus and computer-readable storage medium
US20230360260A1 (en) Method and device to determine the camera position and angle
US10268201B2 (en) Vehicle automated parking system and method
CN108805934B (zh) 一种车载摄像机的外部参数标定方法及装置
JP6830140B2 (ja) 運動ベクトル場の決定方法、運動ベクトル場の決定装置、機器、コンピュータ読み取り可能な記憶媒体及び車両
CN111263960B (zh) 用于更新高清晰度地图的设备和方法
CN106814753B (zh) 一种目标位置矫正方法、装置及系统
CN110411457B (zh) 基于行程感知与视觉融合的定位方法、系统、终端和存储介质
US11062475B2 (en) Location estimating apparatus and method, learning apparatus and method, and computer program products
CN112444242A (zh) 一种位姿优化方法及装置
JP4619962B2 (ja) 路面標示計測システム、白線モデル計測システムおよび白線モデル計測装置
WO2019104571A1 (fr) Image processing method and device
KR101880185B1 (ko) 이동체 포즈 추정을 위한 전자 장치 및 그의 이동체 포즈 추정 방법
JP2020064056A (ja) 位置推定装置及び方法
KR102006291B1 (ko) 전자 장치의 이동체 포즈 추정 방법
CN110458885B (zh) 基于行程感知与视觉融合的定位系统和移动终端
CN108603933A (zh) 用于融合具有不同分辨率的传感器输出的系统和方法
US11842440B2 (en) Landmark location reconstruction in autonomous machine applications
WO2021258251A1 (fr) Mobile platform monitoring and mapping method, mobile platform and storage medium
Xian et al. Fusing stereo camera and low-cost inertial measurement unit for autonomous navigation in a tightly-coupled approach
WO2020019175A1 (fr) Image processing method and device, photographing device and unmanned aerial vehicle
CN114777768A (zh) 一种卫星拒止环境高精度定位方法、系统及电子设备
WO2021143664A1 (fr) Method and apparatus for measuring distance of a target object in a vehicle, and vehicle
JP2017211307A (ja) 測定装置、測定方法およびプログラム
CN109658507A (zh) 信息处理方法及装置、电子设备

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18945177

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18945177

Country of ref document: EP

Kind code of ref document: A1