WO2020133172A1 - Image processing method, apparatus, and computer readable storage medium - Google Patents


Info

Publication number
WO2020133172A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
matrix
head
rotation
target
Prior art date
Application number
PCT/CN2018/124726
Other languages
French (fr)
Chinese (zh)
Inventor
Cui Jian (崔健)
Original Assignee
SZ DJI Technology Co., Ltd. (深圳市大疆创新科技有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by SZ DJI Technology Co., Ltd. (深圳市大疆创新科技有限公司)
Priority to CN201880068957.6A (patent CN111279354B)
Priority to PCT/CN2018/124726
Publication of WO2020133172A1

Classifications

    • G PHYSICS › G06 COMPUTING; CALCULATING OR COUNTING › G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING › G06V 20/00 Scenes; scene-specific elements › G06V 20/50 Context or environment of the image › G06V 20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle › G06V 20/588 Recognition of the road, e.g. of lane markings; recognition of the vehicle driving pattern in relation to the road
    • G PHYSICS › G06 COMPUTING; CALCULATING OR COUNTING › G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL › G06T 3/00 Geometric image transformations in the plane of the image › G06T 3/18 Image warping, e.g. rearranging pixels individually
    • G PHYSICS › G06 COMPUTING; CALCULATING OR COUNTING › G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING › G06V 20/00 Scenes; scene-specific elements › G06V 20/50 Context or environment of the image › G06V 20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle › G06V 20/58 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; recognition of traffic objects › G06V 20/582 Recognition of traffic signs

Definitions

  • Embodiments of the present invention relate to the field of image processing technology, and in particular, to an image processing method, device, and computer-readable storage medium.
  • lane line algorithms play an important role, and their accuracy directly affects the performance and reliability of the system.
  • the lane line algorithm is an important prerequisite for automatic driving control of the car.
  • the lane line algorithm is divided into two levels: one is the detection of the lane line, and the other is the positioning of the lane line, that is, calculating the actual positional relationship between the lane line and the car.
  • a traditional lane line detection algorithm can collect a head-up image through the shooting device and use the head-up image to detect the lane line.
  • a traditional lane line positioning algorithm can likewise collect a head-up image through the shooting device and use the head-up image to locate the lane line.
  • the detection result is inaccurate: the size and shape of the lane line in the head-up image are subject to perspective projection, which makes near objects appear large and far objects small, so some distant road surface markers are distorted in shape and cannot be detected correctly.
  • the positioning result is also inaccurate: the shape and size of the road surface markers in the head-up image are coupled with the internal parameters of the camera and the positional relationship between the camera and the road surface, so the actual position of the lane line cannot be read directly from its position in the head-up image.
  • the invention provides an image processing method, device and computer-readable storage medium, which can improve the accuracy of detection of lane lines and accurately locate the actual positional relationship between lane lines and vehicles.
  • a driving assistance device is provided, including at least one photographing device, a processor, and a memory; the driving assistance device is provided on a vehicle and communicates with the vehicle; the memory is used for storing computer instructions executable by the processor;
  • the photographing device is configured to collect a head-up image including a target object, and send the head-up image including the target object to the processor;
  • the processor is configured to read computer instructions from the memory to implement: determining a space plane corresponding to the target object; determining a relative posture of the space plane and the photographing device; and converting the head-up image into a top-down image according to the relative posture.
  • a vehicle equipped with a driving assistance system includes at least one camera, a processor, and a memory.
  • the memory is used to store computer instructions executable by the processor.
  • the shooting device is used to collect a head-up image containing a target object, and send the head-up image containing a target object to the processor;
  • the processor is configured to read computer instructions from the memory to implement: determining a space plane corresponding to the target object; determining a relative posture of the space plane and the shooting device; and converting the head-up image into a top-down image according to the relative posture.
  • an image processing method is provided, which is applied to a driving assistance system.
  • the driving assistance system includes at least one photographing device.
  • the method includes: obtaining a head-up image containing a target object through the photographing device; determining a space plane corresponding to the target object; determining a relative posture of the space plane and the photographing device; and converting the head-up image into a top-down image according to the relative posture.
  • a computer-readable storage medium is provided.
  • Computer instructions are stored on the computer-readable storage medium. When the computer instructions are executed, the above method is implemented.
  • the accuracy of the lane line detection can be improved, and the actual positional relationship between the lane line and the vehicle can be accurately located.
  • the head-up image can be converted into a bird's-eye view image, and the bird's-eye view image can be used to detect the lane line, thereby improving the accuracy of the lane line detection result.
  • the head-up image can be converted into a bird's-eye view image, and the bird's-eye view image is used to locate the lane line, thereby improving the accuracy of the lane line positioning result and accurately knowing the actual position of the lane line.
  • FIG. 1 is a schematic diagram of an example of an image processing method in an embodiment
  • FIG. 2 is a schematic diagram of an example of an image processing method in another embodiment
  • FIG. 3 is a schematic diagram of an example of an image processing method in another embodiment
  • FIG. 4A is a schematic diagram of a head-up image and a top-down image of an image processing method in an embodiment
  • FIG. 4B is a schematic diagram of the relationship between the target object, the space plane and the camera in an embodiment
  • FIG. 5 is a block diagram of an example of a driving assistance device in an embodiment.
  • although the terms first, second, third, etc. may be used to describe various information in the present invention, the information should not be limited by these terms; these terms are only used to distinguish information of the same type from each other.
  • for example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information.
  • the word "if" can be interpreted as "when", "while", or "in response to a determination".
  • An embodiment of the present invention proposes an image processing method, which can be applied to a driving assistance system, and the driving assistance system may include at least one photographing device.
  • the driving assistance system may be mounted on a mobile platform (such as an unmanned vehicle or an ordinary vehicle), or the driving assistance system may be mounted on driving assistance equipment (such as ADAS equipment), and the driving assistance equipment is installed on a mobile platform (such as an unmanned vehicle or an ordinary vehicle).
  • the above is only an example of two application scenarios, and the driving assistance system can also be carried on other vehicles, which is not limited.
  • the method may include:
  • Step 101 Obtain a head-up image containing a target object through a camera.
  • in one case, the at least one shooting device is installed on the mobile platform, and a head-up image in at least one of the front, rear, left or right directions of the mobile platform can be acquired through the shooting device; the head-up image contains the target object.
  • in another case, the at least one imaging device is provided in the driving assistance device, and a head-up image in at least one of the front, rear, left or right directions of the driving assistance device can be acquired through the imaging device; the head-up image contains the target object.
  • Step 102 Determine a space plane corresponding to the target object.
  • the first posture information of the mobile platform (that is, the current posture information of the mobile platform) may be acquired, and the space plane may be determined according to the first posture information.
  • the space plane refers to the position plane of the target object (such as road surface or ground) in the world coordinate system, that is, the position of the space plane in the world coordinate system.
  • the second posture information of the driving assistance device (that is, the current posture information of the driving assistance device) may be acquired, and the space plane may be determined according to the second posture information.
  • the space plane refers to the position plane of the target object (such as road surface or ground) in the world coordinate system, that is, the position of the space plane in the world coordinate system.
  • Step 103 Determine the relative posture of the space plane and the shooting device.
  • the relative posture refers to the posture of the shooting device relative to the space plane (such as the road surface or the ground), and can also be understood as the external parameter (that is, the positional relationship) of the shooting device relative to the space plane.
  • the relative posture may include, but is not limited to: the pitch angle of the camera relative to the space plane, the roll angle of the camera relative to the space plane, the yaw angle of the camera relative to the space plane, the height of the camera relative to the space plane, and the translation parameter of the camera relative to the space plane.
  • Step 104 Convert the head-up image to the top-down image according to the relative posture.
  • the projection matrix corresponding to the head-up image can be obtained according to the relative posture: for example, a target rotation matrix can be determined according to the relative posture, a target rotation parameter can be obtained according to the target rotation matrix, and the projection matrix corresponding to the head-up image can be obtained according to the relative posture and the target rotation parameter; then, the head-up image can be converted into a bird's-eye view image according to the projection matrix.
  • the relative attitude includes the rotation angle of the camera on the pitch axis (that is, the pitch angle of the camera relative to the space plane), the rotation angle on the roll axis (that is, the roll angle of the camera relative to the space plane), and the rotation angle on the yaw axis (that is, the yaw angle of the camera relative to the space plane); based on this, determining the target rotation matrix according to the relative attitude may include, but is not limited to: determining the first rotation matrix according to the rotation angle of the camera on the pitch axis; determining the second rotation matrix according to the rotation angle of the camera on the roll axis; determining the third rotation matrix according to the rotation angle of the camera on the yaw axis; and determining the target rotation matrix according to the first rotation matrix, the second rotation matrix, and the third rotation matrix.
  • the target rotation matrix may include three column vectors, and obtaining the target rotation parameters according to the target rotation matrix may include, but is not limited to: determining the first column vector in the target rotation matrix as the first rotation parameter, determining the second column vector in the target rotation matrix as the second rotation parameter, and determining the first rotation parameter and the second rotation parameter as the target rotation parameter.
  • the relative posture also includes a translation parameter of the shooting device relative to the space plane, and obtaining the projection matrix according to the relative posture and the target rotation parameter may include, but is not limited to: obtaining the projection matrix according to the target rotation parameter, the normalization coefficient, the internal parameter matrix of the shooting device, and the translation parameter of the shooting device relative to the space plane.
  • converting the head-up image into a bird's-eye view image according to the projection matrix may include, but is not limited to: for each first pixel in the head-up image, converting the position information of the first pixel into position information of a second pixel in the overhead image according to the projection matrix; based on this, the overhead image can be obtained according to the position information of each second pixel.
  • converting the position information of the first pixel into the position information of the second pixel in the bird's-eye view image according to the projection matrix may include, but is not limited to: obtaining the inverse matrix corresponding to the projection matrix, and converting the position information of the first pixel into the position information of the second pixel in the bird's-eye view image according to the inverse matrix; that is, each first pixel corresponds to one second pixel.
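  As a concrete illustration of the inverse-matrix mapping just described, the sketch below (Python with NumPy) converts a head-up pixel (u, v) into a bird's-eye pixel (X, Y). The projection matrix values are hypothetical placeholders, not taken from the patent; only the structure is carried over from the text: an invertible 3x3 homography applied with homogeneous normalization.

```python
import numpy as np

def map_pixel(h_inv: np.ndarray, u: float, v: float) -> tuple:
    """Map a first pixel (u, v) in the head-up image to the position of the
    corresponding second pixel (X, Y) in the bird's-eye view image, using
    the inverse h_inv of the projection matrix and homogeneous
    normalization (division by the third coordinate)."""
    p = h_inv @ np.array([u, v, 1.0])
    return p[0] / p[2], p[1] / p[2]

# Hypothetical 3x3 projection matrix (illustrative values only).
H = np.array([[1.2, 0.1, 30.0],
              [0.0, 1.5, 10.0],
              [0.0, 0.002, 1.0]])
H_inv = np.linalg.inv(H)

X, Y = map_pixel(H_inv, 320.0, 240.0)
```

  Mapping (X, Y) back through H recovers (320, 240), which is the one-to-one correspondence between first and second pixels that the text describes.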
  • the lane line can be detected based on the bird's-eye image.
  • the lane line can be positioned according to the bird's-eye view image.
  • the lane line detection can be performed based on the top view image (not the lane line detection based on the head-up image) to improve the accuracy of the lane line detection.
  • the lane line positioning is performed based on the top view image (not the lane line positioning based on the head-up image) to improve the accuracy of the lane line positioning.
  • the accuracy of the lane line detection can be improved, and the actual positional relationship between the lane line and the vehicle can be accurately located.
  • the head-up image can be converted into a top-down image, and the top-down image can be used to detect the lane line, thereby improving the accuracy of the lane-line detection result.
  • the head-up image can be converted into a bird's-eye view image, and the bird's-eye view image is used to locate the lane line, thereby improving the accuracy of the lane line positioning result and accurately knowing the actual position of the lane line.
  • An embodiment of the present invention proposes an image processing method, which can be applied to a driving assistance system, and the driving assistance system may include at least one photographing device.
  • the driving assistance system can be mounted on a mobile platform (such as an unmanned vehicle or an ordinary vehicle).
  • the driving assistance system can also be mounted on other vehicles, which is not limited.
  • the method may include:
  • Step 201 Obtain a head-up image containing a target object through a camera.
  • the head-up image in at least one direction of the front, back, left, or right direction of the mobile platform may be acquired by the shooting device, and the head-up image includes a target object.
  • Step 202 Determine the space plane corresponding to the target object according to the first pose information of the mobile platform.
  • the first posture information of the mobile platform may be obtained, and the space plane may be determined according to the first posture information.
  • the space plane refers to the position plane of the target object (such as road surface or ground) in the world coordinate system, that is, the position of the space plane in the world coordinate system.
  • regarding the process of acquiring the first posture information of the mobile platform: the mobile platform may include a posture sensor, the posture sensor collects the first posture information of the mobile platform and provides the first posture information to the driving assistance system, so that the driving assistance system acquires the first posture information of the mobile platform.
  • the first posture information of the mobile platform can also be obtained in other ways, which is not limited.
  • the attitude sensor is a high-performance three-dimensional motion attitude measurement system, which can include a three-axis gyroscope, a three-axis accelerometer (i.e. an IMU), a three-axis electronic compass and other auxiliary motion sensors; an embedded processor outputs calibrated sensor data such as angular velocity, acceleration and magnetic data, from which the posture information can be measured. There is no restriction on the manner of acquiring posture information.
  • regarding the process of determining the space plane corresponding to the target object according to the first posture information: after obtaining the first posture information of the mobile platform, the space plane can be determined according to the first posture information; the details are not repeated here.
  • Step 203 Determine the relative posture of the space plane and the shooting device.
  • the relative posture refers to the relative posture of the camera relative to the space plane, and can also be understood as the external parameter (ie, positional relationship) of the camera relative to the space plane.
  • the relative attitude may include but is not limited to: the pitch angle of the camera relative to the space plane, the roll angle of the camera relative to the space plane, the yaw angle of the camera relative to the space plane, the height of the camera relative to the space plane, and the translation of the camera relative to the space plane.
  • Step 204 Acquire a projection matrix corresponding to the head-up image according to the relative posture.
  • a target rotation matrix may be determined according to the relative pose
  • a target rotation parameter may be obtained according to the target rotation matrix
  • a projection matrix corresponding to the head-up image may be obtained according to the relative pose and the target rotation parameter.
  • Step 205 Convert the head-up image to the top-down image according to the projection matrix.
  • for each first pixel in the head-up image, the position information of the first pixel is converted into the position information of the second pixel in the overhead image according to the projection matrix; based on this, the top view image can be obtained according to the position information of each second pixel.
  • converting the position information of the first pixel into the position information of the second pixel in the bird's-eye view image according to the projection matrix may include, but is not limited to: obtaining the inverse matrix corresponding to the projection matrix, and converting the position information of the first pixel into the position information of the second pixel in the bird's-eye view image according to the inverse matrix; that is, each first pixel corresponds to one second pixel.
  • An embodiment of the present invention proposes an image processing method, which can be applied to a driving assistance system, and the driving assistance system may include at least one photographing device.
  • the driving assistance system can also be mounted on driving assistance equipment (such as ADAS equipment), and the driving assistance equipment is installed on a mobile platform (such as an unmanned vehicle or an ordinary vehicle); of course, the above is only an example application scenario of the present invention.
  • the driving assistance system can also be mounted on other vehicles, and there is no restriction on this.
  • the method may include:
  • Step 301 Obtain a head-up image containing a target object through a camera.
  • the head-up image in at least one direction of the front, rear, left, or right direction of the driving assistance device may be acquired by the shooting device, and the head-up image includes a target object.
  • Step 302 Determine a space plane corresponding to the target object according to the second posture information of the driving assistance device.
  • the space plane refers to the target object, that is, the position plane of the road surface or the ground in the world coordinate system.
  • the second posture information of the driving assistance device may be acquired, and the space plane may be determined according to the second posture information.
  • the driving assistance device may include a posture sensor, and this posture sensor is used to collect the second posture information of the driving assistance device and provide the second posture information to the driving assistance system, so that the driving assistance system acquires the second posture of the driving assistance device information.
  • the mobile platform may include an attitude sensor, the attitude sensor collects the first attitude information of the mobile platform, and provides the first attitude information to the driving assistance system.
  • the driving assistance system may use the first attitude information of the mobile platform as the second posture information of the driving assistance device.
  • the second posture information can also be obtained in other ways, which is not limited.
  • Step 303 Determine the relative posture of the space plane and the shooting device.
  • the relative posture refers to the relative posture of the camera relative to the space plane, and can also be understood as the external parameter (ie, positional relationship) of the camera relative to the space plane.
  • the relative attitude may include but is not limited to: the pitch angle of the camera relative to the space plane, the roll angle of the camera relative to the space plane, the yaw angle of the camera relative to the space plane, the height of the camera relative to the space plane, and the translation of the camera relative to the space plane.
  • Step 304 Obtain a projection matrix corresponding to the head-up image according to the relative posture.
  • a target rotation matrix may be determined according to the relative pose
  • a target rotation parameter may be obtained according to the target rotation matrix
  • a projection matrix corresponding to the head-up image may be obtained according to the relative pose and the target rotation parameter.
  • Step 305 Convert the head-up image to the top-down image according to the projection matrix.
  • for each first pixel in the head-up image, the position information of the first pixel is converted into the position information of the second pixel in the overhead image according to the projection matrix; based on this, the top view image can be obtained according to the position information of each second pixel.
  • converting the position information of the first pixel into the position information of the second pixel in the bird's-eye view image according to the projection matrix may include, but is not limited to: obtaining the inverse matrix corresponding to the projection matrix, and converting the position information of the first pixel into the position information of the second pixel in the bird's-eye view image according to the inverse matrix; that is, each first pixel corresponds to one second pixel.
  • Embodiment 4 The following description takes the mobile platform being a vehicle and the shooting device being a camera as an example.
  • the traditional lane line algorithm can collect the head-up image through the camera, and use the head-up image to detect and locate the lane line.
  • the image on the left is a schematic diagram of the head-up image.
  • in the head-up image, the road surface arrow and the lane line are distorted, and their shape depends on the position of the vehicle; therefore, the lane line cannot be correctly detected and positioned based on the left head-up image in FIG. 4A.
  • the head-up image is converted into a bird's-eye image, and the bird's-eye image is used to detect and locate the lane line.
  • the image on the right is a schematic diagram of a top-down image.
  • the arrows of the road surface markers and the lane lines are restored to true scales.
  • the positions of points on the road surface directly correspond to their real positions, and the positional relationship between a given point and the vehicle can be obtained directly, which can meet the requirements of the ADAS functions and automatic driving functions; obviously, the detection and positioning of the lane line can be correctly performed based on the top-down image on the right side of FIG. 4A.
  • the accuracy of road surface marker recognition can be improved, and a method for locating road surface markers (including lane lines) can be provided to assist in positioning.
  • in order to convert the head-up image into a top-down image, the conversion can be implemented based on the geometric knowledge of computer vision, that is, the head-up image is converted into a top-down image based on a homography.
  • the shape of the top-down image depends on the true shape of the space plane shown in the head-up image, the internal parameters of the camera, and the external parameters of the camera (that is, the positional relationship of the camera relative to the space plane); therefore, the pixels in the head-up image can be directly mapped to the top-down image according to the internal and external parameters of the camera, so that the result corresponds to the true scale of the space plane, which improves the accuracy of lane line recognition and provides an accurate way of locating the lane line.
  • FIG. 4B it is a schematic diagram of the relationship between the target object, the space plane and the camera.
  • the space plane is a plane including the target object, and the plane where the camera is located may be different from the space plane.
  • the target object may be a road (pavement or ground) containing lane lines as shown in the figure
  • the spatial plane may be the plane where the target object is the road surface.
  • the actual picture captured by the camera is shown in the lower right corner of FIG. 4B, which corresponds to the head-up image on the left side of FIG. 4A.
  • the homography can be expressed by the following formula, where (u, v) is a pixel in the head-up image, s is the normalization coefficient, M is the camera internal parameter matrix, and [r1 r2 r3 t] is the external parameter of the camera relative to the space plane, that is, the positional relationship: r1, r2 and r3 are 3*1 column vectors that together form a rotation matrix, and t is a 3*1 column vector representing the translation of the camera relative to the object plane; (X, Y) is the corresponding point on the space plane, i.e. the pixel in the top-down image:

  s * [u, v, 1]^T = M * [r1 r2 r3 t] * [X, Y, Z, 1]^T

  • a point on the space plane is in general (X, Y, Z), but since the target object lies in a plane, Z is 0, so the product of r3 and Z is 0; after simplifying the homography formula, r3 and Z can be eliminated, which gives:

  s * [u, v, 1]^T = M * [r1 r2 t] * [X, Y, 1]^T = H * [X, Y, 1]^T
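  A small numeric check of the simplification described above: when Z = 0 on the space plane, the full 3x4 extrinsic projection and the reduced 3x3 matrix H = M * [r1 r2 t] give the same result, so r3 and Z drop out. All numbers below are illustrative assumptions, not values from the patent.

```python
import numpy as np

# Hypothetical intrinsics M and extrinsics [r1 r2 r3 t] (illustrative only).
M = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
r1 = np.array([1.0, 0.0, 0.0])
r2 = np.array([0.0, 0.0, -1.0])
r3 = np.cross(r1, r2)                 # completes the rotation matrix columns
t = np.array([0.0, 1.5, 10.0])

P_full = M @ np.column_stack([r1, r2, r3, t])  # 3x4, acts on (X, Y, Z, 1)
H = M @ np.column_stack([r1, r2, t])           # 3x3, valid when Z = 0

X, Y = 2.0, 5.0
p_full = P_full @ np.array([X, Y, 0.0, 1.0])   # Z = 0: point on the plane
p_h = H @ np.array([X, Y, 1.0])
# p_full equals p_h, so r3 and Z can be eliminated from the formula
```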
  • the image processing method in the embodiment of the present invention may include:
  • Step a1 Obtain a head-up image containing a target object through a camera.
  • Each pixel in the head-up image is called a first pixel, and each first pixel can be the above (u, v).
  • Step a2 Determine the space plane corresponding to the target object.
  • the spatial plane refers to the position plane of the target object, that is, the road surface or ground on which it is located in the world coordinate system.
  • Step a3 Determine the relative posture of the space plane and the camera.
  • the relative posture can be the external parameter of the camera relative to the space plane (that is, the positional relationship), such as the pitch angle of the camera relative to the space plane, the roll angle of the camera relative to the space plane, and the yaw angle of the camera relative to the space plane.
  • Step a4 Determine the target rotation matrix according to the relative posture.
  • a pitch angle of the camera relative to the space plane, a roll angle of the camera relative to the space plane, and a yaw angle of the camera relative to the space plane can be determined.
  • the first rotation matrix R_x can be determined based on the rotation angle of the camera on the pitch axis; the second rotation matrix R_y can be determined based on the rotation angle of the camera on the roll axis; and the third rotation matrix R_z can be determined based on the rotation angle (yaw) of the camera on the yaw axis.
  • after obtaining the first rotation matrix, the second rotation matrix and the third rotation matrix, the target rotation matrix R can be determined from the first rotation matrix, the second rotation matrix and the third rotation matrix.
  • Step a5 Obtain the target rotation parameter according to the target rotation matrix.
  • the first column vector in the target rotation matrix R can be determined as the first rotation parameter, the second column vector in the target rotation matrix R can be determined as the second rotation parameter, and the first rotation parameter and the second rotation parameter are determined as the target rotation parameter.
  • the first rotation parameter is r1 in the above formula, a 3*1 column vector; the second rotation parameter is r2 in the above formula, also a 3*1 column vector.
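  Steps a4 and a5 might be sketched as follows. The individual axis rotations are the standard ones; the composition order R = R_z * R_y * R_x is an assumption chosen for illustration, since the patent's own formulas are published as images and are not reproduced in this text.

```python
import numpy as np

def rot_x(pitch: float) -> np.ndarray:
    """First rotation matrix R_x, from the rotation angle on the pitch axis."""
    c, s = np.cos(pitch), np.sin(pitch)
    return np.array([[1.0, 0.0, 0.0],
                     [0.0, c, -s],
                     [0.0, s, c]])

def rot_y(roll: float) -> np.ndarray:
    """Second rotation matrix R_y, from the rotation angle on the roll axis."""
    c, s = np.cos(roll), np.sin(roll)
    return np.array([[c, 0.0, s],
                     [0.0, 1.0, 0.0],
                     [-s, 0.0, c]])

def rot_z(yaw: float) -> np.ndarray:
    """Third rotation matrix R_z, from the rotation angle on the yaw axis."""
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[c, -s, 0.0],
                     [s, c, 0.0],
                     [0.0, 0.0, 1.0]])

def target_rotation(pitch: float, roll: float, yaw: float) -> np.ndarray:
    # Composition order is an assumption (see lead-in above).
    return rot_z(yaw) @ rot_y(roll) @ rot_x(pitch)

R = target_rotation(0.10, 0.02, 0.05)    # illustrative angles in radians
r1, r2 = R[:, 0], R[:, 1]                # step a5: first and second columns
```

  Whatever the composition order, the result is a proper rotation matrix, and the target rotation parameters r1 and r2 are simply its first two columns.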
  • Step a6 Obtain a projection matrix according to the target rotation parameters r1 and r2, the normalization coefficient, the camera's internal parameter matrix, and the translation parameter t.
  • the projection matrix may be H in the above formula.
  • the normalization coefficient can be s in the above formula
  • the projection matrix H can be determined.
  • the camera's internal parameter matrix may be M. In the internal parameter matrix M, f x and f y can represent the focal lengths of the camera, and c x and c y can represent the position at which the optical axis of the camera lens passes through the imaging sensor; f x , f y , c x , and c y are known values, and no restriction is placed on them.
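The internal parameter matrix M and the projection matrix described in step a6 can be sketched as follows, assuming the standard pinhole model and the planar-homography form H = s · M · [r 1 r 2 t]; the function names are illustrative, not from the source.

```python
import numpy as np

def intrinsic_matrix(fx, fy, cx, cy):
    """Pinhole internal parameter matrix M: fx, fy are the focal lengths in
    pixels; (cx, cy) is where the optical axis crosses the imaging sensor."""
    return np.array([[fx, 0, cx],
                     [0, fy, cy],
                     [0,  0,  1]], dtype=float)

def projection_matrix(R, t, M, s=1.0):
    """Homography H = s * M * [r1 r2 t], where r1 and r2 are the first two
    columns of the target rotation matrix R, t is the translation parameter
    between the camera and the spatial plane, and s is the normalization
    coefficient."""
    r1, r2 = R[:, 0], R[:, 1]
    return s * M @ np.column_stack((r1, r2, t))
```

For example, with R equal to the identity matrix, t = (0, 0, 1), and s = 1, the homography H reduces to M itself.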
  • the head-up image can be converted into a bird's-eye view image according to the projection matrix.
  • the position information of each first pixel can be converted, according to the projection matrix H, into the position information of a second pixel (X, Y) in the bird's-eye view image, and the bird's-eye view image is obtained from the position information of each second pixel (X, Y); that is, the second pixels constitute the bird's-eye view image.
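A minimal sketch of this per-pixel conversion, assuming that mapping a head-up pixel (u, v) to a bird's-eye position (X, Y) uses the inverse of the projection matrix H followed by division by the homogeneous scale factor (the function name and the batched interface are illustrative):

```python
import numpy as np

def headup_to_birdseye(points_uv, H):
    """Map head-up image pixels (u, v) to bird's-eye positions (X, Y)
    via the inverse of the projection matrix H.
    points_uv: (N, 2) array of pixel coordinates."""
    H_inv = np.linalg.inv(H)
    # Lift to homogeneous coordinates (u, v, 1).
    pts = np.column_stack((points_uv, np.ones(len(points_uv))))
    mapped = (H_inv @ pts.T).T
    # Divide out the homogeneous scale to recover (X, Y).
    return mapped[:, :2] / mapped[:, 2:3]
```

With H equal to the identity matrix the points are unchanged, and a pure scaling homography divides the coordinates accordingly.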
  • an embodiment of the present invention also provides a driving assistance device 50 that includes at least one photographing device 51, a processor 52, and a memory 53; the driving assistance device 50 is provided on the vehicle and communicates with the vehicle; the memory 53 is used to store computer instructions executable by the processor;
  • the shooting device 51 is configured to collect a head-up image including a target object, and send the head-up image including the target object to the processor 52;
  • the processor 52 is configured to read computer instructions from the memory 53 to implement:
  • the head-up image is converted into a top-down image according to the relative posture.
  • the imaging device 51 is configured to acquire the head-up image in at least one direction of the front, rear, left, or right of the driving assistance device.
  • the processor 52 determines the space plane corresponding to the target object, it is specifically used to:
  • the spatial plane is determined according to the second posture information.
  • the processor 52 converts the head-up image into a top-down image according to the relative posture, it is specifically used to: obtain a projection matrix corresponding to the head-up image according to the relative posture;
  • the head-up image is converted into a top-down image according to the projection matrix.
  • the processor 52 obtains the projection matrix corresponding to the head-up image according to the relative posture, it is specifically used to: determine the target rotation matrix according to the relative posture;
  • the relative attitude includes the rotation angle of the shooting device on the pitch axis, the rotation angle on the roll axis, and the rotation angle on the yaw axis; the processor 52 is specifically used when determining the target rotation matrix according to the relative attitude : Determine the first rotation matrix according to the rotation angle of the shooting device on the pitch axis;
  • the target rotation matrix is determined according to the first rotation matrix, the second rotation matrix, and the third rotation matrix.
  • the processor 52 obtains a target rotation parameter according to the target rotation matrix, it is specifically used to:
  • the first rotation parameter and the second rotation parameter are determined as target rotation parameters.
  • the relative posture also includes translation parameters of the spatial plane and the shooting device; when acquiring the projection matrix according to the relative posture and the target rotation parameter, the processor 52 is specifically used to: obtain the projection matrix according to the target rotation parameter, the normalization coefficient, the internal parameter matrix of the photographing device, and the translation parameters of the spatial plane and the photographing device.
  • the processor 52 converts the head-up image into a top-down image according to the projection matrix, it is specifically used to: for each first pixel in the head-up image, convert the first pixel according to the projection matrix The position information of is converted into the position information of the second pixel in the overhead image;
  • the top view image is obtained according to the position information of each second pixel.
  • the processor 52 converts the position information of the first pixel into the position information of the second pixel in the bird's-eye view image according to the projection matrix, it is specifically used to:
  • an embodiment of the present invention also provides a vehicle equipped with a driving assistance system.
  • the vehicle includes at least one camera, a processor, and a memory.
  • the memory is used to store computer instructions executable by the processor; the shooting device is used to collect a head-up image containing a target object, and send the head-up image containing the target object to the processor;
  • the processor is configured to read computer instructions from the memory to implement:
  • the head-up image is converted into a top-down image according to the relative posture.
  • the photographing device is configured to acquire the head-up image in at least one direction of the front, rear, left, or right of the vehicle.
  • the processor determines the space plane corresponding to the target object, it is specifically used to: obtain first pose information of the vehicle; and determine the space plane according to the first pose information.
  • the processor converts the head-up image into a bird's-eye view image according to the relative posture, it is specifically used to: obtain a projection matrix corresponding to the head-up image according to the relative posture;
  • the head-up image is converted into a top-down image according to the projection matrix.
  • the processor obtains the projection matrix corresponding to the head-up image according to the relative posture, it is specifically used to: determine a target rotation matrix according to the relative posture;
  • the relative attitude includes the rotation angle of the shooting device on the pitch axis, the rotation angle on the roll axis, and the rotation angle on the yaw axis; when the processor determines the target rotation matrix according to the relative attitude, it is specifically used to: Determine the first rotation matrix according to the rotation angle of the shooting device on the pitch axis;
  • the target rotation matrix is determined according to the first rotation matrix, the second rotation matrix, and the third rotation matrix.
  • the processor obtains the target rotation parameter according to the target rotation matrix, it is specifically used to:
  • the first rotation parameter and the second rotation parameter are determined as target rotation parameters.
  • the relative posture also includes translation parameters of the spatial plane and the shooting device; the processor is specifically used when acquiring the projection matrix according to the relative posture and the target rotation parameter:
  • the projection matrix is obtained according to the target rotation parameter, the normalization coefficient, the internal parameter matrix of the shooting device, the space plane, and the translation parameter of the shooting device.
  • the processor converts the head-up image into a top-down image according to the projection matrix, it is specifically used to: for each first pixel in the head-up image, convert the position information of the first pixel into the position information of a second pixel in the overhead image according to the projection matrix;
  • the top view image is obtained according to the position information of each second pixel.
  • the processor converts the position information of the first pixel point into the position information of the second pixel point in the bird's-eye view image according to the projection matrix, it is specifically used to:
  • An embodiment of the present invention further provides a computer-readable storage medium, on which computer instructions are stored, and when the computer instructions are executed, the above image processing method is implemented.
  • the system, device, module or unit explained in the above embodiments may be implemented by a computer chip or entity, or by a product with a certain function.
  • a typical implementation device is a computer; specific forms of the computer may include a personal computer, a laptop computer, a cellular phone, a camera phone, a smart phone, a personal digital assistant, a media player, a navigation device, an email transceiver, a game console, a tablet computer, a wearable device, or any combination of these devices.
  • embodiments of the present invention may be provided as methods, systems, or computer program products. Therefore, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware. Moreover, embodiments of the present invention may take the form of computer program products implemented on one or more computer usable storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) containing computer usable program code.
  • each flow and/or block in the flowchart and/or block diagram and a combination of the flow and/or block in the flowchart and/or block diagram may be implemented by computer program instructions.
  • These computer program instructions can be provided to the processor of a general-purpose computer, special-purpose computer, embedded processor, or other programmable data processing device to produce a machine, so that the instructions executed by the processor of the computer or other programmable data processing device produce an apparatus for implementing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
  • these computer program instructions can also be stored in a computer readable memory that can guide a computer or other programmable data processing device to work in a specific manner, so that the instructions stored in the computer readable memory produce a manufactured product including an instruction device,
  • the instruction device implements the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
  • These computer program instructions can also be loaded onto a computer or other programmable data processing device, so that a series of operation steps are performed on the computer or other programmable device to produce computer-implemented processing, such that the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.


Abstract

An image processing method, an apparatus, and a computer readable storage medium. Said method comprises: acquiring, by means of a photographing device, a head-up image containing a target object; determining a spatial plane corresponding to the target object; determining the relative posture between the spatial plane and the photographing device; and converting, according to the relative posture, the head-up image into a top view image. Applying the embodiments of the present invention improves the accuracy of lane line detection and accurately determines the actual positional relationship between a lane line and a vehicle.

Description

图像处理方法、设备及计算机可读存储介质 Image processing method, device and computer readable storage medium

技术领域 Technical field
本发明实施例涉及图像处理技术领域,尤其是涉及一种图像处理方法、设备及计算机可读存储介质。Embodiments of the present invention relate to the field of image processing technology, and in particular, to an image processing method, device, and computer-readable storage medium.
背景技术Background technique
在自动驾驶以及ADAS(Advanced Driver Assistance Systems,高级驾驶辅助系统)等领域,车道线算法充当着重要的角色,车道线算法的准确性将直接影响系统的性能和可靠性,车道线算法是自动驾驶控车的重要前提。In fields such as autonomous driving and ADAS (Advanced Driver Assistance Systems), lane line algorithms play an important role; their accuracy directly affects the performance and reliability of the system, and they are an important prerequisite for vehicle control in autonomous driving.
车道线算法分为两个层面,一是车道线的检测,二是车道线的定位,即计算车道线与车实际的位置关系。传统的车道线检测算法,可以通过拍摄装置采集平视图像,并利用平视图像进行车道线的检测。传统的车道线定位算法,可以通过拍摄装置采集平视图像,并利用平视图像进行车道线的定位。The lane line algorithm is divided into two levels, one is the detection of the lane line, and the other is the positioning of the lane line, which is to calculate the actual positional relationship between the lane line and the car. The traditional lane line detection algorithm can collect the head-up image through the shooting device, and use the head-up image to detect the lane line. The traditional lane line positioning algorithm can collect the head-up image through the shooting device, and use the head-up image to locate the lane line.
在利用平视图像进行车道线的检测时,存在检测结果不准确的问题,例如,平视图像中车道线的大小和性质都是经过透视投影,有“近大远小”的效应,导致远处的有些路面标志物形状扭曲,无法正确检测。在利用平视图像进行车道线的定位时,存在定位结果不准确的问题,例如,路面标志物在平视图像中的形状和大小,与拍摄装置内参、拍摄装置和路面的位置关系,耦合在一起,无法直接通过平视图像中的位置,获知车道线的实际位置。When a head-up image is used for lane line detection, the detection results can be inaccurate. For example, the size and shape of lane lines in a head-up image are subject to perspective projection, which produces a "near objects look large, far objects look small" effect, so some distant road markings are distorted in shape and cannot be detected correctly. When a head-up image is used for lane line positioning, the positioning results can also be inaccurate. For example, the shape and size of a road marking in the head-up image are coupled with the camera's internal parameters and the positional relationship between the camera and the road surface, so the actual position of a lane line cannot be obtained directly from its position in the head-up image.
发明内容Summary of the invention
本发明提供一种图像处理方法、设备及计算机可读存储介质,可以提高车道线的检测的准确性,并准确的定位车道线与车实际的位置关系。The invention provides an image processing method, device and computer-readable storage medium, which can improve the accuracy of detection of lane lines and accurately locate the actual positional relationship between lane lines and vehicles.
本发明第一方面,提供一种驾驶辅助设备,所述驾驶辅助设备包括至少一个拍摄装置、处理器和存储器;所述驾驶辅助设备设置在车辆上,并与所述车辆通信;所述存储器,用于存储所述处理器可执行的计算机指令;According to a first aspect of the present invention, there is provided a driving assistance device including at least one photographing device, a processor, and a memory; the driving assistance device is provided on a vehicle and communicates with the vehicle; the memory is used for storing computer instructions executable by the processor;
所述拍摄装置,用于采集包含目标物体的平视图像,并将包含目标物体的所述平视图像发送给所述处理器;The photographing device is configured to collect a head-up image including a target object, and send the head-up image including the target object to the processor;
所述处理器,用于从所述存储器读取计算机指令以实现:The processor is configured to read computer instructions from the memory to implement:
从所述拍摄装置获取包含目标物体的平视图像;Acquiring a head-up image containing a target object from the shooting device;
确定与所述目标物体对应的空间平面;Determine the space plane corresponding to the target object;
确定所述空间平面和所述拍摄装置的相对姿态;Determine the relative posture of the space plane and the shooting device;
根据所述相对姿态将所述平视图像转换为俯视图像。The head-up image is converted into a top-down image according to the relative posture.
本发明实施例第二方面,提供一种搭载驾驶辅助系统的车辆,所述车辆包括至少一个拍摄装置、处理器和存储器,所述存储器,用于存储所述处理器可执行的计算机指令;所述拍摄装置,用于采集包含目标物体的平视图像,并将包含目标物体的所述平视图像发送给所述处理器;According to a second aspect of the embodiments of the present invention, there is provided a vehicle equipped with a driving assistance system. The vehicle includes at least one camera, a processor, and a memory. The memory is used to store computer instructions executable by the processor. The shooting device is used to collect a head-up image containing a target object, and send the head-up image containing a target object to the processor;
所述处理器,用于从所述存储器读取计算机指令以实现:The processor is configured to read computer instructions from the memory to implement:
从所述拍摄装置获取包含目标物体的平视图像;Acquiring a head-up image containing a target object from the shooting device;
确定与所述目标物体对应的空间平面;Determine the space plane corresponding to the target object;
确定所述空间平面和所述拍摄装置的相对姿态;Determine the relative posture of the space plane and the shooting device;
根据所述相对姿态将所述平视图像转换为俯视图像。The head-up image is converted into a top-down image according to the relative posture.
本发明实施例第三方面,提供一种图像处理方法,应用于驾驶辅助系统,所述驾驶辅助系统包括至少一个拍摄装置,所述方法包括:According to a third aspect of the embodiments of the present invention, an image processing method is provided, which is applied to a driving assistance system. The driving assistance system includes at least one photographing device. The method includes:
通过所述拍摄装置获取包含目标物体的平视图像;Acquiring the head-up image containing the target object through the shooting device;
确定与所述目标物体对应的空间平面;Determine the space plane corresponding to the target object;
确定所述空间平面和所述拍摄装置的相对姿态;Determine the relative posture of the space plane and the shooting device;
根据所述相对姿态将所述平视图像转换为俯视图像。The head-up image is converted into a top-down image according to the relative posture.
本发明实施例第四方面,提供一种计算机可读存储介质,所述计算机可读存储介质上存储有计算机指令,所述计算机指令被执行时,实现上述方法。According to a fourth aspect of the embodiments of the present invention, a computer-readable storage medium is provided. Computer instructions are stored on the computer-readable storage medium. When the computer instructions are executed, the above method is implemented.
基于上述技术方案,本发明实施例中,可以提高车道线的检测的准确性,并准确的定位车道线与车实际的位置关系。具体的,可以将平视图像转换为 俯视图像,并利用俯视图像进行车道线的检测,从而提高车道线检测结果的准确性。可以将平视图像转换为俯视图像,并利用俯视图像进行车道线的定位,从而提高车道线定位结果的准确性,准确获知车道线的实际位置。Based on the above technical solution, in the embodiments of the present invention, the accuracy of the lane line detection can be improved, and the actual positional relationship between the lane line and the vehicle can be accurately located. Specifically, the head-up image can be converted into a bird's-eye view image, and the bird's-eye view image can be used to detect the lane line, thereby improving the accuracy of the lane line detection result. The head-up image can be converted into a bird's-eye view image, and the bird's-eye view image is used to locate the lane line, thereby improving the accuracy of the lane line positioning result and accurately knowing the actual position of the lane line.
附图说明BRIEF DESCRIPTION
为了更加清楚地说明本发明实施例或者现有技术中的技术方案,下面将对本发明实施例或者现有技术描述中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图仅仅是本发明中记载的一些实施例,对于本领域普通技术人员来讲,还可以根据本发明实施例的这些附图获得其它的附图。In order to more clearly explain the embodiments of the present invention or the technical solutions in the prior art, the following will briefly introduce the drawings used in the embodiments of the present invention or the description of the prior art. Obviously, the drawings in the following description It is only some of the embodiments described in the present invention. For those of ordinary skill in the art, other drawings can also be obtained from these drawings of the embodiments of the present invention.
图1是一种实施方式中的图像处理方法的实施例示意图;FIG. 1 is a schematic diagram of an example of an image processing method in an embodiment;
图2是另一种实施方式中的图像处理方法的实施例示意图;2 is a schematic diagram of an example of an image processing method in another embodiment;
图3是另一种实施方式中的图像处理方法的实施例示意图;3 is a schematic diagram of an example of an image processing method in another embodiment;
图4A是一种实施方式中图像处理方法的平视图像和俯视图像的示意图;4A is a schematic diagram of a head-up image and a top-down image of an image processing method in an embodiment;
图4B是一种实施方式中目标物体、空间平面和相机的关系示意图;4B is a schematic diagram of the relationship between the target object, the space plane and the camera in an embodiment;
图5是一种实施方式中的驾驶辅助设备的实施例框图。5 is a block diagram of an example of a driving assistance device in an embodiment.
具体实施方式detailed description
下面将结合本发明实施例中的附图,对本发明实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例仅仅是本发明一部分实施例,而不是全部的实施例。基于本发明中的实施例,本领域普通技术人员在没有做出创造性劳动前提下所获得的所有其他实施例,都属于本发明保护的范围。另外,在不冲突的情况下,下述的实施例及实施例中的特征可以相互组合。The technical solutions in the embodiments of the present invention will be described clearly and completely in conjunction with the drawings in the embodiments of the present invention. Obviously, the described embodiments are only a part of the embodiments of the present invention, but not all of the embodiments. Based on the embodiments of the present invention, all other embodiments obtained by a person of ordinary skill in the art without creative work fall within the protection scope of the present invention. In addition, in the case of no conflict, the following embodiments and the features in the embodiments can be combined with each other.
本发明使用的术语仅仅是出于描述特定实施例的目的,而非限制本发明。本发明和权利要求书所使用的单数形式的“一种”、“所述”和“该”也旨在包括多数形式,除非上下文清楚地表示其它含义。应当理解,本文中使用的术语“和 /或”是指包含一个或者多个相关联的列出项目的任何或所有可能的组合。The terminology used in the present invention is for the purpose of describing specific embodiments only, and does not limit the present invention. The singular forms "a", "said" and "the" used in the present invention and claims are also intended to include the majority forms unless the context clearly indicates other meanings. It should be understood that the term "and/or" as used herein refers to any or all possible combinations including one or more associated listed items.
尽管在本发明可能采用术语第一、第二、第三等来描述各种信息,但这些信息不应限于这些术语。这些术语用来将同一类型的信息彼此区分开。例如,在不脱离本发明范围的情况下,第一信息也可以被称为第二信息,类似地,第二信息也可以被称为第一信息。取决于语境,此外,所使用的词语“如果”可以被解释成为“在……时”,或者“当……时”,或者“响应于确定”。Although the terms first, second, third, etc. may be used to describe various information in the present invention, the information should not be limited to these terms. These terms are used to distinguish the same type of information from each other. For example, without departing from the scope of the present invention, the first information may also be referred to as second information, and similarly, the second information may also be referred to as first information. Depending on the context, in addition, the word "if" can be interpreted as "when", or "when", or "in response to a determination".
实施例1:Example 1:
本发明实施例中提出一种图像处理方法,该方法可以应用于驾驶辅助系统,所述驾驶辅助系统可以包括至少一个拍摄装置。其中,所述驾驶辅助系统可以搭载于移动平台(如无人车辆、普通车辆等),或者,所述驾驶辅助系统还可以搭载于驾驶辅助设备(如ADAS设备等),且所述驾驶辅助设备设置于移动平台(如无人车辆、普通车辆等)上,当然,上述只是两个应用场景的示例,驾驶辅助系统还可以搭载于其它载具上,对此不做限制。An embodiment of the present invention proposes an image processing method, which can be applied to a driving assistance system, and the driving assistance system may include at least one photographing device. Wherein, the driving assistance system may be mounted on a mobile platform (such as unmanned vehicles, ordinary vehicles, etc.), or the driving assistance system may also be mounted on driving assistance equipment (such as ADAS equipment, etc.), and the driving assistance equipment It is installed on a mobile platform (such as unmanned vehicles, ordinary vehicles, etc.). Of course, the above is only an example of two application scenarios, and the driving assistance system can also be carried on other vehicles, which is not limited.
参见图1所示,为图像处理方法的流程示意图,该方法可以包括:Referring to FIG. 1, which is a schematic flowchart of an image processing method, the method may include:
步骤101,通过拍摄装置获取包含目标物体的平视图像。Step 101: Obtain a head-up image containing a target object through a camera.
具体的,若驾驶辅助系统搭载于移动平台,则所述至少一个拍摄装置设置于移动平台上,可以通过所述拍摄装置获取所述移动平台的前方、后方、左方或右方中的至少一个方向的平视图像,平视图像包含目标物体。Specifically, if the driving assistance system is mounted on a mobile platform, the at least one shooting device is installed on the mobile platform, and at least one of the front, rear, left, or right of the mobile platform can be acquired through the shooting device The head-up image of the direction, the head-up image contains the target object.
若驾驶辅助系统搭载于驾驶辅助设备,则所述至少一个拍摄装置设置于驾驶辅助设备,可以通过所述拍摄装置获取所述驾驶辅助设备的前方、后方、左方或右方中的至少一个方向的平视图像,平视图像包含目标物体。If the driving assistance system is mounted on the driving assistance device, the at least one imaging device is provided in the driving assistance device, and at least one of the front, rear, left, or right directions of the driving assistance device can be acquired through the imaging device The head-up image contains the target object.
步骤102,确定与所述目标物体对应的空间平面。Step 102: Determine a space plane corresponding to the target object.
具体的,若驾驶辅助系统搭载于移动平台,则可以获取所述移动平台的第一姿态信息(即移动平台当前的姿态信息),并根据所述第一姿态信息确定所述空间平面。其中,所述空间平面是指,目标物体(如路面或者地面)在世界坐标系下的位置平面,也就是空间平面在世界坐标系下的位置。Specifically, if the driving assistance system is mounted on a mobile platform, the first posture information of the mobile platform (that is, the current posture information of the mobile platform) may be acquired, and the space plane may be determined according to the first posture information. The space plane refers to the position plane of the target object (such as road surface or ground) in the world coordinate system, that is, the position of the space plane in the world coordinate system.
若驾驶辅助系统搭载于驾驶辅助设备,则可以获取所述驾驶辅助设备的第二姿态信息(即驾驶辅助设备当前的姿态信息),并根据所述第二姿态信息确定所述空间平面。其中,所述空间平面是指,目标物体(如路面或者地面)在世界坐标系下的位置平面,也就是空间平面在世界坐标系下的位置。If the driving assistance system is mounted on the driving assistance device, the second posture information of the driving assistance device (that is, the current posture information of the driving assistance device) may be acquired, and the space plane may be determined according to the second posture information. The space plane refers to the position plane of the target object (such as road surface or ground) in the world coordinate system, that is, the position of the space plane in the world coordinate system.
步骤103,确定所述空间平面和所述拍摄装置的相对姿态。Step 103: Determine the relative posture of the space plane and the shooting device.
在一个例子中,所述相对姿态是指,所述拍摄装置相对于空间平面(如路面或者地面)的相对姿态,也可以理解为所述拍摄装置相对于空间平面的外参(即位置关系)。例如,所述相对姿态可以包括但不限于:所述拍摄装置相对于空间平面的俯仰角(pitch),所述拍摄装置相对于空间平面的横滚角(roll),所述拍摄装置相对于空间平面的偏航角(yaw),所述拍摄装置相对于空间平面的高度,所述拍摄装置相对于空间平面的平移参数。In one example, the relative posture refers to the relative posture of the shooting device relative to the space plane (such as road surface or ground), and can also be understood as the external parameter (that is, positional relationship) of the shooting device relative to the space plane . For example, the relative posture may include, but is not limited to: a pitch angle of the camera relative to the space plane, a roll angle of the camera relative to the space plane, and the camera relative to space The yaw of the plane, the height of the camera relative to the space plane, and the translation parameter of the camera relative to the space plane.
步骤104,根据所述相对姿态将平视图像转换为俯视图像。Step 104: Convert the head-up image to the top-down image according to the relative posture.
具体的,可以根据所述相对姿态获取平视图像对应的投影矩阵;例如,可以根据所述相对姿态确定目标旋转矩阵,根据目标旋转矩阵获取目标旋转参数,并根据所述相对姿态和目标旋转参数获取平视图像对应的投影矩阵。然后,可以根据所述投影矩阵将平视图像转换为俯视图像。Specifically, the projection matrix corresponding to the head-up image can be obtained according to the relative pose; for example, the target rotation matrix can be determined according to the relative pose, the target rotation parameter can be obtained according to the target rotation matrix, and the relative pose and the target rotation parameter can be obtained The projection matrix corresponding to the head-up image. Then, the head-up image can be converted into a bird's-eye view image according to the projection matrix.
其中,所述相对姿态包括拍摄装置在俯仰轴的旋转角度(即拍摄装置相对于空间平面的俯仰角)、在横滚轴的旋转角度(即拍摄装置相对于空间平面的横滚角)、在偏航轴的旋转角度(即拍摄装置相对于空间平面的偏航角);基于此,根据所述相对姿态确定目标旋转矩阵,可以包括但不限于:根据拍摄装置在俯仰轴的旋转角度确定第一旋转矩阵;根据拍摄装置在横滚轴的旋转角度确定第二旋转矩阵;根据拍摄装置在偏航轴的旋转角度确定第三旋转矩阵;根据第一旋转矩阵、第二旋转矩阵和第三旋转矩阵确定目标旋转矩阵。Wherein, the relative attitude includes the rotation angle of the camera on the pitch axis (that is, the pitch angle of the camera relative to the space plane), the rotation angle on the roll axis (that is, the roll angle of the camera relative to the space plane), and The rotation angle of the yaw axis (that is, the yaw angle of the camera relative to the space plane); based on this, the target rotation matrix is determined according to the relative attitude, which may include, but is not limited to: determining the first angle according to the rotation angle of the camera on the pitch axis A rotation matrix; determine the second rotation matrix according to the rotation angle of the camera on the roll axis; determine the third rotation matrix according to the rotation angle of the camera on the yaw axis; according to the first rotation matrix, the second rotation matrix, and the third rotation The matrix determines the target rotation matrix.
其中,目标旋转矩阵可以包括三个列向量,根据目标旋转矩阵获取目标旋转参数,可以包括但不限于:将所述目标旋转矩阵中的第一个列向量确定为第一旋转参数,并将所述目标旋转矩阵中的第二个列向量确定为第二旋转参数;将所述第一旋转参数和所述第二旋转参数确定为所述目标旋转参数。The target rotation matrix may include three column vectors, and the target rotation parameters may be obtained according to the target rotation matrix, which may include but not limited to: determine the first column vector in the target rotation matrix as the first rotation parameter, and determine The second column vector in the target rotation matrix is determined as the second rotation parameter; the first rotation parameter and the second rotation parameter are determined as the target rotation parameter.
其中,所述相对姿态还包括空间平面和拍摄装置的平移参数(即拍摄装置相对于空间平面的平移参数),根据所述相对姿态和目标旋转参数获取投影矩阵,可以包括但不限于:根据所述目标旋转参数、归一化系数、拍摄装置的内参矩阵、空间平面和拍摄装置的平移参数,获取所述投影矩阵。Wherein, the relative posture also includes a translation parameter of the space plane and the shooting device (that is, a translation parameter of the shooting device relative to the space plane), and obtaining a projection matrix according to the relative posture and the target rotation parameter may include but not limited to: The target rotation parameter, the normalization coefficient, the internal parameter matrix of the shooting device, the space plane, and the translation parameter of the shooting device are obtained, and the projection matrix is obtained.
在上述实施例中,根据所述投影矩阵将平视图像转换为俯视图像,可以包括但不限于:针对平视图像中的每个第一像素点,根据所述投影矩阵将所述第一像素点的位置信息转换为俯视图像中的第二像素点的位置信息;基于此,可以根据每个第二像素点的位置信息获取所述俯视图像。In the above embodiment, converting the head-up image into a bird's-eye view image according to the projection matrix may include, but is not limited to: for each first pixel in the head-up image, the The position information is converted into position information of the second pixel in the overhead image; based on this, the overhead image can be obtained according to the position information of each second pixel.
其中,根据所述投影矩阵将所述第一像素点的位置信息转换为俯视图像中的第二像素点的位置信息,可以包括但不限于:获取所述投影矩阵对应的逆矩阵,并根据所述逆矩阵将所述第一像素点的位置信息转换为俯视图像中的第二像素点的位置信息,即每个第一像素点对应一个第二像素点。Wherein, the position information of the first pixel point is converted into the position information of the second pixel point in the bird's-eye view image according to the projection matrix, which may include but is not limited to: obtaining the inverse matrix corresponding to the projection matrix, and according to the The inverse matrix converts the position information of the first pixel point to the position information of the second pixel point in the bird's-eye view image, that is, each first pixel point corresponds to a second pixel point.
In one example, after the head-up image is converted into a bird's-eye view image according to the relative posture, if the target object is a lane line, lane line detection can be performed based on the bird's-eye view image.
In one example, after the head-up image is converted into a bird's-eye view image according to the relative posture, if the target object is a lane line, lane line positioning can be performed based on the bird's-eye view image.
In summary, lane line detection can be performed based on the bird's-eye view image (rather than the head-up image), which improves the accuracy of lane line detection; and/or lane line positioning can be performed based on the bird's-eye view image (rather than the head-up image), which improves the accuracy of lane line positioning.
Based on the above technical solution, in the embodiments of the present invention, the accuracy of lane line detection can be improved, and the actual positional relationship between the lane line and the vehicle can be accurately determined. Specifically, the head-up image can be converted into a bird's-eye view image, and the bird's-eye view image can be used for lane line detection, thereby improving the accuracy of the lane line detection result. The head-up image can likewise be converted into a bird's-eye view image and the bird's-eye view image used for lane line positioning, thereby improving the accuracy of the lane line positioning result and accurately determining the actual position of the lane line.
Embodiment 2:
An embodiment of the present invention provides an image processing method that can be applied to a driving assistance system, where the driving assistance system may include at least one photographing device. The driving assistance system can be mounted on a mobile platform (such as an unmanned vehicle or an ordinary vehicle). Of course, the above is only an example of an application scenario of the present invention; the driving assistance system can also be mounted on other carriers, which is not limited herein.
Referring to FIG. 2, which is a schematic flowchart of the image processing method, the method may include the following steps.
Step 201: obtain a head-up image containing a target object through the photographing device.
Specifically, a head-up image in at least one of the directions in front of, behind, to the left of, or to the right of the mobile platform may be acquired through the photographing device, and the head-up image contains the target object.
Step 202: determine a space plane corresponding to the target object according to first posture information of the mobile platform.
Specifically, the first posture information of the mobile platform may be obtained, and the space plane may be determined according to the first posture information. The space plane refers to the position plane of the target object (such as a road surface or the ground) in the world coordinate system, that is, the position of the space plane in the world coordinate system.
In one example, regarding the process of obtaining the first posture information of the mobile platform: the mobile platform may include a posture sensor, which collects the first posture information of the mobile platform and provides it to the driving assistance system, so that the driving assistance system obtains the first posture information of the mobile platform. Of course, the first posture information of the mobile platform can also be obtained in other ways, which is not limited herein.
The posture sensor is a high-performance three-dimensional motion posture measurement system, which may include a three-axis gyroscope, a three-axis accelerometer (i.e., an IMU), a three-axis electronic compass, and other auxiliary motion sensors. Through an embedded processor it outputs calibrated sensor data such as angular velocity, acceleration, and magnetic data, and the posture information can then be measured based on the sensor data; the manner of obtaining the posture information is not limited herein.
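The disclosure leaves the fusion algorithm open. As a minimal illustration of how posture angles can be recovered from such sensor data, the pitch and roll of a stationary platform can be estimated from the gravity direction measured by the three-axis accelerometer; this is a conventional technique offered here as a sketch, not a detail taken from the patent.

```python
import math

def pitch_roll_from_accel(ax, ay, az):
    """Estimate pitch and roll (radians) of a stationary platform from a
    three-axis accelerometer reading, using gravity as the vertical reference."""
    pitch = math.atan2(-ax, math.sqrt(ay * ay + az * az))
    roll = math.atan2(ay, az)
    return pitch, roll

# A level platform measures gravity only along the z axis (units of g),
# so both angles come out as zero.
pitch, roll = pitch_roll_from_accel(0.0, 0.0, 1.0)
print(pitch, roll)  # 0.0 0.0
```

In a real system these accelerometer-only angles would be fused with the gyroscope and compass data (e.g., by a complementary or Kalman filter) to suppress noise and recover yaw.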
In one example, regarding the process of determining the space plane corresponding to the target object according to the first posture information: after the first posture information of the mobile platform is obtained, the space plane can be determined according to the first posture information. This process can follow a conventional method and will not be repeated here.
Step 203: determine the relative posture of the space plane and the photographing device.
In one example, the relative posture refers to the posture of the photographing device relative to the space plane, and can also be understood as the extrinsic parameters (i.e., the positional relationship) of the photographing device relative to the space plane. For example, the relative posture may include, but is not limited to: the pitch angle of the photographing device relative to the space plane, the roll angle of the photographing device relative to the space plane, the yaw angle of the photographing device relative to the space plane, the height of the photographing device relative to the space plane, and the translation of the photographing device relative to the space plane.
Step 204: obtain a projection matrix corresponding to the head-up image according to the relative posture.
Specifically, a target rotation matrix may be determined according to the relative posture, a target rotation parameter may be obtained according to the target rotation matrix, and the projection matrix corresponding to the head-up image may be obtained according to the relative posture and the target rotation parameter. The process of obtaining the projection matrix is described in detail in Embodiment 4 below.
Step 205: convert the head-up image into a bird's-eye view image according to the projection matrix.
Specifically, for each first pixel in the head-up image, the position information of the first pixel is converted into the position information of a second pixel in the bird's-eye view image according to the projection matrix; on this basis, the bird's-eye view image can be obtained according to the position information of each second pixel.
Converting the position information of the first pixel into the position information of the second pixel in the bird's-eye view image according to the projection matrix may include, but is not limited to: obtaining the inverse matrix of the projection matrix, and converting the position information of the first pixel into the position information of the second pixel in the bird's-eye view image according to the inverse matrix; that is, each first pixel corresponds to one second pixel.
Embodiment 3:
An embodiment of the present invention provides an image processing method that can be applied to a driving assistance system, where the driving assistance system may include at least one photographing device. The driving assistance system may also be carried on a driving assistance apparatus (such as an ADAS device), and the driving assistance apparatus is installed on a mobile platform (such as an unmanned vehicle or an ordinary vehicle). Of course, the above is only an example of an application scenario of the present invention; the driving assistance system can also be mounted on other carriers, which is not limited herein.
Referring to FIG. 3, which is a schematic flowchart of the image processing method, the method may include the following steps.
Step 301: obtain a head-up image containing a target object through the photographing device.
Specifically, a head-up image in at least one of the directions in front of, behind, to the left of, or to the right of the driving assistance apparatus may be acquired through the photographing device, and the head-up image contains the target object.
Step 302: determine a space plane corresponding to the target object according to second posture information of the driving assistance apparatus. The space plane refers to the position plane of the target object, i.e., the road surface or the ground, in the world coordinate system.
Specifically, the second posture information of the driving assistance apparatus may be obtained, and the space plane may be determined according to the second posture information. The driving assistance apparatus may include a posture sensor, which collects the second posture information of the driving assistance apparatus and provides it to the driving assistance system, so that the driving assistance system obtains the second posture information of the driving assistance apparatus. Alternatively, the mobile platform may include a posture sensor, which collects the first posture information of the mobile platform and provides it to the driving assistance system; the driving assistance system may then take the first posture information of the mobile platform as the second posture information of the driving assistance apparatus, thereby obtaining the second posture information of the driving assistance apparatus. Of course, the second posture information can also be obtained in other ways, which is not limited herein.
Step 303: determine the relative posture of the space plane and the photographing device.
In one example, the relative posture refers to the posture of the photographing device relative to the space plane, and can also be understood as the extrinsic parameters (i.e., the positional relationship) of the photographing device relative to the space plane. For example, the relative posture may include, but is not limited to: the pitch angle of the photographing device relative to the space plane, the roll angle of the photographing device relative to the space plane, the yaw angle of the photographing device relative to the space plane, the height of the photographing device relative to the space plane, and the translation of the photographing device relative to the space plane.
Step 304: obtain a projection matrix corresponding to the head-up image according to the relative posture.
Specifically, a target rotation matrix may be determined according to the relative posture, a target rotation parameter may be obtained according to the target rotation matrix, and the projection matrix corresponding to the head-up image may be obtained according to the relative posture and the target rotation parameter. The process of obtaining the projection matrix is described in detail in Embodiment 4 below.
Step 305: convert the head-up image into a bird's-eye view image according to the projection matrix.
Specifically, for each first pixel in the head-up image, the position information of the first pixel is converted into the position information of a second pixel in the bird's-eye view image according to the projection matrix; on this basis, the bird's-eye view image can be obtained according to the position information of each second pixel.
Converting the position information of the first pixel into the position information of the second pixel in the bird's-eye view image according to the projection matrix may include, but is not limited to: obtaining the inverse matrix of the projection matrix, and converting the position information of the first pixel into the position information of the second pixel in the bird's-eye view image according to the inverse matrix; that is, each first pixel corresponds to one second pixel.
Embodiment 4: In the following description, the mobile platform is a vehicle and the photographing device is a camera.
A traditional lane line algorithm collects a head-up image through the camera and uses the head-up image for lane line detection and positioning. Referring to FIG. 4A, the image on the left is a schematic diagram of a head-up image: the road marking arrows and the lane lines are distorted, and their shapes depend on the position of the vehicle. Obviously, lane line detection and positioning cannot be performed correctly based on the head-up image on the left of FIG. 4A. Unlike the above approach, in this embodiment the head-up image is converted into a bird's-eye view image, and the bird's-eye view image is used for lane line detection and positioning. Referring to FIG. 4A, the image on the right is a schematic diagram of a bird's-eye view image: the road marking arrows and the lane lines are restored to their true scale, the positions of points on the road surface correspond directly to their real positions, and the positional relationship between any given point and the vehicle can be obtained directly, which satisfies the requirements of ADAS functions and autonomous driving functions. Obviously, lane line detection and positioning can be performed correctly based on the bird's-eye view image on the right of FIG. 4A.
Moreover, converting the head-up image into a bird's-eye view image improves the accuracy of road marking recognition and provides a positioning method for road markings (including lane lines), thereby assisting positioning.
In one example, the conversion from the head-up image to the bird's-eye view image can be implemented based on the geometric knowledge of computer vision, that is, based on a homography. Specifically, assuming that the head-up image is an image of the space plane and the bird's-eye view image is the image plane, the shape of the bird's-eye view image depends on the true shape of the space plane shown in the head-up image, the intrinsic parameters of the camera, and the extrinsic parameters of the camera (i.e., the positional relationship of the camera relative to the space plane). Therefore, the pixels in the head-up image can be mapped directly to the bird's-eye view image according to the intrinsic and extrinsic parameters of the camera, so that they correspond to the true scale of the space plane; this improves the accuracy of lane line recognition and provides a means of accurate lane line positioning.
Referring to FIG. 4B, which is a schematic diagram of the relationship among the target object, the space plane, and the camera: the space plane is the plane containing the target object, and the plane in which the camera lies may differ from the space plane. For example, the target object may be a road (road surface or ground) containing lane lines as shown in the figure, and the space plane may be the plane in which the target object, i.e., the road surface, lies. The picture actually captured by the camera is shown in the lower right corner of FIG. 4B, which corresponds to the head-up image on the left of FIG. 4A.
In one example, the homography can be expressed by the following formula, where (u, v) is a pixel in the head-up image, i.e., a pixel of the space plane; s is the normalization coefficient; M is the camera intrinsic parameter matrix; [r1 r2 t] together form the extrinsic parameters of the camera relative to the space plane, i.e., the positional relationship, in which r1, r2, and r3 are 3x1 column vectors that constitute a rotation matrix, and t is a 3x1 column vector representing the translation from the camera to the object plane; and (X, Y) is a pixel in the bird's-eye view image, i.e., a pixel in its image coordinate system.

$$ s \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = M \begin{bmatrix} r_1 & r_2 & r_3 & t \end{bmatrix} \begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix} $$
In the above formula, a point corresponding to the bird's-eye view image can be written as (X, Y, Z); however, since the target object lies in a plane, Z is 0, and therefore the product of r3 and Z is 0. That is to say, after transforming the homography formula, r3 and Z can be eliminated, and the following formula is finally obtained.

$$ s \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = M \begin{bmatrix} r_1 & r_2 & t \end{bmatrix} \begin{bmatrix} X \\ Y \\ 1 \end{bmatrix} $$
In the above formula, letting H = sM[r1 r2 t], the formula can be converted into the following form:

$$ \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = H \begin{bmatrix} X \\ Y \\ 1 \end{bmatrix} $$
Further, multiplying both sides of the formula by the inverse matrix of H yields the following form:

$$ \begin{bmatrix} X \\ Y \\ 1 \end{bmatrix} = H^{-1} \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} $$
It can be seen from the above formula that (X, Y) can be obtained once H and (u, v) are known.
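As a concrete illustration of this last relation, the sketch below applies the inverse of a hypothetical H to one head-up pixel. The numbers in H are placeholders invented for the example; a real H is built from s, M, r1, r2, and t as described in the steps that follow.

```python
import numpy as np

# Hypothetical projection matrix H (placeholder values for illustration only).
H = np.array([[2.0, 0.0, 100.0],
              [0.0, 2.0,  50.0],
              [0.0, 0.0,   1.0]])

def headup_to_topdown(u, v, H):
    """Apply [X, Y, 1]^T = H^{-1} [u, v, 1]^T and dehomogenize."""
    X, Y, W = np.linalg.inv(H) @ np.array([u, v, 1.0])
    return float(X / W), float(Y / W)

X, Y = headup_to_topdown(120.0, 60.0, H)
print(X, Y)
```

For this particular H, the head-up pixel (120, 60) maps to the top-down point (10, 5), since the inverse undoes the 2x scale and the (100, 50) offset.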
In the above application scenario, the image processing method in the embodiment of the present invention may include the following steps.
Step a1: obtain a head-up image containing the target object through the camera. Each pixel in the head-up image is called a first pixel, and each first pixel can be the (u, v) above.
Step a2: determine the space plane corresponding to the target object. The space plane refers to the position plane of the target object, i.e., the road surface or the ground on which it lies, in the world coordinate system.
Step a3: determine the relative posture of the space plane and the camera.
The relative posture can be the extrinsic parameters (i.e., the positional relationship) of the camera relative to the space plane, such as the pitch angle of the camera relative to the space plane, the roll angle of the camera relative to the space plane, the yaw angle of the camera relative to the space plane, the height of the camera relative to the space plane, and the translation parameter of the camera relative to the space plane, i.e., t in the above formula.
Step a4: determine the target rotation matrix according to the relative posture.
For example, based on the above relative posture, the pitch angle of the camera relative to the space plane, the roll angle of the camera relative to the space plane, and the yaw angle of the camera relative to the space plane can be determined. Further, the first rotation matrix R_x can be determined from the camera's rotation angle about the pitch axis according to the following formula; the second rotation matrix R_y can be determined from the camera's rotation angle about the roll axis according to the following formula; and the third rotation matrix R_z can be determined from the camera's rotation angle about the yaw axis according to the following formula.

$$ R_x = \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos(\mathrm{pitch}) & -\sin(\mathrm{pitch}) \\ 0 & \sin(\mathrm{pitch}) & \cos(\mathrm{pitch}) \end{bmatrix} $$

$$ R_y = \begin{bmatrix} \cos(\mathrm{roll}) & 0 & \sin(\mathrm{roll}) \\ 0 & 1 & 0 \\ -\sin(\mathrm{roll}) & 0 & \cos(\mathrm{roll}) \end{bmatrix} $$

$$ R_z = \begin{bmatrix} \cos(\mathrm{yaw}) & -\sin(\mathrm{yaw}) & 0 \\ \sin(\mathrm{yaw}) & \cos(\mathrm{yaw}) & 0 \\ 0 & 0 & 1 \end{bmatrix} $$
After the first rotation matrix, the second rotation matrix, and the third rotation matrix are obtained, the target rotation matrix R can be determined from them by composition, for example:

$$ R = R_z \, R_y \, R_x $$
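The three axis rotations and their composition of step a4 can be sketched as below. The composition order R = R_z * R_y * R_x shown here is one common convention and is an assumption on our part; the original only states that R is determined from the three matrices.

```python
import numpy as np

def rot_x(pitch):
    """First rotation matrix R_x, rotation about the pitch axis."""
    c, s = np.cos(pitch), np.sin(pitch)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(roll):
    """Second rotation matrix R_y, rotation about the roll axis."""
    c, s = np.cos(roll), np.sin(roll)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot_z(yaw):
    """Third rotation matrix R_z, rotation about the yaw axis."""
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def target_rotation(pitch, roll, yaw):
    """Compose the target rotation matrix R (assumed order R_z * R_y * R_x)."""
    return rot_z(yaw) @ rot_y(roll) @ rot_x(pitch)

R = target_rotation(0.0, 0.0, 0.0)
print(R)  # identity matrix when all three angles are zero
```

A quick sanity check: a 90-degree yaw rotation sends the x axis onto the y axis, as expected of R_z.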
Step a5: obtain the target rotation parameters according to the target rotation matrix.
For example, the first column vector of the target rotation matrix R can be determined as the first rotation parameter, the second column vector of the target rotation matrix R can be determined as the second rotation parameter, and the first rotation parameter and the second rotation parameter together are determined as the target rotation parameters. The first rotation parameter is r1 in the above formula, a 3x1 column vector; the second rotation parameter is r2 in the above formula, also a 3x1 column vector.
Step a6: obtain the projection matrix according to the target rotation parameters r1 and r2, the normalization coefficient, the camera's intrinsic parameter matrix, and the translation parameter t; the projection matrix can be H in the above formula.
The normalization coefficient can be s in the above formula, and the camera's intrinsic parameter matrix can be M in the above formula. Referring to the formula H = sM[r1 r2 t], when the target rotation parameters r1 and r2, the normalization coefficient s, the camera's intrinsic parameter matrix M, and the translation parameter t are all known, the projection matrix H can be determined.
在上述公式中,相机的内参矩阵M可以为
Figure PCTCN2018124726-appb-000009
在上述的内参矩阵M中,f x,f y表征的可以是相机的焦距,c x,c y表征的可以是相机镜头光轴穿过成像传感器的位置f x,f y,c x,c y均为已知值,对此不做限制。
In the above formula, the camera's internal parameter matrix M can be
Figure PCTCN2018124726-appb-000009
In the aforementioned internal reference matrix M, f x , f y can represent the focal length of the camera, c x , c y can represent the position of the camera lens optical axis through the imaging sensor f x , f y , c x , c y is a known value, there is no restriction on this.
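Putting steps a4 through a6 together, the projection matrix H = sM[r1 r2 t] can be assembled as below. All numeric values here are hypothetical placeholders; in practice f_x, f_y, c_x, c_y come from camera calibration, and R and t from the measured relative posture.

```python
import numpy as np

# Hypothetical intrinsic parameter matrix M (placeholder calibration values).
M = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

R = np.eye(3)                  # target rotation matrix (placeholder, step a4)
r1, r2 = R[:, 0], R[:, 1]      # first and second column vectors of R (step a5)
t = np.array([0.0, 0.0, 1.5])  # translation parameter, e.g. camera height
s = 1.0                        # normalization coefficient

# Step a6: H = s * M * [r1 r2 t]
H = s * M @ np.column_stack((r1, r2, t))
print(H)
```

The result is a 3x3 matrix, as required for mapping homogeneous pixel coordinates; its inverse is what step a7 applies to each first pixel.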
Step a7: convert the head-up image into a bird's-eye view image according to the projection matrix.
Specifically, for each first pixel (u, v) in the head-up image, the position information of the first pixel can be converted into the position information of a second pixel (X, Y) in the bird's-eye view image according to the projection matrix H, and the bird's-eye view image can be obtained according to the position information of each second pixel (X, Y); that is, the second pixels make up the bird's-eye view image. For example, based on the inverse matrix of the projection matrix H, the position information of the first pixel (u, v) can be converted into the position information of the second pixel (X, Y) with reference to the above formula, which will not be repeated here.
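Step a7 can be sketched as a whole-image loop: every first pixel (u, v) is sent through the inverse of H to a second pixel (X, Y), with nearest-neighbor rounding. This is an illustrative per-pixel Python loop only; a production system would use an optimized remap (for example, OpenCV's warpPerspective) instead.

```python
import numpy as np

def warp_to_topdown(image, H, out_shape):
    """Map every first pixel (u, v) of the head-up image to a second pixel
    (X, Y) of the top-down image via the inverse of the projection matrix H."""
    H_inv = np.linalg.inv(H)
    out = np.zeros(out_shape, dtype=image.dtype)
    h, w = image.shape[:2]
    for v in range(h):
        for u in range(w):
            X, Y, W = H_inv @ np.array([u, v, 1.0])
            X, Y = int(round(X / W)), int(round(Y / W))
            if 0 <= X < out_shape[1] and 0 <= Y < out_shape[0]:
                out[Y, X] = image[v, u]  # second pixels make up the top view
    return out

# With H equal to the identity, the warp copies the image unchanged.
img = np.arange(12, dtype=np.uint8).reshape(3, 4)
top = warp_to_topdown(img, np.eye(3), (3, 4))
print(top)
```

Forward splatting like this can leave holes in the output; real implementations iterate over output pixels and sample the source through H instead, but the mapping direction shown here matches the formula in the text.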
Embodiment 5:
Based on the same concept as the above method, referring to FIG. 5, an embodiment of the present invention further provides a driving assistance apparatus 50. The driving assistance apparatus includes at least one photographing device 51, a processor 52, and a memory 53. The driving assistance apparatus 50 is provided on a vehicle and communicates with the vehicle. The memory 53 is configured to store computer instructions executable by the processor.
The photographing device 51 is configured to collect a head-up image containing a target object and send the head-up image containing the target object to the processor 52.
The processor 52 is configured to read the computer instructions from the memory 53 to implement:
obtaining the head-up image containing the target object from the photographing device 51;
determining a space plane corresponding to the target object;
determining the relative posture of the space plane and the photographing device; and
converting the head-up image into a bird's-eye view image according to the relative posture.
The photographing device 51 is configured to acquire the head-up image in at least one of the directions in front of, behind, to the left of, or to the right of the driving assistance apparatus.
When determining the space plane corresponding to the target object, the processor 52 is specifically configured to:
obtain second posture information of the driving assistance apparatus; and
determine the space plane according to the second posture information.
When converting the head-up image into a bird's-eye view image according to the relative posture, the processor 52 is specifically configured to: obtain a projection matrix corresponding to the head-up image according to the relative posture; and
convert the head-up image into the bird's-eye view image according to the projection matrix.
When obtaining the projection matrix corresponding to the head-up image according to the relative posture, the processor 52 is specifically configured to: determine a target rotation matrix according to the relative posture;
obtain target rotation parameters according to the target rotation matrix; and
obtain the projection matrix according to the relative posture and the target rotation parameters.
The relative posture includes the rotation angle of the photographing device about the pitch axis, the rotation angle about the roll axis, and the rotation angle about the yaw axis. When determining the target rotation matrix according to the relative posture, the processor 52 is specifically configured to: determine a first rotation matrix according to the rotation angle of the photographing device about the pitch axis;
determine a second rotation matrix according to the rotation angle of the photographing device about the roll axis;
determine a third rotation matrix according to the rotation angle of the photographing device about the yaw axis; and
determine the target rotation matrix according to the first rotation matrix, the second rotation matrix, and the third rotation matrix.
When obtaining the target rotation parameters according to the target rotation matrix, the processor 52 is specifically configured to:
determine the first column vector of the target rotation matrix as a first rotation parameter;
determine the second column vector of the target rotation matrix as a second rotation parameter; and
determine the first rotation parameter and the second rotation parameter as the target rotation parameters.
The relative posture further includes a translation parameter between the space plane and the photographing device. When obtaining the projection matrix according to the relative posture and the target rotation parameters, the processor 52 is specifically configured to: obtain the projection matrix according to the target rotation parameters, the normalization coefficient, the intrinsic parameter matrix of the photographing device, and the translation parameter between the space plane and the photographing device.
When converting the head-up image into a bird's-eye view image according to the projection matrix, the processor 52 is specifically configured to: for each first pixel in the head-up image, convert the position information of the first pixel into the position information of a second pixel in the bird's-eye view image according to the projection matrix; and
obtain the bird's-eye view image according to the position information of each second pixel.
When converting the position information of the first pixel into the position information of the second pixel in the bird's-eye view image according to the projection matrix, the processor 52 is specifically configured to:
obtain the inverse matrix of the projection matrix, and convert the position information of the first pixel into the position information of the second pixel in the bird's-eye view image according to the inverse matrix.
Embodiment 6:
Based on the same concept as the above method, an embodiment of the present invention further provides a vehicle equipped with a driving assistance system. The vehicle includes at least one photographing device, a processor, and a memory. The memory is configured to store computer instructions executable by the processor. The photographing device is configured to collect a head-up image containing a target object and send the head-up image containing the target object to the processor.
The processor is configured to read the computer instructions from the memory to implement:
obtaining the head-up image containing the target object from the photographing device;
determining a space plane corresponding to the target object;
determining the relative posture of the space plane and the photographing device; and
converting the head-up image into a bird's-eye view image according to the relative posture.
所述拍摄装置,用于获取所述车辆的前方、后方、左方或者右方中的至少一个方向的所述平视图像。The photographing device is configured to acquire the head-up image in at least one direction of the front, rear, left, or right of the vehicle.
所述处理器确定与所述目标物体对应的空间平面时具体用于:获取所述车辆的第一姿态信息;根据所述第一姿态信息确定所述空间平面。When the processor determines the space plane corresponding to the target object, it is specifically used to: obtain first pose information of the vehicle; and determine the space plane according to the first pose information.
所述处理器根据所述相对姿态将所述平视图像转换为俯视图像时具体用于:根据所述相对姿态获取所述平视图像对应的投影矩阵;When the processor converts the head-up image into a bird's-eye view image according to the relative posture, it is specifically used to: obtain a projection matrix corresponding to the head-up image according to the relative posture;
根据所述投影矩阵将所述平视图像转换为俯视图像。The head-up image is converted into a top-down image according to the projection matrix.
所述处理器根据所述相对姿态获取所述平视图像对应的投影矩阵时具体用于:根据所述相对姿态确定目标旋转矩阵;When the processor obtains the projection matrix corresponding to the head-up image according to the relative posture, it is specifically used to: determine a target rotation matrix according to the relative posture;
根据所述目标旋转矩阵获取目标旋转参数;Acquiring target rotation parameters according to the target rotation matrix;
根据所述相对姿态和所述目标旋转参数获取所述投影矩阵。Obtain the projection matrix according to the relative pose and the target rotation parameter.
所述相对姿态包括所述拍摄装置在俯仰轴的旋转角度、在横滚轴的旋转角度、在偏航轴的旋转角度;所述处理器根据所述相对姿态确定目标旋转矩阵时具体用于:根据所述拍摄装置在俯仰轴的旋转角度确定第一旋转矩阵;The relative pose includes the rotation angle of the photographing device about the pitch axis, the rotation angle about the roll axis, and the rotation angle about the yaw axis. When determining the target rotation matrix according to the relative pose, the processor is specifically configured to: determine a first rotation matrix according to the rotation angle of the photographing device about the pitch axis;
根据所述拍摄装置在横滚轴的旋转角度确定第二旋转矩阵;Determine the second rotation matrix according to the rotation angle of the shooting device on the roll axis;
根据所述拍摄装置在偏航轴的旋转角度确定第三旋转矩阵;Determine a third rotation matrix according to the rotation angle of the shooting device on the yaw axis;
根据第一旋转矩阵、第二旋转矩阵和第三旋转矩阵确定目标旋转矩阵。The target rotation matrix is determined according to the first rotation matrix, the second rotation matrix, and the third rotation matrix.
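The composition of the three per-axis rotations described above can be sketched as follows. This is a minimal illustration with NumPy; the axis assignments (pitch about x, roll about y, yaw about z) and the multiplication order are assumptions for demonstration, since the embodiment does not fix either convention:

```python
import numpy as np

def first_rotation(pitch):
    # First rotation matrix, from the rotation angle about the pitch axis
    c, s = np.cos(pitch), np.sin(pitch)
    return np.array([[1.0, 0.0, 0.0],
                     [0.0,   c,  -s],
                     [0.0,   s,   c]])

def second_rotation(roll):
    # Second rotation matrix, from the rotation angle about the roll axis
    c, s = np.cos(roll), np.sin(roll)
    return np.array([[  c, 0.0,   s],
                     [0.0, 1.0, 0.0],
                     [ -s, 0.0,   c]])

def third_rotation(yaw):
    # Third rotation matrix, from the rotation angle about the yaw axis
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[  c,  -s, 0.0],
                     [  s,   c, 0.0],
                     [0.0, 0.0, 1.0]])

def target_rotation(pitch, roll, yaw):
    # Target rotation matrix: product of the three per-axis rotation matrices
    return third_rotation(yaw) @ second_rotation(roll) @ first_rotation(pitch)
```

Whatever order is chosen, the product of valid per-axis rotations remains a proper rotation matrix (orthonormal, determinant 1), which is what the later column-vector extraction relies on.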
所述处理器根据所述目标旋转矩阵获取目标旋转参数时具体用于:When the processor obtains the target rotation parameter according to the target rotation matrix, it is specifically used to:
将所述目标旋转矩阵中的第一个列向量确定为第一旋转参数;Determine the first column vector in the target rotation matrix as the first rotation parameter;
将所述目标旋转矩阵中的第二个列向量确定为第二旋转参数;Determine the second column vector in the target rotation matrix as the second rotation parameter;
将所述第一旋转参数和所述第二旋转参数确定为目标旋转参数。The first rotation parameter and the second rotation parameter are determined as target rotation parameters.
所述相对姿态还包括所述空间平面和所述拍摄装置的平移参数;所述处理器根据所述相对姿态和所述目标旋转参数获取所述投影矩阵时具体用于:The relative pose further includes a translation parameter between the spatial plane and the photographing device. When acquiring the projection matrix according to the relative pose and the target rotation parameter, the processor is specifically configured to:
根据所述目标旋转参数、归一化系数、所述拍摄装置的内参矩阵、所述空间平面和所述拍摄装置的平移参数,获取所述投影矩阵。The projection matrix is obtained according to the target rotation parameter, the normalization coefficient, the internal parameter matrix of the shooting device, the space plane, and the translation parameter of the shooting device.
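In planar-homography terms, this step can be read as forming H = s·K·[r1 r2 t], where r1 and r2 are the target rotation parameters (the first two columns of the target rotation matrix), t is the translation parameter, K is the intrinsic parameter matrix, and s is the normalization coefficient. The following is a minimal sketch under that reading; this interpretation and the normalization choice (scaling so the bottom-right entry is 1) are assumptions, not details fixed by the embodiment:

```python
import numpy as np

def projection_matrix(K, R, t):
    # Target rotation parameters: the first and second column vectors of
    # the target rotation matrix R
    r1, r2 = R[:, 0], R[:, 1]
    # Stack [r1 r2 t] and apply the intrinsic parameter matrix K
    H = K @ np.column_stack([r1, r2, t])
    # Normalization coefficient, assuming H[2, 2] != 0
    s = 1.0 / H[2, 2]
    return s * H
```

With an identity intrinsic matrix, identity rotation, and t = (0, 0, 1), the resulting projection matrix is the identity, which is a quick sanity check on the construction.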
所述处理器根据所述投影矩阵将所述平视图像转换为俯视图像时具体用于:针对所述平视图像中的每个第一像素点,根据所述投影矩阵将所述第一像素点的位置信息转换为俯视图像中的第二像素点的位置信息;When converting the head-up image into a top view image according to the projection matrix, the processor is specifically configured to: for each first pixel in the head-up image, convert the position information of the first pixel into the position information of a second pixel in the top view image according to the projection matrix;
根据每个第二像素点的位置信息获取所述俯视图像。The top view image is obtained according to the position information of each second pixel.
所述处理器根据所述投影矩阵将所述第一像素点的位置信息转换为俯视图像中的第二像素点的位置信息时具体用于:When the processor converts the position information of the first pixel point into the position information of the second pixel point in the bird's-eye view image according to the projection matrix, it is specifically used to:
获取所述投影矩阵对应的逆矩阵,并根据所述逆矩阵将所述第一像素点的位置信息转换为俯视图像中的第二像素点的位置信息。Acquiring an inverse matrix corresponding to the projection matrix, and converting the position information of the first pixel point to the position information of the second pixel point in the bird's-eye view image according to the inverse matrix.
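The per-pixel conversion through the inverse of the projection matrix can be sketched as follows, using homogeneous coordinates. The sample matrix values are hypothetical, chosen only to make the mapping easy to follow:

```python
import numpy as np

def warp_point(H_inv, u, v):
    # Convert the position information (u, v) of one first pixel in the
    # head-up image into the position information of the corresponding
    # second pixel in the top view image, via the inverse matrix H_inv.
    p = H_inv @ np.array([u, v, 1.0])
    return p[0] / p[2], p[1] / p[2]   # dehomogenize

# Hypothetical projection matrix: a pure translation by (10, 20)
H = np.array([[1.0, 0.0, 10.0],
              [0.0, 1.0, 20.0],
              [0.0, 0.0,  1.0]])
H_inv = np.linalg.inv(H)              # inverse matrix of the projection matrix
x, y = warp_point(H_inv, 15.0, 25.0)  # → (5.0, 5.0)
```

Running this over every first pixel and writing each result into the output buffer yields the top view image; production implementations would typically iterate over the destination pixels (or use a vectorized/library warp) to avoid holes.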
实施例7:Example 7:
本发明实施例还提供一种计算机可读存储介质,所述计算机可读存储介质上存储有计算机指令,所述计算机指令被执行时,实现上述图像处理方法。An embodiment of the present invention further provides a computer-readable storage medium, on which computer instructions are stored, and when the computer instructions are executed, the above image processing method is implemented.
上述实施例阐明的系统、装置、模块或单元，可以由计算机芯片或实体实现，或者由具有某种功能的产品来实现。一种典型的实现设备为计算机，计算机的具体形式可以是个人计算机、膝上型计算机、蜂窝电话、相机电话、智能电话、个人数字助理、媒体播放器、导航设备、电子邮件收发设备、游戏控制台、平板计算机、可穿戴设备或者这些设备中的任意几种设备的组合。The system, apparatus, module, or unit set forth in the above embodiments may be implemented by a computer chip or an entity, or by a product having a certain function. A typical implementation device is a computer; the specific form of the computer may be a personal computer, a laptop computer, a cellular phone, a camera phone, a smart phone, a personal digital assistant, a media player, a navigation device, an email transceiver device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
为了描述的方便,描述以上装置时以功能分为各种单元分别描述。当然,在实施本发明时可以把各单元的功能在同一个或多个软件和/或硬件中实现。For the convenience of description, when describing the above device, the functions are divided into various units and described separately. Of course, when implementing the present invention, the functions of each unit can be implemented in one or more software and/or hardware.
本领域内的技术人员应明白,本发明实施例可提供为方法、系统、或计算机程序产品。因此,本发明可采用完全硬件实施例、完全软件实施例、或结合软件和硬件方面的实施例的形式。而且,本发明实施例可采用在一个或多个其中包含有计算机可用程序代码的计算机可用存储介质(包括但不限于磁盘存储器、CD-ROM、光学存储器等)上实施的计算机程序产品的形式。Those skilled in the art should understand that the embodiments of the present invention may be provided as methods, systems, or computer program products. Therefore, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware. Moreover, embodiments of the present invention may take the form of computer program products implemented on one or more computer usable storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) containing computer usable program code.
本发明是参照根据本发明实施例的方法、设备(系统)、和计算机程序产品的流程图和/或方框图来描述的。应理解可以由计算机程序指令实现流程图和/或方框图中的每一流程和/或方框、以及流程图和/或方框图中的流程和/或方框的结合。可提供这些计算机程序指令到通用计算机、专用计算机、嵌入式处理机或其它可编程数据处理设备的处理器以产生一个机器，使得通过计算机或其它可编程数据处理设备的处理器执行的指令产生用于实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能的装置。The present invention is described with reference to flowcharts and/or block diagrams of methods, devices (systems), and computer program products according to embodiments of the present invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to the processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing device to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing device produce an apparatus for implementing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
而且，这些计算机程序指令也可以存储在能引导计算机或其它可编程数据处理设备以特定方式工作的计算机可读存储器中，使得存储在该计算机可读存储器中的指令产生包括指令装置的制造品，该指令装置实现在流程图一个流程或者多个流程和/或方框图一个方框或者多个方框中指定的功能。Moreover, these computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing device to operate in a specific manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus, which implements the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
这些计算机程序指令也可装载到计算机或其它可编程数据处理设备，使得在计算机或者其它可编程设备上执行一系列操作步骤以产生计算机实现的处理，从而在计算机或其它可编程设备上执行的指令提供用于实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能的步骤。These computer program instructions may also be loaded onto a computer or other programmable data processing device, such that a series of operational steps are performed on the computer or other programmable device to produce computer-implemented processing, so that the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
以上所述仅为本发明实施例而已,并不用于限制本发明。对于本领域技术人员来说,本发明可以有各种更改和变化。凡在本发明的精神和原理之内所作的任何修改、等同替换、改进,均应包含在本发明的权利要求范围之内。The above are only the embodiments of the present invention, and are not intended to limit the present invention. For those skilled in the art, the present invention may have various modifications and changes. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present invention shall be included in the scope of the claims of the present invention.

Claims (35)

  1. 一种驾驶辅助设备，其特征在于，所述驾驶辅助设备包括至少一个拍摄装置、处理器和存储器；所述驾驶辅助设备设置在车辆上，并与所述车辆通信；所述存储器，用于存储所述处理器可执行的计算机指令；A driving assistance device, characterized in that the driving assistance device includes at least one photographing device, a processor, and a memory; the driving assistance device is provided on a vehicle and communicates with the vehicle; and the memory is configured to store computer instructions executable by the processor;
    所述拍摄装置,用于采集包含目标物体的平视图像,并将包含目标物体的所述平视图像发送给所述处理器;The photographing device is configured to collect a head-up image including a target object, and send the head-up image including the target object to the processor;
    所述处理器,用于从所述存储器读取计算机指令以实现:The processor is configured to read computer instructions from the memory to implement:
    从所述拍摄装置获取包含目标物体的平视图像;Acquiring a head-up image containing a target object from the shooting device;
    确定与所述目标物体对应的空间平面;Determine the space plane corresponding to the target object;
    确定所述空间平面和所述拍摄装置的相对姿态;Determine the relative posture of the space plane and the shooting device;
    根据所述相对姿态将所述平视图像转换为俯视图像。The head-up image is converted into a top-down image according to the relative posture.
  2. 根据权利要求1所述的设备,其特征在于,The device according to claim 1, characterized in that
    所述拍摄装置,用于获取所述驾驶辅助设备的前方、后方、左方或者右方中的至少一个方向的所述平视图像。The shooting device is configured to acquire the head-up image in at least one direction of the front, rear, left, or right of the driving assistance device.
  3. 根据权利要求1所述的设备,其特征在于,The device according to claim 1, characterized in that
    所述处理器确定与所述目标物体对应的空间平面时具体用于:When the processor determines the space plane corresponding to the target object, it is specifically used for:
    获取所述驾驶辅助设备的第二姿态信息;Acquiring second posture information of the driving assistance device;
    根据所述第二姿态信息确定所述空间平面。The spatial plane is determined according to the second posture information.
  4. 根据权利要求1所述的设备，其特征在于，所述处理器根据所述相对姿态将所述平视图像转换为俯视图像时具体用于：The device according to claim 1, wherein when converting the head-up image into a top view image according to the relative pose, the processor is specifically configured to:
    根据所述相对姿态获取所述平视图像对应的投影矩阵;Obtaining a projection matrix corresponding to the head-up image according to the relative posture;
    根据所述投影矩阵将所述平视图像转换为俯视图像。The head-up image is converted into a top-down image according to the projection matrix.
  5. 根据权利要求4所述的设备,其特征在于,所述处理器根据所述相对姿态获取所述平视图像对应的投影矩阵时具体用于:The device according to claim 4, wherein the processor is specifically used when acquiring the projection matrix corresponding to the head-up image according to the relative posture:
    根据所述相对姿态确定目标旋转矩阵;Determine the target rotation matrix according to the relative attitude;
    根据所述目标旋转矩阵获取目标旋转参数;Acquiring target rotation parameters according to the target rotation matrix;
    根据所述相对姿态和所述目标旋转参数获取所述投影矩阵。Obtain the projection matrix according to the relative pose and the target rotation parameter.
  6. 根据权利要求5所述的设备，其特征在于，所述相对姿态包括所述拍摄装置在俯仰轴的旋转角度、在横滚轴的旋转角度、在偏航轴的旋转角度；所述处理器根据所述相对姿态确定目标旋转矩阵时具体用于：The device according to claim 5, wherein the relative pose includes the rotation angle of the photographing device about the pitch axis, the rotation angle about the roll axis, and the rotation angle about the yaw axis; and when determining the target rotation matrix according to the relative pose, the processor is specifically configured to:
    根据所述拍摄装置在俯仰轴的旋转角度确定第一旋转矩阵;Determine the first rotation matrix according to the rotation angle of the shooting device on the pitch axis;
    根据所述拍摄装置在横滚轴的旋转角度确定第二旋转矩阵;Determine the second rotation matrix according to the rotation angle of the shooting device on the roll axis;
    根据所述拍摄装置在偏航轴的旋转角度确定第三旋转矩阵;Determine a third rotation matrix according to the rotation angle of the shooting device on the yaw axis;
    根据第一旋转矩阵、第二旋转矩阵和第三旋转矩阵确定目标旋转矩阵。The target rotation matrix is determined according to the first rotation matrix, the second rotation matrix, and the third rotation matrix.
  7. 根据权利要求5所述的设备,其特征在于,The device according to claim 5, characterized in that
    所述处理器根据所述目标旋转矩阵获取目标旋转参数时具体用于:When the processor obtains the target rotation parameter according to the target rotation matrix, it is specifically used to:
    将所述目标旋转矩阵中的第一个列向量确定为第一旋转参数;Determine the first column vector in the target rotation matrix as the first rotation parameter;
    将所述目标旋转矩阵中的第二个列向量确定为第二旋转参数;Determine the second column vector in the target rotation matrix as the second rotation parameter;
    将所述第一旋转参数和所述第二旋转参数确定为目标旋转参数。The first rotation parameter and the second rotation parameter are determined as target rotation parameters.
  8. 根据权利要求5所述的设备，其特征在于，所述相对姿态还包括所述空间平面和所述拍摄装置的平移参数；所述处理器根据所述相对姿态和所述目标旋转参数获取所述投影矩阵时具体用于：The device according to claim 5, wherein the relative pose further includes a translation parameter between the spatial plane and the photographing device; and when acquiring the projection matrix according to the relative pose and the target rotation parameter, the processor is specifically configured to:
    根据所述目标旋转参数、归一化系数、所述拍摄装置的内参矩阵、所述空间平面和所述拍摄装置的平移参数,获取所述投影矩阵。The projection matrix is obtained according to the target rotation parameter, the normalization coefficient, the internal parameter matrix of the shooting device, the space plane, and the translation parameter of the shooting device.
  9. 根据权利要求4所述的设备，其特征在于，所述处理器根据所述投影矩阵将所述平视图像转换为俯视图像时具体用于：The device according to claim 4, wherein when converting the head-up image into a top view image according to the projection matrix, the processor is specifically configured to:
    针对所述平视图像中的每个第一像素点,根据所述投影矩阵将所述第一像素点的位置信息转换为俯视图像中的第二像素点的位置信息;For each first pixel in the head-up image, the position information of the first pixel is converted into the position information of the second pixel in the overhead image according to the projection matrix;
    根据每个第二像素点的位置信息获取所述俯视图像。The top view image is obtained according to the position information of each second pixel.
  10. 根据权利要求9所述的设备,其特征在于,The device according to claim 9, characterized in that
    所述处理器根据所述投影矩阵将所述第一像素点的位置信息转换为俯视图像中的第二像素点的位置信息时具体用于:When the processor converts the position information of the first pixel point into the position information of the second pixel point in the bird's-eye view image according to the projection matrix, it is specifically used to:
    获取所述投影矩阵对应的逆矩阵,并根据所述逆矩阵将所述第一像素点的位置信息转换为俯视图像中的第二像素点的位置信息。Acquiring an inverse matrix corresponding to the projection matrix, and converting the position information of the first pixel point to the position information of the second pixel point in the bird's-eye view image according to the inverse matrix.
  11. 一种搭载驾驶辅助系统的车辆，其特征在于，所述车辆包括至少一个拍摄装置、处理器和存储器，所述存储器，用于存储所述处理器可执行的计算机指令；所述拍摄装置，用于采集包含目标物体的平视图像，并将包含目标物体的所述平视图像发送给所述处理器；A vehicle equipped with a driving assistance system, characterized in that the vehicle includes at least one photographing device, a processor, and a memory; the memory is configured to store computer instructions executable by the processor; and the photographing device is configured to capture a head-up image containing a target object and send the head-up image containing the target object to the processor;
    所述处理器,用于从所述存储器读取计算机指令以实现:The processor is configured to read computer instructions from the memory to implement:
    从所述拍摄装置获取包含目标物体的平视图像;Acquiring a head-up image containing a target object from the shooting device;
    确定与所述目标物体对应的空间平面;Determine the space plane corresponding to the target object;
    确定所述空间平面和所述拍摄装置的相对姿态;Determine the relative posture of the space plane and the shooting device;
    根据所述相对姿态将所述平视图像转换为俯视图像。The head-up image is converted into a top-down image according to the relative posture.
  12. 根据权利要求11所述的车辆,其特征在于,所述拍摄装置,用于获取所述车辆的前方、后方、左方或者右方中的至少一个方向的所述平视图像。The vehicle according to claim 11, wherein the shooting device is configured to acquire the head-up image in at least one direction of the front, rear, left, or right of the vehicle.
  13. 根据权利要求11所述的车辆,其特征在于,The vehicle according to claim 11, characterized in that
    所述处理器确定与所述目标物体对应的空间平面时具体用于:When the processor determines the space plane corresponding to the target object, it is specifically used for:
    获取所述车辆的第一姿态信息;Acquiring the first posture information of the vehicle;
    根据所述第一姿态信息确定所述空间平面。The space plane is determined according to the first posture information.
  14. 根据权利要求11所述的车辆，其特征在于，所述处理器根据所述相对姿态将所述平视图像转换为俯视图像时具体用于：The vehicle according to claim 11, wherein when converting the head-up image into a top view image according to the relative pose, the processor is specifically configured to:
    根据所述相对姿态获取所述平视图像对应的投影矩阵;Obtaining a projection matrix corresponding to the head-up image according to the relative posture;
    根据所述投影矩阵将所述平视图像转换为俯视图像。The head-up image is converted into a top-down image according to the projection matrix.
  15. 根据权利要求14所述的车辆,其特征在于,所述处理器根据所述相对姿态获取所述平视图像对应的投影矩阵时具体用于:The vehicle according to claim 14, wherein when the processor obtains the projection matrix corresponding to the head-up image according to the relative posture, it is specifically used to:
    根据所述相对姿态确定目标旋转矩阵;Determine the target rotation matrix according to the relative attitude;
    根据所述目标旋转矩阵获取目标旋转参数;Acquiring target rotation parameters according to the target rotation matrix;
    根据所述相对姿态和所述目标旋转参数获取所述投影矩阵。Obtain the projection matrix according to the relative pose and the target rotation parameter.
  16. 根据权利要求15所述的车辆，其特征在于，所述相对姿态包括所述拍摄装置在俯仰轴的旋转角度、在横滚轴的旋转角度、在偏航轴的旋转角度；所述处理器根据所述相对姿态确定目标旋转矩阵时具体用于：The vehicle according to claim 15, wherein the relative pose includes the rotation angle of the photographing device about the pitch axis, the rotation angle about the roll axis, and the rotation angle about the yaw axis; and when determining the target rotation matrix according to the relative pose, the processor is specifically configured to:
    根据所述拍摄装置在俯仰轴的旋转角度确定第一旋转矩阵;Determine the first rotation matrix according to the rotation angle of the shooting device on the pitch axis;
    根据所述拍摄装置在横滚轴的旋转角度确定第二旋转矩阵;Determine the second rotation matrix according to the rotation angle of the shooting device on the roll axis;
    根据所述拍摄装置在偏航轴的旋转角度确定第三旋转矩阵;Determine a third rotation matrix according to the rotation angle of the shooting device on the yaw axis;
    根据第一旋转矩阵、第二旋转矩阵和第三旋转矩阵确定目标旋转矩阵。The target rotation matrix is determined according to the first rotation matrix, the second rotation matrix, and the third rotation matrix.
  17. 根据权利要求15所述的车辆,其特征在于,The vehicle according to claim 15, characterized in that
    所述处理器根据所述目标旋转矩阵获取目标旋转参数时具体用于:When the processor obtains the target rotation parameter according to the target rotation matrix, it is specifically used to:
    将所述目标旋转矩阵中的第一个列向量确定为第一旋转参数;Determine the first column vector in the target rotation matrix as the first rotation parameter;
    将所述目标旋转矩阵中的第二个列向量确定为第二旋转参数;Determine the second column vector in the target rotation matrix as the second rotation parameter;
    将所述第一旋转参数和所述第二旋转参数确定为目标旋转参数。The first rotation parameter and the second rotation parameter are determined as target rotation parameters.
  18. 根据权利要求15所述的车辆，其特征在于，所述相对姿态还包括所述空间平面和所述拍摄装置的平移参数；所述处理器根据所述相对姿态和所述目标旋转参数获取所述投影矩阵时具体用于：The vehicle according to claim 15, wherein the relative pose further includes a translation parameter between the spatial plane and the photographing device; and when acquiring the projection matrix according to the relative pose and the target rotation parameter, the processor is specifically configured to:
    根据所述目标旋转参数、归一化系数、所述拍摄装置的内参矩阵、所述空间平面和所述拍摄装置的平移参数,获取所述投影矩阵。The projection matrix is obtained according to the target rotation parameter, the normalization coefficient, the internal parameter matrix of the shooting device, the space plane, and the translation parameter of the shooting device.
  19. 根据权利要求14所述的车辆，其特征在于，所述处理器根据所述投影矩阵将所述平视图像转换为俯视图像时具体用于：The vehicle according to claim 14, wherein when converting the head-up image into a top view image according to the projection matrix, the processor is specifically configured to:
    针对所述平视图像中的每个第一像素点,根据所述投影矩阵将所述第一像素点的位置信息转换为俯视图像中的第二像素点的位置信息;For each first pixel in the head-up image, the position information of the first pixel is converted into the position information of the second pixel in the overhead image according to the projection matrix;
    根据每个第二像素点的位置信息获取所述俯视图像。The top view image is obtained according to the position information of each second pixel.
  20. 根据权利要求19所述的车辆,其特征在于,The vehicle according to claim 19, wherein
    所述处理器根据所述投影矩阵将所述第一像素点的位置信息转换为俯视图像中的第二像素点的位置信息时具体用于:When the processor converts the position information of the first pixel point into the position information of the second pixel point in the bird's-eye view image according to the projection matrix, it is specifically used to:
    获取所述投影矩阵对应的逆矩阵,并根据所述逆矩阵将所述第一像素点的位置信息转换为俯视图像中的第二像素点的位置信息。Acquiring an inverse matrix corresponding to the projection matrix, and converting the position information of the first pixel point to the position information of the second pixel point in the bird's-eye view image according to the inverse matrix.
  21. 一种图像处理方法,其特征在于,应用于驾驶辅助系统,所述驾驶辅助系统包括至少一个拍摄装置,所述方法包括:An image processing method characterized by being applied to a driving assistance system, where the driving assistance system includes at least one photographing device, and the method includes:
    通过所述拍摄装置获取包含目标物体的平视图像;Acquiring the head-up image containing the target object through the shooting device;
    确定与所述目标物体对应的空间平面;Determine the space plane corresponding to the target object;
    确定所述空间平面和所述拍摄装置的相对姿态;Determine the relative posture of the space plane and the shooting device;
    根据所述相对姿态将所述平视图像转换为俯视图像。The head-up image is converted into a top-down image according to the relative posture.
  22. 根据权利要求21所述的方法,其特征在于,The method according to claim 21, characterized in that
    所述驾驶辅助系统搭载于移动平台;The driving assistance system is carried on a mobile platform;
    所述至少一个拍摄装置设置于所述移动平台上,用于获取所述移动平台的前方、后方、左方或右方中的至少一个方向的所述平视图像。The at least one photographing device is provided on the mobile platform and used to acquire the head-up image in at least one direction of the front, rear, left, or right of the mobile platform.
  23. 根据权利要求22所述的方法,其特征在于,The method according to claim 22, wherein
    所述确定与所述目标物体对应的空间平面,包括:The determining the space plane corresponding to the target object includes:
    获取所述移动平台的第一姿态信息;Acquiring the first posture information of the mobile platform;
    根据所述第一姿态信息确定所述空间平面。The space plane is determined according to the first posture information.
  24. 根据权利要求21所述的方法,其特征在于,The method according to claim 21, characterized in that
    所述驾驶辅助系统搭载于驾驶辅助设备;The driving assistance system is carried in driving assistance equipment;
    所述至少一个拍摄装置设置于所述驾驶辅助设备,用于获取所述驾驶辅助设备的前方、后方、左方或右方中的至少一个方向的所述平视图像。The at least one shooting device is provided in the driving assistance device, and is configured to acquire the head-up image in at least one direction of front, rear, left, or right of the driving assistance device.
  25. 根据权利要求24所述的方法,其特征在于,The method according to claim 24, characterized in that
    所述确定与所述目标物体对应的空间平面,还包括:The determining the space plane corresponding to the target object further includes:
    获取所述驾驶辅助设备的第二姿态信息;Acquiring second posture information of the driving assistance device;
    根据所述第二姿态信息确定所述空间平面。The spatial plane is determined according to the second posture information.
  26. 根据权利要求21所述的方法,其特征在于,The method according to claim 21, characterized in that
    所述根据所述相对姿态将所述平视图像转换为俯视图像,包括:The converting the head-up image into a top-down image according to the relative posture includes:
    根据所述相对姿态获取所述平视图像对应的投影矩阵;Obtaining a projection matrix corresponding to the head-up image according to the relative posture;
    根据所述投影矩阵将所述平视图像转换为俯视图像。The head-up image is converted into a top-down image according to the projection matrix.
  27. 根据权利要求26所述的方法,其特征在于,The method of claim 26, wherein
    所述根据所述相对姿态获取所述平视图像对应的投影矩阵,包括:The obtaining the projection matrix corresponding to the head-up image according to the relative posture includes:
    根据所述相对姿态确定目标旋转矩阵;Determine the target rotation matrix according to the relative attitude;
    根据所述目标旋转矩阵获取目标旋转参数;Acquiring target rotation parameters according to the target rotation matrix;
    根据所述相对姿态和所述目标旋转参数获取所述投影矩阵。Obtain the projection matrix according to the relative pose and the target rotation parameter.
  28. 根据权利要求27所述的方法,其特征在于,所述相对姿态包括所述拍摄装置在俯仰轴的旋转角度、在横滚轴的旋转角度、在偏航轴的旋转角度;The method according to claim 27, wherein the relative posture includes a rotation angle of the shooting device on a pitch axis, a rotation angle on a roll axis, and a rotation angle on a yaw axis;
    所述根据所述相对姿态确定目标旋转矩阵,包括:The determining the target rotation matrix according to the relative posture includes:
    根据所述拍摄装置在俯仰轴的旋转角度确定第一旋转矩阵;Determine the first rotation matrix according to the rotation angle of the shooting device on the pitch axis;
    根据所述拍摄装置在横滚轴的旋转角度确定第二旋转矩阵;Determine the second rotation matrix according to the rotation angle of the shooting device on the roll axis;
    根据所述拍摄装置在偏航轴的旋转角度确定第三旋转矩阵;Determine a third rotation matrix according to the rotation angle of the shooting device on the yaw axis;
    根据第一旋转矩阵、第二旋转矩阵和第三旋转矩阵确定目标旋转矩阵。The target rotation matrix is determined according to the first rotation matrix, the second rotation matrix, and the third rotation matrix.
  29. 根据权利要求27所述的方法,其特征在于,The method according to claim 27, characterized in that
    所述根据所述目标旋转矩阵获取目标旋转参数,包括:The acquiring the target rotation parameter according to the target rotation matrix includes:
    将所述目标旋转矩阵中的第一个列向量确定为第一旋转参数;Determine the first column vector in the target rotation matrix as the first rotation parameter;
    将所述目标旋转矩阵中的第二个列向量确定为第二旋转参数;Determine the second column vector in the target rotation matrix as the second rotation parameter;
    将所述第一旋转参数和所述第二旋转参数确定为目标旋转参数。The first rotation parameter and the second rotation parameter are determined as target rotation parameters.
  30. 根据权利要求27所述的方法,其特征在于,The method according to claim 27, characterized in that
    所述相对姿态还包括所述空间平面和所述拍摄装置的平移参数;The relative posture also includes translation parameters of the space plane and the shooting device;
    根据所述相对姿态和所述目标旋转参数获取所述投影矩阵,包括:Obtaining the projection matrix according to the relative pose and the target rotation parameter includes:
    根据所述目标旋转参数、归一化系数、所述拍摄装置的内参矩阵、所述空间平面和所述拍摄装置的平移参数,获取所述投影矩阵。The projection matrix is obtained according to the target rotation parameter, the normalization coefficient, the internal parameter matrix of the shooting device, the space plane, and the translation parameter of the shooting device.
  31. 根据权利要求26所述的方法,其特征在于,The method of claim 26, wherein
    所述根据所述投影矩阵将所述平视图像转换为俯视图像,包括:The converting the head-up image into a top-down image according to the projection matrix includes:
    针对所述平视图像中的每个第一像素点,根据所述投影矩阵将所述第一像素点的位置信息转换为俯视图像中的第二像素点的位置信息;For each first pixel in the head-up image, the position information of the first pixel is converted into the position information of the second pixel in the overhead image according to the projection matrix;
    根据每个第二像素点的位置信息获取所述俯视图像。The top view image is obtained according to the position information of each second pixel.
  32. 根据权利要求31所述的方法,其特征在于,根据所述投影矩阵将所述第一像素点的位置信息转换为俯视图像中的第二像素点的位置信息,包括:The method according to claim 31, wherein converting the position information of the first pixel point to the position information of the second pixel point in the bird's-eye view image according to the projection matrix includes:
    获取所述投影矩阵对应的逆矩阵,并根据所述逆矩阵将所述第一像素点的位置信息转换为俯视图像中的第二像素点的位置信息。Acquiring an inverse matrix corresponding to the projection matrix, and converting the position information of the first pixel point to the position information of the second pixel point in the bird's-eye view image according to the inverse matrix.
  33. 根据权利要求21所述的方法,其特征在于,The method according to claim 21, characterized in that
    所述根据所述相对姿态将所述平视图像转换为俯视图像之后,还包括:After converting the head-up image into a top-down image according to the relative posture, the method further includes:
    若所述目标物体为车道线,则根据所述俯视图像进行车道线的检测。If the target object is a lane line, then the lane line is detected according to the overhead image.
  34. 根据权利要求21所述的方法,其特征在于,The method according to claim 21, characterized in that
    所述根据所述相对姿态将所述平视图像转换为俯视图像之后,还包括:After converting the head-up image into a top-down image according to the relative posture, the method further includes:
    若所述目标物体为车道线,则根据所述俯视图像进行车道线的定位。If the target object is a lane line, the lane line is positioned according to the overhead image.
  35. 一种计算机可读存储介质，其特征在于，计算机可读存储介质上存储有计算机指令，所述计算机指令被执行时，实现权利要求21-34所述的方法。A computer-readable storage medium, characterized in that computer instructions are stored on the computer-readable storage medium, and when the computer instructions are executed, the method according to any one of claims 21-34 is implemented.
PCT/CN2018/124726 2018-12-28 2018-12-28 Image processing method, apparatus, and computer readable storage medium WO2020133172A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201880068957.6A CN111279354B (en) 2018-12-28 Image processing method, apparatus and computer readable storage medium
PCT/CN2018/124726 WO2020133172A1 (en) 2018-12-28 2018-12-28 Image processing method, apparatus, and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2018/124726 WO2020133172A1 (en) 2018-12-28 2018-12-28 Image processing method, apparatus, and computer readable storage medium

Publications (1)

Publication Number Publication Date
WO2020133172A1 true WO2020133172A1 (en) 2020-07-02

Family

ID=70999738

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/124726 WO2020133172A1 (en) 2018-12-28 2018-12-28 Image processing method, apparatus, and computer readable storage medium

Country Status (1)

Country Link
WO (1) WO2020133172A1 (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101727756A (en) * 2008-10-16 2010-06-09 财团法人工业技术研究院 Mobile image-aided guidance method and mobile image-aided guidance system for vehicles
US20170003134A1 (en) * 2015-06-30 2017-01-05 Lg Electronics Inc. Advanced Driver Assistance Apparatus, Display Apparatus For Vehicle And Vehicle

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114531580A (en) * 2020-11-23 2022-05-24 北京四维图新科技股份有限公司 Image processing method and device
CN114531580B (en) * 2020-11-23 2023-11-21 北京四维图新科技股份有限公司 Image processing method and device
CN113298868A (en) * 2021-03-17 2021-08-24 阿里巴巴新加坡控股有限公司 Model building method, model building device, electronic device, medium, and program product
CN113298868B (en) * 2021-03-17 2024-04-05 阿里巴巴创新公司 Model building method, device, electronic equipment, medium and program product
CN113450597A (en) * 2021-06-09 2021-09-28 浙江兆晟科技股份有限公司 Ship auxiliary navigation method and system based on deep learning
CN113450597B (en) * 2021-06-09 2022-11-29 浙江兆晟科技股份有限公司 Ship auxiliary navigation method and system based on deep learning
CN113592940A (en) * 2021-07-28 2021-11-02 北京地平线信息技术有限公司 Method and device for determining position of target object based on image
CN115063490A (en) * 2022-06-30 2022-09-16 阿波罗智能技术(北京)有限公司 Vehicle camera external parameter calibration method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN111279354A (en) 2020-06-12

Similar Documents

Publication Publication Date Title
WO2020133172A1 (en) Image processing method, apparatus, and computer readable storage medium
US20230360260A1 (en) Method and device to determine the camera position and angle
US10268201B2 (en) Vehicle automated parking system and method
CN108805934B (en) External parameter calibration method and device for vehicle-mounted camera
JP6830140B2 (en) Motion vector field determination method, motion vector field determination device, equipment, computer readable storage medium and vehicle
CN111263960B (en) Apparatus and method for updating high definition map
CN106814753B (en) Target position correction method, device and system
US11062475B2 (en) Location estimating apparatus and method, learning apparatus and method, and computer program products
CN110411457B (en) Positioning method, system, terminal and storage medium based on stroke perception and vision fusion
CN112444242A (en) Pose optimization method and device
KR101880185B1 (en) Electronic apparatus for estimating pose of moving object and method thereof
WO2019104571A1 (en) Image processing method and device
JP2020064056A (en) Device and method for estimating position
KR102006291B1 (en) Method for estimating pose of moving object of electronic apparatus
CN110458885B (en) Positioning system and mobile terminal based on stroke perception and vision fusion
WO2021143664A1 (en) Method and apparatus for measuring distance of target object in vehicle, and vehicle
US11842440B2 (en) Landmark location reconstruction in autonomous machine applications
JP2017211307A (en) Measuring device, measuring method, and program
WO2021258251A1 (en) Surveying and mapping method for movable platform, and movable platform and storage medium
Xian et al. Fusing stereo camera and low-cost inertial measurement unit for autonomous navigation in a tightly-coupled approach
WO2020019175A1 (en) Image processing method and apparatus, and photographing device and unmanned aerial vehicle
CN109658507A (en) Information processing method and device, electronic equipment
JP7337617B2 (en) Estimation device, estimation method and program
CN109891188A (en) Mobile platform, camera paths generation method, program and recording medium
CN116952229A (en) Unmanned aerial vehicle positioning method, device, system and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18945177

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18945177

Country of ref document: EP

Kind code of ref document: A1