CN117830967A - Image processing method, device and equipment - Google Patents

Image processing method, device and equipment

Info

Publication number
CN117830967A
Authority
CN
China
Prior art keywords
image
point set
coordinate
coordinate point
points
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311815249.3A
Other languages
Chinese (zh)
Inventor
周仁杰
巫立峰
李加琛
陆超
王政军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Dahua Technology Co Ltd
Original Assignee
Zhejiang Dahua Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Dahua Technology Co Ltd filed Critical Zhejiang Dahua Technology Co Ltd
Priority to CN202311815249.3A priority Critical patent/CN117830967A/en
Publication of CN117830967A publication Critical patent/CN117830967A/en
Pending legal-status Critical Current

Landscapes

  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The application provides an image processing method, device and equipment, and relates to the field of computers. In the method, a second coordinate point set in a first image is adjusted according to a first coordinate point set in the first image, a preset first wheel track and a preset first wheel base; a first inverse perspective transformation matrix is obtained according to the adjusted second coordinate point set and a third coordinate point set in a second image; according to the first inverse perspective transformation matrix, each coordinate point in the second image is converted into a coordinate point under a first coordinate system corresponding to the first image to obtain a third image; a fifth coordinate point set in the third image is adjusted according to the angle relation in a fourth coordinate point set of the third image, and a second inverse perspective transformation matrix is obtained according to the adjusted fifth coordinate point set and the third coordinate point set, so that the accuracy of the inverse perspective transformation matrix is improved.

Description

Image processing method, device and equipment
Technical Field
The application relates to the technical field of computers and provides an image processing method, device and equipment.
Background
In intelligent traffic scenes, cameras play an important role in road surface feature detection. However, a camera generally captures a perspective image, that is, an image in the camera's own coordinate system, which is unfavorable for road feature extraction. It is therefore necessary to convert the perspective image into a bird's-eye image, i.e. a digitized image simulating the view from the bird's-eye direction. The perspective image is typically converted into a bird's-eye image with an inverse perspective transformation matrix.
One way to determine the inverse perspective transformation matrix is as follows: a plurality of first coordinate points are determined in the perspective image, together with a plurality of second coordinate points in the bird's-eye image that correspond to the first coordinate points one by one, and the conversion relation between the plurality of first coordinate points and the plurality of second coordinate points is determined, which is the inverse perspective transformation matrix. Because the camera may shake when shooting the image, the determined first coordinate points may be inaccurate, and therefore the solved inverse perspective transformation matrix is also inaccurate.
Disclosure of Invention
The embodiment of the application provides an image processing method, device and equipment, which are used for improving the accuracy of an inverse perspective transformation matrix.
In a first aspect, an embodiment of the present application provides an image processing method, including: adjusting a second coordinate point set in a first image according to a first coordinate point set in the first image, a preset first wheel track and a preset first wheel base, wherein the first coordinate point set includes coordinate points where the wheels of a target vehicle touch the ground in the first image, and the first image is an image of a target scene in a first coordinate system; obtaining a first inverse perspective transformation matrix according to the adjusted second coordinate point set and a third coordinate point set, wherein the coordinate points in the third coordinate point set belong to a second image, the second image is an image of the target scene in a second coordinate system, and the first inverse perspective transformation matrix is used for converting coordinate points in the second image into coordinate points in the first image; converting each coordinate point in the second image into a coordinate point under the first coordinate system according to the first inverse perspective transformation matrix to obtain a third image; adjusting a fifth coordinate point set in the third image according to a fourth coordinate point set of the third image, wherein the fourth coordinate point set includes coordinate points where the tires of the target vehicle touch the ground in the third image, and the coordinate points in the fifth coordinate point set are in one-to-one correspondence with the coordinate points in the second coordinate point set; and obtaining a second inverse perspective transformation matrix according to the adjusted fifth coordinate point set and the third coordinate point set, wherein the second inverse perspective transformation matrix is used for converting coordinate points in the second image into coordinate points in the third image.
In the embodiment of the application, with the first wheel track and the first wheel base as references, the second coordinate point set in the first image is corrected based on the first coordinate point set, the first inverse perspective transformation matrix is obtained from the corrected second coordinate point set, the first image can then be corrected according to the first inverse perspective transformation matrix to obtain a third image, and the fifth coordinate point set is further adjusted according to the tire ground-contact points of the third image, so that the second inverse perspective transformation matrix is obtained. In this way, by correcting the image multiple times based on a fixed reference in the image (such as the tires of a vehicle), the accuracy of the determined inverse perspective transformation matrix can be improved.
Optionally, adjusting the second coordinate point set in the first image according to the first coordinate point set in the first image, the preset first wheel track and the preset first wheel base includes: determining a second wheel track and a second wheel base of the target vehicle according to the first coordinate point set; determining a stretching matrix according to the second wheel track, the second wheel base, the first wheel track and the first wheel base, wherein the stretching matrix is used for scaling the first image along the first coordinate system; and obtaining the adjusted second coordinate point set based on the product of the stretching matrix and the second coordinate point set.
In the above alternative embodiment, the stretching matrix may be determined according to the first coordinate point set, the preset first wheel track and the preset first wheel base, and the coordinate points in the first image are scaled along the first coordinate system based on the stretching matrix. Because the first wheel track and the first wheel base are relatively fixed, the first image can be adjusted based on them, so that the first inverse perspective transformation matrix is more accurate.
Optionally, adjusting the fifth coordinate point set in the third image according to the fourth coordinate point set of the third image includes: determining an included angle between a first direction in which two wheels of the target vehicle are positioned and a second direction of the third image according to the fourth coordinate point set; and adjusting a fifth coordinate point set in the third image according to the included angle to obtain an adjusted fifth coordinate point set.
In the above optional embodiment, according to the included angle between the first direction in which the two wheels of the target vehicle are located in the third image and the second direction of the third image, the included angle can be adjusted toward its actual value, so that the fifth coordinate point set is adjusted accordingly and the solved second inverse perspective transformation matrix is more accurate.
Optionally, an included angle between the third direction in which each target vehicle is located and the lane line in the first image is smaller than the first threshold.
In the above alternative embodiment, the included angle between the third direction and the lane line is restricted to be smaller than the first threshold, so the target vehicle corresponding to the first coordinate point set is closer to parallel with the lane line, which reduces the error caused by vehicle offset when the second coordinate point set is adjusted; similarly, the target vehicle corresponding to the fourth coordinate point set is closer to parallel with the lane line, which reduces the error caused by vehicle offset when the fifth coordinate point set is adjusted.
Optionally, before adjusting the second coordinate point set in the first image according to the first coordinate point set in the first image, the preset first wheel track and the preset first wheel base, the method further includes: determining a third inverse perspective transformation matrix based on the third coordinate point set and the second coordinate point set, wherein the coordinate points in the third coordinate point set are in one-to-one correspondence with the coordinate points in the second coordinate point set; and determining the first image from the second image and the third inverse perspective transformation matrix.
In the above optional embodiment, the third inverse perspective transformation matrix is determined from two sets of coordinate points in one-to-one correspondence, so the first image can be obtained quickly; since only two sets of corresponding coordinate points are needed, determining the first image is also simpler.
Optionally, the method further comprises: a plurality of intersections of two sets of reference lines in the second image are determined, wherein the third set of coordinate points includes coordinates of the plurality of intersections.
In the above alternative embodiment, the third coordinate point set is determined through the intersection points of the reference lines, which makes the third coordinate point set more accurate, and the coordinates of intersection points are easier to determine.
In a second aspect, an embodiment of the present application provides an image processing apparatus, including: an adjusting module, configured to adjust a second coordinate point set in a first image according to a first coordinate point set in the first image, a preset first wheel track and a preset first wheel base, wherein the first coordinate point set includes the coordinate points where the wheels of a target vehicle touch the ground in the first image, and the first image is an image of a target scene in a first coordinate system; and an acquisition module, configured to obtain a first inverse perspective transformation matrix according to the adjusted second coordinate point set and a third coordinate point set, wherein the coordinate points in the third coordinate point set belong to a second image, the second image is an image of the target scene in a second coordinate system, and the first inverse perspective transformation matrix is used for converting coordinate points in the second image into coordinate points in the first image. The acquisition module is further configured to convert each coordinate point in the second image into a coordinate point under the first coordinate system according to the first inverse perspective transformation matrix to obtain a third image; the adjusting module is further configured to adjust a fifth coordinate point set in the third image according to a fourth coordinate point set of the third image, wherein the fourth coordinate point set includes the coordinate points where the tires of the target vehicle touch the ground in the third image, and the coordinate points in the fifth coordinate point set are in one-to-one correspondence with the coordinate points in the second coordinate point set; and the acquisition module is further configured to obtain a second inverse perspective transformation matrix according to the adjusted fifth coordinate point set and the third coordinate point set, wherein the second inverse perspective transformation matrix is used for converting coordinate points in the second image into coordinate points in the third image.
Optionally, the adjusting module is specifically configured to determine, according to the first coordinate point set, a second wheel track and a second wheel base of the target vehicle; determine a stretching matrix according to the second wheel track, the second wheel base, the first wheel track and the first wheel base, wherein the stretching matrix is used for scaling the first image along the first coordinate system; and obtain the adjusted second coordinate point set based on the product of the stretching matrix and the second coordinate point set.
Optionally, the adjusting module is specifically configured to determine, according to the fourth coordinate point set, an included angle between a first direction in which two wheels of the target vehicle are located and a second direction of the third image; and adjust the fifth coordinate point set in the third image according to the included angle to obtain the adjusted fifth coordinate point set.
Optionally, an included angle between the third direction in which each target vehicle is located and the lane line in the first image is smaller than the first threshold.
Optionally, the acquiring module is further configured to determine a third inverse perspective transformation matrix based on the third coordinate point set and the second coordinate point set, where coordinate points in the third coordinate point set are in one-to-one correspondence with coordinate points in the second coordinate point set; the first image is determined from the second image and the third inverse perspective transformation matrix.
Optionally, the acquiring module is further configured to determine a plurality of intersections of the two sets of reference lines in the second image, where the third coordinate point set includes coordinates of the plurality of intersections.
In a third aspect, embodiments of the present application provide an image processing apparatus, including: at least one processor, and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor, the at least one processor implementing the method of any one of the first aspects by executing the instructions stored by the memory.
In a fourth aspect, embodiments of the present application provide a computer program product comprising computer instructions which, when run on a computer, cause the method according to any one of the first aspects to be carried out.
In a fifth aspect, embodiments of the present application provide a computer-readable storage medium storing computer instructions that, when run on a computer, cause the computer to perform the method of any one of the first aspects.
The advantages of the second to fifth aspects can be understood with reference to the first aspect, and are not repeated here.
Drawings
Fig. 1 is a schematic diagram of an image processing scenario provided in an embodiment of the present application;
fig. 2 is a flowchart of an image processing method provided in an embodiment of the present application;
FIG. 3 is a schematic diagram of a second image and a first image according to an embodiment of the present disclosure;
FIG. 4 is a schematic diagram of a second image and a third image according to an embodiment of the present disclosure;
fig. 5 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application.
Detailed Description
For a better understanding of the technical solutions provided by the embodiments of the present application, the following detailed description will be given with reference to the drawings and specific embodiments.
The embodiment of the application provides an image processing method, which is applicable to any scenario in which an inverse perspective transformation matrix needs to be solved. Example scenarios to which embodiments of the present application may be applied are described below with reference to the accompanying drawings.
Referring to fig. 1, a schematic diagram of an image processing scenario provided in an embodiment of the present application is illustrated. The scene comprises an image acquisition device, an image processing device and terminal equipment.
The image processing apparatus may, for example, be a server or a terminal device with processing capability; the server may be a virtual server, such as a cloud server, or a physical server. The terminal device may be a vehicle-mounted terminal, a wearable device, a television, a mobile phone, a computer, or the like. The image acquisition device may be a device with an image capture function, such as a camera or a video camera, for example a monitoring camera mounted above the road surface.
The image acquisition device acquires an image of the road surface, and transmits the image of the road surface (for example, a perspective image) to the image processing device. The image processing device processes the image of the road surface and transmits the processed image (for example, a bird's eye view image) to the terminal device. The terminal device may display the processed image on a screen.
In one possible embodiment, at least two of the image acquisition device, the image processing apparatus and the terminal device are the same device, such as an automobile, or are deployed in the same system, such as an intelligent transportation system.
The image processing method according to the embodiment of the present application will be described below with reference to the accompanying drawings.
Fig. 2 is a schematic flow chart of an image processing method according to an embodiment of the present application. The image processing method may be performed by an image processing apparatus as shown in fig. 1.
S201, the image processing apparatus acquires a first image.
The first image is an image of a target scene in a first coordinate system, the target scene including at least one vehicle. Wherein the first coordinate system is the coordinate system of the digitized image of the target scene in the bird's eye view direction.
In a possible embodiment, the first image may be an image pre-stored by the image processing device or an image received from another device.
In another possible implementation manner, the first image may be obtained by the image processing apparatus converting a second image. The second image is an image in the second coordinate system, for example an image of the target scene captured by the image acquisition device. The second coordinate system may be a coordinate system centered on the image acquisition device, or the coordinate system of the images captured by the image acquisition device.
The image processing apparatus may convert the second image using a third inverse perspective transformation matrix to obtain the first image.
The manner in which the third inverse perspective transformation matrix is determined is described first.
The image processing apparatus determines a third inverse perspective transformation matrix based on the third coordinate point set and the second coordinate point set in the second image.
The coordinate points in the second coordinate point set are not coordinate points of the second image. Optionally, the third coordinate point set includes a plurality of intersection points, which may be intersection points of two sets of reference lines in the second image. Alternatively, the two sets of reference lines may be a set of reference parallel lines and a set of horizontal lines, and the plurality of intersection points are the intersection points of the set of reference parallel lines with the set of horizontal lines. The coordinate points in the third coordinate point set are in one-to-one correspondence with the coordinate points in the second coordinate point set.
For example, referring to fig. 3, fig. 3 is a schematic diagram of a second image and a first image, in which the two sets of reference lines are shown as a set of reference parallel lines and a set of horizontal lines. The image on the left of fig. 3 is an example of the second image, and the image on the right is an example of the first image. The left and right lines in the first image correspond to the reference parallel lines in the second image: line a in the second image corresponds to line a in the first image, and line b in the second image corresponds to line b in the first image. The four hollow vertices of the rectangle in the first image form the second coordinate point set, and the coordinate points in the second coordinate point set correspond one by one to the plurality of intersection points.
In one possible embodiment, the two sets of reference lines in the second image are two sets of reference parallel lines. In general, the two lines of a lane marking are approximately parallel, as are the two lines of a zebra crossing, the two sides of a square manhole cover, and the two edges of a regular road boundary. Therefore, in the embodiment of the application, the image processing apparatus may detect the second image with an image detection algorithm to obtain at least one set of reference parallel lines.
Alternatively, the image detection algorithm may be a lane line detection algorithm, such as the LaneNet algorithm. Specifically, the image processing apparatus may input the second image into the binary segmentation branch of LaneNet to output the lane-line pixels in the second image, and then input the lane-line pixels into the instance segmentation branch to obtain the different lane-line instances, thereby obtaining at least one set of lane lines, i.e. reference parallel lines.
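As an illustrative, non-limiting sketch (the segmentation branches belong to the trained LaneNet model and are not reproduced), the fragment below shows only a generic post-processing step: fitting a straight reference line to the pixels of one lane-line instance mask. The use of np.polyfit and the mask layout are assumptions made for this example.

```python
import numpy as np

def fit_reference_line(instance_mask):
    """Fit x = m*y + c to the pixels of one lane-line instance mask (H x W, boolean).

    Fitting x as a function of y is more stable for near-vertical lane lines.
    Returns the slope m and intercept c of the fitted reference line.
    """
    ys, xs = np.nonzero(instance_mask)
    m, c = np.polyfit(ys.astype(float), xs.astype(float), deg=1)
    return m, c

# Illustrative mask containing a thin, slightly tilted lane-line instance.
mask = np.zeros((1080, 1920), dtype=bool)
for y in range(540, 1080):
    mask[y, int(0.3 * y + 500)] = True
slope, intercept = fit_reference_line(mask)
```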
If there are two pairs of reference parallel lines in the second image, a plurality of intersections of the two pairs can be determined. If only one pair of reference parallel lines is present, the image processing apparatus may draw two horizontal lines parallel to the X-axis of the second coordinate system in the second image.
Optionally, the two horizontal lines may be drawn below the vanishing point; it is generally suggested to draw them in the lower portion of the second image. The vanishing point is the point in the image at which spatially parallel lines appear to converge towards the distant horizon, and it can be obtained by an image detection algorithm.
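By way of a hedged sketch (not taken from the patent text), the intersection points that form the third coordinate point set can be computed from homogeneous line coordinates; the point values and helper names below are illustrative assumptions.

```python
import numpy as np

def line_through(p1, p2):
    """Homogeneous coefficients (a, b, c) of the line a*x + b*y + c = 0 through two points."""
    return np.cross([p1[0], p1[1], 1.0], [p2[0], p2[1], 1.0])

def intersect(l1, l2):
    """Intersection of two homogeneous lines, or None if they are (nearly) parallel."""
    p = np.cross(l1, l2)
    return None if abs(p[2]) < 1e-9 else p[:2] / p[2]

# Hypothetical example: one pair of detected reference parallel lines (lane lines)
# and two horizontal lines drawn below the vanishing point in the second image.
lane_a = line_through((620, 1070), (900, 540))
lane_b = line_through((1320, 1070), (1010, 540))
h1 = line_through((0, 700), (1920, 700))    # horizontal line y = 700
h2 = line_through((0, 1000), (1920, 1000))  # horizontal line y = 1000

third_point_set = [intersect(l, h) for l in (lane_a, lane_b) for h in (h1, h2)]
```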
Optionally, the second coordinate point set and the third coordinate point set each include at least four coordinate points, so as to obtain a third inverse perspective transformation matrix.
Based on the third coordinate point set and the second coordinate point set, the third inverse perspective transformation matrix can be solved with reference to formula (1):

$$\begin{bmatrix} x' \\ y' \\ 1 \end{bmatrix} \sim H \begin{bmatrix} x \\ y \\ 1 \end{bmatrix}, \qquad H = \begin{bmatrix} h_{11} & h_{12} & h_{13} \\ h_{21} & h_{22} & h_{23} \\ h_{31} & h_{32} & h_{33} \end{bmatrix} \tag{1}$$

where (x, y) is a coordinate in the third coordinate point set, (x', y') is the corresponding coordinate in the second coordinate point set, and H is the homography matrix used for converting the coordinates in the third coordinate point set into the coordinates in the second coordinate point set.

From formula (1), the homogeneous scale factor given by the third row is

$$s = h_{31}x + h_{32}y + h_{33} \tag{2}$$

From formula (1) and formula (2), the following formulas (3) and (4) can be obtained:

$$x' = \frac{h_{11}x + h_{12}y + h_{13}}{h_{31}x + h_{32}y + h_{33}} \tag{3}$$

$$y' = \frac{h_{21}x + h_{22}y + h_{23}}{h_{31}x + h_{32}y + h_{33}} \tag{4}$$

Because x'_i and y'_i depend only on ratios of the entries of H, the numerator and denominator can both be divided by h_33; the resulting matrix H', whose bottom-right element equals 1, is the third inverse perspective transformation matrix and can be expressed as formula (5):

$$H' = \frac{H}{h_{33}} = \begin{bmatrix} h'_{11} & h'_{12} & h'_{13} \\ h'_{21} & h'_{22} & h'_{23} \\ h'_{31} & h'_{32} & 1 \end{bmatrix} \tag{5}$$
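A minimal sketch of solving the eight unknowns of formulas (3) to (5) by a direct linear transform (DLT) least-squares fit; the point correspondences below are hypothetical, and the solver is a generic method rather than the exact procedure of the patent.

```python
import numpy as np

def solve_homography(src_pts, dst_pts):
    """Solve H (with h33 = 1) mapping (x, y) in src_pts to (x', y') in dst_pts.

    Each of the >= 4 correspondences contributes two linear equations in the
    eight unknowns h11..h32, as in formulas (3) and (4).
    """
    A, b = [], []
    for (x, y), (xp, yp) in zip(np.asarray(src_pts, float), np.asarray(dst_pts, float)):
        A.append([x, y, 1, 0, 0, 0, -xp * x, -xp * y]); b.append(xp)
        A.append([0, 0, 0, x, y, 1, -yp * x, -yp * y]); b.append(yp)
    h, *_ = np.linalg.lstsq(np.asarray(A), np.asarray(b), rcond=None)
    return np.append(h, 1.0).reshape(3, 3)

# Hypothetical correspondences: third coordinate point set (second image)
# mapped to the second coordinate point set (first image).
third_set  = [(760, 700), (1115, 700), (433, 1000), (1295, 1000)]
second_set = [(500, 200), (900, 200), (500, 900), (900, 900)]
H3 = solve_homography(third_set, second_set)  # third inverse perspective transformation matrix
```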
S202, the image processing device adjusts a second coordinate point set according to a first coordinate point set in the first image, a preset first wheel track and a preset first wheel base.
The first coordinate point set includes coordinate points of the wheel footprint of the target vehicle in the first image. The first track and the first wheelbase are pre-stored in the image processing device.
According to the first coordinate point set, the second wheel track and the second wheel base corresponding to the target vehicle are determined, and the coordinates in the second coordinate point set are adjusted according to the second wheel track, the second wheel base, the first wheel track and the first wheel base, so that the adjusted second coordinate point set is obtained.
In one possible implementation, S202 may be implemented by S1.1-S1.3, described in detail below.
S1.1, the image processing device determines a second wheel track and a second wheel base of the target vehicle according to the first coordinate point set.
In one possible embodiment, the image processing apparatus may identify a vehicle satisfying the first condition from the first image, and take the vehicle as the target vehicle. For example, the image processing apparatus may determine a vehicle in the first image that satisfies the first condition using an image recognition algorithm. The target vehicle may include one or more vehicles. The first condition includes the vehicle type satisfying a first sub-condition and/or the vehicle location satisfying a second sub-condition.
By way of example, the first sub-condition may be that the type of vehicle is a car. The second sub-condition may, for example, include that an angle between a line formed by two wheel attachment points on the right side of the vehicle and a lane line in the first image is smaller than a first threshold value. The first threshold may be preset, typically taking a value of 5 degrees.
Specifically, the vehicle type is determined through an image recognition algorithm to obtain a first vehicle set in which the vehicle type is a car; the two wheel ground-contact points on the right side of each first vehicle in the first vehicle set are detected by a ground-contact-point detection model; the lane line in the first image is detected by a lane line detection algorithm; and a first vehicle for which the included angle between the line formed by its two right-side wheel ground-contact points and the lane line is smaller than the first threshold is determined as the target vehicle.
Since there may be more than one target vehicle, the second wheel track may be taken as the average of the wheel tracks of the target vehicles, and the second wheel base as the average of the wheel bases of the target vehicles.
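The following fragment illustrates S1.1 under stated assumptions: each vehicle's wheel ground-contact points are available as a small dictionary (a layout invented for this example), the wheel track is the left-right spacing, the wheel base is the front-rear spacing, and the second wheel track and second wheel base are the averages over all target vehicles.

```python
import numpy as np

def track_and_wheelbase(contacts):
    """contacts: dict with keys 'fl', 'fr', 'rl', 'rr' -> (x, y) ground-contact points.
    Returns (wheel track, wheel base) measured in first-image units."""
    fl, fr = np.array(contacts["fl"]), np.array(contacts["fr"])
    rl, rr = np.array(contacts["rl"]), np.array(contacts["rr"])
    track = (np.linalg.norm(fr - fl) + np.linalg.norm(rr - rl)) / 2      # left-right spacing
    wheelbase = (np.linalg.norm(rl - fl) + np.linalg.norm(rr - fr)) / 2  # front-rear spacing
    return track, wheelbase

# Illustrative ground-contact points for two target vehicles (first coordinate point set).
vehicles = [
    {"fl": (410, 620), "fr": (470, 622), "rl": (408, 730), "rr": (468, 733)},
    {"fl": (812, 410), "fr": (870, 411), "rl": (811, 518), "rr": (869, 520)},
]
tracks, bases = zip(*(track_and_wheelbase(v) for v in vehicles))
second_track, second_wheelbase = float(np.mean(tracks)), float(np.mean(bases))
```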
S1.2, the image processing device determines a stretching matrix according to the second wheel track, the second wheel base, the first wheel track and the first wheel base.
The stretching matrix is used to scale the first image along the first coordinate system. The first coordinate system is a coordinate system corresponding to the first image.
In one possible embodiment, the first wheel track and the first wheel base may be obtained from a first table pre-stored in the image processing device. The first table stores the wheel track and wheel base corresponding to each vehicle type; the vehicle type of the target vehicle is looked up in the first table, and the wheel track and wheel base obtained are used as the first wheel track and the first wheel base, respectively.
One way of determining the stretching matrix is given by formulas (6) to (8). The stretching matrix takes the form of formula (6):

$$H_1 = \begin{bmatrix} x_s & 0 & 0 \\ 0 & y_s & 0 \\ 0 & 0 & 1 \end{bmatrix} \tag{6}$$

where x_s and y_s are calculated with formulas (7) and (8):

$$x_s = \frac{w_0}{\bar{w}} \tag{7}$$

$$y_s = \frac{l_0}{\bar{l}} \tag{8}$$

where w_0 is the first wheel track, l_0 is the first wheel base, \bar{w} is the second wheel track, and \bar{l} is the second wheel base.
S1.3, the image processing device obtains the adjusted second coordinate point set based on the product of the stretching matrix and the second coordinate point set.
The product of the stretching matrix and the second coordinate point set is calculated as in formula (9):

$$\begin{bmatrix} x \\ y \\ 1 \end{bmatrix} = H_1 \begin{bmatrix} x' \\ y' \\ 1 \end{bmatrix} \tag{9}$$

where (x', y') is a coordinate in the second coordinate point set, (x, y) is the corresponding adjusted coordinate, and H_1 is the stretching matrix.
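A minimal sketch of formulas (6) to (9), assuming the stretching matrix is the diagonal scaling diag(x_s, y_s, 1) given above; the numeric values are illustrative only and units are assumed consistent between the preset and measured quantities.

```python
import numpy as np

def stretch_matrix(first_track, first_wheelbase, second_track, second_wheelbase):
    """H1 = diag(x_s, y_s, 1) with x_s = w0 / w_bar and y_s = l0 / l_bar (formulas (6)-(8))."""
    return np.diag([first_track / second_track, first_wheelbase / second_wheelbase, 1.0])

def apply_matrix(H, pts):
    """Apply a 3x3 matrix to (N, 2) points in homogeneous form, as in formula (9)."""
    pts = np.asarray(pts, dtype=float)
    hom = np.hstack([pts, np.ones((len(pts), 1))]) @ H.T
    return hom[:, :2] / hom[:, 2:3]

# Preset first wheel track / wheel base (e.g. from the first table) versus measured values.
H1 = stretch_matrix(first_track=1.6, first_wheelbase=2.7,
                    second_track=1.45, second_wheelbase=2.95)
second_set = [(500, 200), (900, 200), (500, 900), (900, 900)]   # illustrative second coordinate point set
adjusted_second_set = apply_matrix(H1, second_set)
```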
And S203, the image processing device obtains a first inverse perspective transformation matrix according to the adjusted second coordinate point set and the third coordinate point set.
The first inverse perspective transformation matrix is used for converting the coordinate points in the second image into coordinate points in the first image. Its structure is the same as that of the third inverse perspective transformation matrix, with 8 unknowns. Since the coordinates in the adjusted second coordinate point set and in the third coordinate point set are two-dimensional and correspond one to one, the two sets each need to contain at least four coordinate points for the first inverse perspective transformation matrix to be solved.
The method for solving the first inverse perspective transformation matrix is the same as the method for solving the third inverse perspective transformation matrix, and will not be described herein.
S204, the image processing device converts each coordinate point in the second image into a coordinate point under the first coordinate system according to the first inverse perspective transformation matrix, obtaining a third image.
Each coordinate point in the second image is multiplied by the first inverse perspective transformation matrix, and the resulting set of coordinate points constitutes the third image; the specific calculation can refer to formula (3) and formula (4).
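Under the assumption that OpenCV is available, S204 can be sketched with cv2.warpPerspective, which applies the same per-pixel mapping as formulas (3) and (4); the placeholder image, the matrix values and the output size below are all illustrative.

```python
import cv2
import numpy as np

# Placeholder for the second image captured by the image acquisition device,
# so that the sketch runs stand-alone.
second_image = np.zeros((1080, 1920, 3), dtype=np.uint8)

# Placeholder for the first inverse perspective transformation matrix solved in S203.
H_first = np.array([[1.2, 0.30, -150.0],
                    [0.0, 2.10, -420.0],
                    [0.0, 0.001,   1.0]])

# Map every coordinate point of the second image into the first coordinate system.
third_image = cv2.warpPerspective(second_image, H_first, (1400, 1800))
```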
S205, the image processing apparatus adjusts the fifth coordinate point set in the third image according to the fourth coordinate point set of the third image.
The fourth coordinate point set includes the coordinate points where the tires of the target vehicle touch the ground in the third image, and the coordinate points in the fifth coordinate point set are in one-to-one correspondence with the coordinate points in the second coordinate point set.
For example, referring to fig. 4, a schematic diagram of a second image and a third image is shown, where the second coordinate point set is the intersection point of the reference lines in the left graph of fig. 4, and the fifth coordinate point set is the four vertex angles of the dashed rectangle in the right graph of fig. 4.
In one possible embodiment, the target vehicle in the third image may be selected in the same manner as in S1.1 above. The ground-contact points of the target vehicle in the third image are then detected through an image detection algorithm, yielding the fourth coordinate point set.
Specifically, the right side of fig. 4 can be referred to as an example of the third image. The included angle between the third direction, in which the two right-side wheels of the target vehicle are located, and the lane line in the third image is smaller than the first threshold. The third direction may be the direction of the straight line formed by the right-front and right-rear wheel ground-contact points; the lane line in the third image can be regarded as line c in fig. 4, and the included angle between the third direction and the lane line can be regarded as the angle between the third direction and a line parallel to the lane line, i.e. the angle θ in the figure.
In one possible embodiment, adjusting the fifth coordinate point set in the third image according to the fourth coordinate point set of the third image may include the following S2.1 and S2.2.
S2.1, the image processing device determines an included angle between a first direction in which two wheels of the target vehicle are located and a second direction of the third image according to the fourth coordinate point set.
The second direction of the third image may be the lane line direction in the third image, and the first direction in which two wheels of the target vehicle are located may be the direction in which the two rear wheels of the target vehicle are located.
For example, from the line equation corresponding to the first direction and the line equation corresponding to the second direction, the included angle between the first direction and the second direction is calculated with the angle formula between two straight lines.
For example, with continued reference to fig. 4, the first direction may be the direction of the straight line formed by the left-rear and right-rear wheel ground-contact points of the target vehicle, the second direction can be regarded as the right edge of the rectangle on the right of fig. 4, and the included angle between the first direction and the second direction is β.
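An illustrative computation of the included angle β in S2.1, assuming the first direction is given by the two rear-wheel ground-contact points and the second direction by the (here vertical) lane-line direction of the third image; all coordinates are invented for the example.

```python
import numpy as np

def included_angle(dir_a, dir_b):
    """Unsigned angle in degrees between two 2-D direction vectors."""
    a, b = np.asarray(dir_a, float), np.asarray(dir_b, float)
    cos_t = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return float(np.degrees(np.arccos(np.clip(cos_t, -1.0, 1.0))))

rear_left = np.array([602.0, 905.0])     # left-rear wheel ground-contact point
rear_right = np.array([668.0, 917.0])    # right-rear wheel ground-contact point
first_direction = rear_right - rear_left
second_direction = np.array([0.0, 1.0])  # lane-line direction, assumed vertical in the third image

beta = included_angle(first_direction, second_direction)  # ideally close to 90 degrees
```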
S2.2, the image processing device adjusts the fifth coordinate point set in the third image according to the included angle to obtain an adjusted fifth coordinate point set.
In one possible embodiment, a shear matrix is generated from the included angle, and the adjusted fifth coordinate point set is obtained as the product of the shear matrix and the fifth coordinate point set. A shear matrix here is a coordinate-system transformation that stretches an image or coordinate points non-uniformly.
By way of example, the shear matrix can take the form

$$H_2 = \begin{bmatrix} 1 & \tan\alpha & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}$$

where α is the complementary angle of the included angle β.
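A sketch consistent with the shear matrix shown above; the exact placement of the shear term is an assumption, since the published formula is only available as a figure, and the point values are illustrative.

```python
import numpy as np

def shear_matrix(beta_deg):
    """Build H2 from the complement alpha = 90 deg - beta of the included angle (assumed form)."""
    alpha = np.radians(90.0 - beta_deg)
    return np.array([[1.0, np.tan(alpha), 0.0],
                     [0.0, 1.0,           0.0],
                     [0.0, 0.0,           1.0]])

def apply_matrix(H, pts):
    """Apply a 3x3 matrix to (N, 2) points in homogeneous form."""
    pts = np.asarray(pts, dtype=float)
    hom = np.hstack([pts, np.ones((len(pts), 1))]) @ H.T
    return hom[:, :2] / hom[:, 2:3]

H2 = shear_matrix(beta_deg=79.7)                              # beta obtained in S2.1
fifth_set = [(500, 200), (900, 200), (500, 900), (900, 900)]  # illustrative fifth coordinate point set
adjusted_fifth_set = apply_matrix(H2, fifth_set)
```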
S206, the image processing device obtains a second inverse perspective transformation matrix according to the adjusted fifth coordinate point set and the third coordinate point set.
The second inverse perspective transformation matrix is used to convert the coordinate points in the second image into the coordinate points in the third image, and of course, the second inverse perspective transformation matrix may also be used to convert the coordinate points in the third image into the coordinate points in the second image.
Specifically, the structure of the second inverse perspective transformation matrix is the same as that of the third inverse perspective transformation matrix, with 8 unknowns. Since the coordinates in the adjusted fifth coordinate point set and in the third coordinate point set are two-dimensional and correspond one to one, the two sets each need to contain at least four coordinate points for the second inverse perspective transformation matrix to be solved.
The specific method for solving the second inverse perspective transformation matrix is the same as the third inverse perspective transformation matrix solving method, and will not be described herein.
Based on the same inventive concept, the embodiments of the present application provide an image processing apparatus. Referring to fig. 5, an image processing apparatus is provided in an embodiment of the present application.
As shown in fig. 5, the image processing apparatus 500 includes an adjustment module 501 and an acquisition module 502. The image processing apparatus 500 may be applied to the image processing apparatus shown in fig. 1, or a software or hardware module in the image processing apparatus shown in fig. 1, or the like.
The adjustment module 501 is configured to adjust a second coordinate point set in the first image according to a first coordinate point set in the first image, a preset first wheel track and a preset first wheel base, where the first coordinate point set includes the coordinate points where the wheels of the target vehicle touch the ground in the first image, and the first image is an image of the target scene in the first coordinate system;
the obtaining module 502 is configured to obtain a first inverse perspective transformation matrix according to the adjusted second coordinate point set and a third coordinate point set, where coordinate points in the third coordinate point set belong to a second image, the second image is an image of the target scene in the second coordinate system, and the first inverse perspective transformation matrix is used to convert coordinate points in the second image into coordinate points in the first image; the obtaining module 502 is further configured to convert each coordinate point in the second image into a coordinate point in the first coordinate system according to the first inverse perspective transformation matrix, so as to obtain a third image;
the adjusting module 501 is further configured to adjust a fifth coordinate point set in the third image according to a fourth coordinate point set of the third image, where the fourth coordinate point set includes coordinate points of the tire of the target vehicle in the third image that are attached to the ground, and the coordinate points in the fifth coordinate point set are in one-to-one correspondence with the coordinate points in the second coordinate point set;
the obtaining module 502 is further configured to obtain a second inverse perspective transformation matrix according to the adjusted fifth coordinate point set and the third coordinate point set, where the second inverse perspective transformation matrix is used to convert the coordinate points in the second image into the coordinate points in the third image.
Optionally, the adjusting module 501 is specifically configured to determine, according to the first coordinate point set, a second wheel track and a second wheel base of the target vehicle; determine a stretching matrix according to the second wheel track, the second wheel base, the first wheel track and the first wheel base, where the stretching matrix is used for scaling the first image along the first coordinate system; and obtain the adjusted second coordinate point set based on the product of the stretching matrix and the second coordinate point set.
Optionally, the adjusting module 501 is specifically configured to determine, according to the fourth coordinate point set, an included angle between a first direction in which two wheels of the target vehicle are located and a second direction of the third image; and adjust the fifth coordinate point set in the third image according to the included angle to obtain the adjusted fifth coordinate point set.
Optionally, an included angle between the third direction in which each target vehicle is located and the lane line in the first image is smaller than the first threshold.
Optionally, the obtaining module 502 is further configured to determine a third inverse perspective transformation matrix based on the third coordinate point set and the second coordinate point set, where coordinate points in the third coordinate point set are in one-to-one correspondence with coordinate points in the second coordinate point set; the first image is determined from the second image and the third inverse perspective transformation matrix.
Optionally, the obtaining module 502 is further configured to determine a plurality of intersections of the two sets of reference lines in the second image, where the third coordinate point set includes coordinates of the plurality of intersections.
The steps performed by the adjustment module 501 and the obtaining module 502 can be understood with reference to the description of fig. 2, and are not repeated here.
Based on the same inventive concept, the embodiments of the present application provide an image processing apparatus. Referring to fig. 6, an embodiment of the present application provides an image processing apparatus 600 including: at least one processor 601, and a memory 602 communicatively coupled to the at least one processor 601; wherein the memory 602 stores instructions executable by the at least one processor 601, the at least one processor 601 implements the method of image processing as shown in fig. 2 by executing the instructions stored by the memory 602.
The processor 601 may be a central processing unit (central processing unit, CPU), other general purpose processor, digital signal processor (digital signal processor, DSP), application specific integrated circuit (application specific integrated circuit, ASIC), field programmable gate array (field programmable gate array, FPGA) or other programmable logic device, transistor logic device, hardware components, or any combination thereof.
The memory 602 may include volatile memory, such as random access memory (RAM). The memory may also include non-volatile memory, such as read-only memory (ROM), flash memory, a mechanical hard disk drive (HDD), or a solid state drive (SSD).
Based on the same inventive concept, embodiments of the present application provide a computer-readable storage medium storing computer instructions that, when run on a computer, cause the computer to perform a method of image processing as shown in fig. 2.
Based on the same inventive concept, embodiments of the present application provide a computer program product, which contains computer instructions that, when run on a computer, cause the computer to perform the method of image processing shown in fig. 2 described above.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be apparent to those skilled in the art that various modifications and variations can be made in the present application without departing from the spirit or scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims and the equivalents thereof, the present application is intended to cover such modifications and variations.

Claims (10)

1. An image processing method, comprising:
according to a first coordinate point set in a first image, a preset first wheel track and a preset first wheel base, a second coordinate point set in the first image is adjusted, the first coordinate point set comprises coordinate points where wheels of a target vehicle touch the ground in the first image, and the first image is an image of a target scene in a first coordinate system;
obtaining a first inverse perspective transformation matrix according to the adjusted second coordinate point set and a third coordinate point set, wherein coordinate points in the third coordinate point set belong to a second image, the second image is an image of the target scene in a second coordinate system, and the first inverse perspective transformation matrix is used for converting coordinate points in the second image into coordinate points in the first image;
converting each coordinate point in the second image into a coordinate point under the first coordinate system according to the first inverse perspective transformation matrix to obtain a third image;
according to a fourth coordinate point set of the third image, a fifth coordinate point set in the third image is adjusted, wherein the fourth coordinate point set comprises coordinate points where tires of the target vehicle touch the ground in the third image, and the coordinate points in the fifth coordinate point set are in one-to-one correspondence with the coordinate points in the second coordinate point set;
and obtaining a second inverse perspective transformation matrix according to the adjusted fifth coordinate point set and the third coordinate point set, wherein the second inverse perspective transformation matrix is used for converting the coordinate points in the second image into the coordinate points in the third image.
2. The method of claim 1, wherein adjusting the second set of coordinate points in the first image based on the first set of coordinate points in the first image and the preset first wheel track and first wheel base comprises:
determining a second wheel track and a second wheel base of the target vehicle according to the first coordinate point set;
determining a stretching matrix according to the second wheel track, the second wheel base, the first wheel track and the first wheel base, wherein the stretching matrix is used for scaling the first image along the first coordinate system;
and obtaining the adjusted second coordinate point set based on the product of the stretching matrix and the second coordinate point set.
3. The method of claim 1, wherein adjusting the fifth set of coordinate points in the third image based on the fourth set of coordinate points of the third image comprises:
determining an included angle between a first direction in which two wheels of the target vehicle are located and a second direction of the third image according to the fourth coordinate point set;
and adjusting a fifth coordinate point set in the third image according to the included angle to obtain the adjusted fifth coordinate point set.
4. The method of claim 1, wherein an included angle between a third direction in which each target vehicle is located and a lane line in the first image is less than a first threshold.
5. The method of any of claims 1-4, wherein prior to adjusting the second set of coordinate points in the first image based on the first set of coordinate points in the first image and the preset first wheel track and first wheel base, the method further comprises:
determining a third inverse perspective transformation matrix based on the third coordinate point set and the second coordinate point set, wherein coordinate points in the third coordinate point set are in one-to-one correspondence with coordinate points in the second coordinate point set;
the first image is determined from the second image and the third inverse perspective transformation matrix.
6. The method of claim 5, wherein the method further comprises:
a plurality of intersection points of two sets of reference lines in the second image are determined, wherein the third set of coordinate points includes coordinates of the plurality of intersection points.
7. An image processing apparatus, characterized in that the apparatus comprises:
the adjusting module is used for adjusting a second coordinate point set in the first image according to a first coordinate point set in the first image, a preset first wheel track and a preset first wheel base, wherein the first coordinate point set comprises coordinate points of the ground contact of wheels of a target vehicle in the first image, and the first image is an image of a target scene in a first coordinate system;
the acquisition module is used for acquiring a first inverse perspective transformation matrix according to the adjusted second coordinate point set and a third coordinate point set, wherein coordinate points in the third coordinate point set belong to a second image, the second image is an image of the target scene in a second coordinate system, and the first inverse perspective transformation matrix is used for converting coordinate points in the second image into coordinate points in the first image;
the acquisition module is further configured to convert each coordinate point in the second image into a coordinate point in the first coordinate system according to the first inverse perspective transformation matrix, so as to obtain a third image;
the adjusting module is further configured to adjust a fifth coordinate point set in the third image according to a fourth coordinate point set of the third image, where the fourth coordinate point set comprises coordinate points where the tires of the target vehicle touch the ground in the third image, and the coordinate points in the fifth coordinate point set are in one-to-one correspondence with the coordinate points in the second coordinate point set;
the obtaining module is further configured to obtain a second inverse perspective transformation matrix according to the adjusted fifth coordinate point set and the third coordinate point set, where the second inverse perspective transformation matrix is used to convert the coordinate points in the second image into the coordinate points in the third image.
8. An image processing apparatus, characterized by comprising:
at least one processor, and
a memory communicatively coupled to the at least one processor;
wherein the memory stores instructions executable by the at least one processor, the at least one processor implementing the method of any of claims 1-6 by executing the memory stored instructions.
9. A computer program product comprising computer instructions which, when run on a computer, cause the method of any of claims 1-6 to be carried out.
10. A computer readable storage medium storing computer instructions which, when run on a computer, cause the computer to perform the method of any one of claims 1-6.
CN202311815249.3A 2023-12-26 2023-12-26 Image processing method, device and equipment Pending CN117830967A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311815249.3A CN117830967A (en) 2023-12-26 2023-12-26 Image processing method, device and equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311815249.3A CN117830967A (en) 2023-12-26 2023-12-26 Image processing method, device and equipment

Publications (1)

Publication Number Publication Date
CN117830967A true CN117830967A (en) 2024-04-05

Family

ID=90507208

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311815249.3A Pending CN117830967A (en) 2023-12-26 2023-12-26 Image processing method, device and equipment

Country Status (1)

Country Link
CN (1) CN117830967A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination