CN109815831B - Vehicle orientation obtaining method and related device

Publication number: CN109815831B (other versions: CN109815831A)
Application number: CN201811626175.8A
Authority: CN (China)
Inventor: 苏英菲
Assignee: Neusoft Reach Automotive Technology Shenyang Co Ltd
Legal status: Active
Prior art keywords: vehicle, orientation, rectangle, lane, coordinate
Abstract

The application discloses a vehicle orientation obtaining method and a related device. First, an overhead image of a vehicle is obtained, where the overhead image contains the direction information of the lane in which the vehicle is located. The overhead image is then segmented by a deep neural network to obtain a processed image that contains a closed region representing the vehicle. A rectangle is obtained from the closed region, and finally the orientation of the vehicle is determined from the rectangle together with the direction information of the lane. Because the deep neural network segmentation is not affected by the pose of the vehicle in the image, the method has a wider range of application than the prior art: it can accurately identify the orientation of a vehicle in a non-horizontal or non-vertical pose. In addition, by combining the direction information of the lane, the vehicle represented by the rectangle is matched with the lane direction in the actual road conditions, which effectively improves the accuracy of the obtained vehicle orientation.

Description

Vehicle orientation obtaining method and related device
Technical Field
The present application relates to the field of intelligent transportation technologies, and in particular, to a vehicle orientation obtaining method and a related device.
Background
With the improvement of people's living standards, the number of vehicles people use in daily travel keeps increasing. To provide more convenient travel and parking services, some map systems and parking-lot systems plan a driving path for the user's own vehicle by intelligently recognizing the vehicles in the driving scene.
The orientation information of a vehicle is very important for such driving path planning, because the vehicle orientation may deviate from the lane direction and tends to correlate with the vehicle's subsequent driving behavior. If the obtained vehicle orientation is inaccurate, applications such as path planning are easily affected.
Some methods exist for determining the orientation of a vehicle. One example is radar, which is expensive and susceptible to interference from obstacles in its detection range. There are also object detection methods such as Faster R-CNN and YOLO, but these can only recognize the orientation of a vehicle in a horizontal or vertical pose in the image and cannot recognize the orientation of a vehicle in an oblique pose. Since the pose of a vehicle in an image varies, the application scenarios of such methods are very limited.
Therefore, existing methods cannot accurately identify the vehicle orientation, and how to acquire an accurate vehicle orientation has become a technical problem that needs to be solved in this field.
Disclosure of Invention
Based on the above problems, the application provides a vehicle orientation obtaining method and a related device, so as to realize accurate identification of orientations of vehicles with different poses.
The embodiment of the application discloses the following technical scheme:
in a first aspect, the present application provides a vehicle orientation acquisition method, including:
acquiring an overhead view image of a vehicle, wherein the overhead view image contains the direction information of a lane where the vehicle is located;
utilizing a deep neural network to perform segmentation processing on the overhead image to obtain a processed image, wherein the processed image comprises a closed region representing the vehicle;
obtaining a rectangle according to the closed area;
determining the orientation of the vehicle by using the rectangle and the direction information of the lane.
Optionally, the obtaining a rectangle according to the closed region specifically includes:
obtaining coordinate values of edge pixels of the closed area;
and obtaining the minimum circumscribed rectangle or the maximum inscribed rectangle of the closed region according to the coordinate values of the edge pixels.
Optionally, the determining the orientation of the vehicle by using the rectangle and the direction information of the lane specifically includes:
obtaining coordinate values of any two pixels on one long side of the rectangle;
determining vehicle orientation vectors respectively taking the two pixels as a starting point and an end point according to the direction information of the lane;
and acquiring an included angle between the orientation vector of the vehicle and a coordinate axis by using an inverse trigonometric function according to the coordinate values of the two pixels, and determining the orientation of the vehicle.
Optionally, the method further comprises:
and determining the orientation of the vehicle in the geographic coordinate system according to the direction represented by the coordinate axis in the geographic coordinate system and the included angle.
Optionally, after obtaining a rectangle according to the closed region, the method further includes:
obtaining coordinate values of a plurality of vertexes of the rectangle; at least two diagonal vertices are included in the plurality of vertices;
obtaining coordinate values of the center of mass of the vehicle according to the coordinate values of the plurality of vertexes;
and carrying out coordinate transformation on the coordinate value of the centroid to obtain the position of the vehicle in a geographic coordinate system.
Optionally, after the determining the orientation of the vehicle, the method further comprises:
and simulating the running condition of the vehicle according to the orientation of the vehicle.
In a second aspect, the present application provides a vehicle orientation obtaining apparatus, including:
the system comprises an image acquisition module, a display module and a control module, wherein the image acquisition module is used for acquiring an overhead view image of a vehicle, and the overhead view image contains the direction information of a lane where the vehicle is located;
the image segmentation module is used for performing segmentation processing on the overhead image by utilizing a deep neural network to obtain a processed image, wherein the processed image comprises a closed region representing the vehicle;
the rectangle obtaining module is used for obtaining a rectangle according to the closed area;
and the vehicle orientation determining module is used for determining the orientation of the vehicle by using the rectangle and the direction information of the lane.
Optionally, the rectangle obtaining module specifically includes:
a coordinate value first acquisition unit configured to acquire coordinate values of edge pixels of the closed region;
and the rectangle acquisition unit is used for acquiring the minimum circumscribed rectangle or the maximum inscribed rectangle of the closed area according to the coordinate values of the edge pixels.
Optionally, the vehicle orientation determining module specifically includes:
a second coordinate value acquisition unit configured to acquire coordinate values of any two pixels on one long side of the rectangle;
the vehicle orientation vector determining unit is used for determining vehicle orientation vectors which respectively take the two pixels as a starting point and an end point according to the direction information of the lane;
and the vehicle orientation determining unit is used for acquiring an included angle between the vehicle orientation vector and the coordinate axis by using an inverse trigonometric function according to the coordinate values of the two pixels, and determining the orientation of the vehicle.
Optionally, the apparatus further comprises:
and the vehicle geographic orientation determining module is used for determining the orientation of the vehicle in a geographic coordinate system according to the direction represented by the coordinate axis in the geographic coordinate system and the included angle.
Optionally, the apparatus further comprises:
the vertex coordinate value acquisition module is used for acquiring coordinate values of a plurality of vertexes of the rectangle; at least two diagonal vertices are included in the plurality of vertices;
the mass center coordinate value calculation module is used for obtaining the coordinate value of the mass center of the vehicle according to the coordinate values of the plurality of vertexes;
and the vehicle geographic position acquisition module is used for carrying out coordinate transformation on the coordinate value of the centroid to acquire the position of the vehicle in a geographic coordinate system.
Optionally, the apparatus further comprises:
and the vehicle running condition simulation module is used for simulating the running condition of the vehicle according to the orientation of the vehicle.
Compared with the prior art, the method has the following beneficial effects:
the method for acquiring the vehicle orientation includes the steps that firstly, an overhead view image of a vehicle is acquired, wherein the overhead view image contains the direction information of a lane where the vehicle is located; then, the overlook image is segmented by utilizing a deep neural network, and a processed image is obtained, wherein the processed image comprises a closed area representing the vehicle; obtaining a rectangle according to the closed area; and finally, determining the orientation of the vehicle by using the direction information of the rectangle and the lane.
When the image is segmented and processed by the deep neural network, the influence of the pose of the vehicle in the image is avoided, and even if the vehicle is not in the horizontal or vertical pose in the image, the closed area representing the vehicle can be segmented and obtained. In addition, the direction information of the lane is combined, the vehicle represented by the rectangle is matched with the direction of the lane in the actual road condition, and the accuracy of the obtained vehicle direction is effectively improved.
Drawings
In order to illustrate the embodiments of the present application or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present application; those skilled in the art can obtain other drawings from these drawings without inventive effort.
Fig. 1 is a flowchart of a vehicle orientation obtaining method according to an embodiment of the present application;
FIG. 2 is a schematic top view of an embodiment of the present application;
fig. 3 is a top view image input to a MultiNet segmentation network according to an embodiment of the present application;
fig. 4 is an image output after the MultiNet segmentation network provided in the embodiment of the present application performs the segmentation processing on fig. 3;
FIG. 5 is a schematic diagram of a rectangle obtained by processing a closed region according to an embodiment of the present application;
FIG. 6 is a flow chart of another vehicle orientation acquisition method provided by the embodiments of the present application;
FIG. 7a is a schematic view of a first vehicle orientation provided by an embodiment of the present application;
FIG. 7b is a schematic view of a second vehicle orientation provided by an embodiment of the present application;
FIG. 7c is a schematic view of a third vehicle orientation provided by an embodiment of the present application;
FIG. 7d is a schematic view of a fourth vehicle orientation provided by the exemplary embodiment of the present application;
fig. 8 is a schematic structural diagram of a vehicle orientation obtaining device according to an embodiment of the present application.
Detailed Description
As described above, some existing vehicle orientation identification methods suffer from insufficient identification accuracy. Object detection methods such as Faster R-CNN and YOLO either have difficulty identifying the orientation of a vehicle at all, or can only identify the orientation of a vehicle in a particular pose and cannot identify the orientation of a vehicle in other poses.
In view of these problems, the inventors propose a vehicle orientation obtaining method and apparatus. The method obtains an overhead image of a vehicle, segments the image with a deep neural network, derives a rectangle representing the vehicle from the segmented closed region, and finally determines the orientation of the vehicle by using the rectangle together with the direction information of the lane originally contained in the overhead image. The method is not limited by the pose of the vehicle in the image. Moreover, because the image is segmented by a deep neural network, the accuracy of the segmented closed region is high, and the rectangle therefore matches the vehicle extremely well. In addition, the method draws on the direction information of the lane and matches the rectangle against the lane direction, so the finally determined vehicle orientation agrees closely with the actual road conditions. The accuracy of the vehicle orientation obtained by the method is thus effectively improved.
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
First embodiment
Referring to fig. 1, the figure is a flowchart of a vehicle orientation obtaining method provided in an embodiment of the present application.
As shown in fig. 1, the vehicle orientation acquiring method according to the present embodiment includes:
step 101: the method comprises the steps of obtaining an overhead view image of a vehicle, wherein the overhead view image comprises the direction information of a lane where the vehicle is located.
In practical applications, the overhead image of a vehicle can be acquired by methods such as aerial photography from an unmanned aerial vehicle or a camera mounted at a high position. One overhead image may contain one or more vehicles, each of which is traveling in a lane.
It is understood that the driving direction of each lane in the overhead image is determined, for example, from south to north, from east to west, and the like. Whether the vehicle is identified manually or by a machine, the direction information of the lane where the vehicle is located can be acquired on the basis of the overhead view image.
Referring to fig. 2, a schematic top view image provided in this embodiment is shown. As can be seen from fig. 2, the overhead image includes a plurality of lanes, each with a vehicle traveling in it. If the lane is marked with a driving direction arrow 201, that arrow can serve as the direction information of the lane in which vehicle 202 is located. If no direction arrow is marked on the lane, the lane direction can instead be inferred from the adjacency between the vehicle's lane and the non-lane area. Taking fig. 2 as an example, the lane in which vehicle 202 is located is bounded by lane line 204 and lane line 205, where lane line 204 is adjacent to another lane and lane line 205 is adjacent to non-lane area 203. According to the driving rules, the non-lane area is on the right side of a vehicle traveling in its lane, so it can be determined that the direction of the lane in which vehicle 202 is located is the same as the direction indicated by arrow 201. Thus, each overhead image directly or indirectly contains the direction information of the lane in which the vehicle to be oriented is located.
Step 102: performing segmentation processing on the overhead image by utilizing a deep neural network to obtain a processed image, wherein the processed image comprises a closed region representing the vehicle.
As one possible implementation, this step may employ MultiNet to segment the overhead image. MultiNet contains a detection network, KittiBox, and a segmentation network, KittiSeg; in this embodiment only the segmentation network is needed to segment the overhead image. Because in the input overhead image there are large differences between a vehicle and the lane, and between a vehicle and other objects, each vehicle in the original overhead image appears in the processed image output by the segmentation network as a closed region distinguished from its surroundings.
Referring to fig. 3 and fig. 4: fig. 3 is an overhead image input to the MultiNet segmentation network in this embodiment, and fig. 4 is the image output after the segmentation network processes fig. 3. Comparing the two figures shows that each vehicle in fig. 3 is converted into a closed region in fig. 4, with the vehicles in fig. 3 and the closed regions in fig. 4 in one-to-one correspondence.
It should be understood that MultiNet is only an example; other trained neural networks may also be used in practice to segment the input overhead image.
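To make the role of the segmentation step concrete, the sketch below stands in for the trained KittiSeg network with a simple intensity threshold; the function name `segment_overhead_image` and the threshold value are illustrative assumptions, and a real implementation would run the trained network on the overhead image instead.

```python
def segment_overhead_image(image, threshold=0.5):
    """Stand-in for the trained segmentation network: returns a binary
    mask in which 1 marks pixels belonging to a vehicle, so that each
    vehicle appears as a closed region in the processed image."""
    return [[1 if pixel > threshold else 0 for pixel in row]
            for row in image]

# Toy 5x5 "overhead image" with one bright vehicle blob.
image = [[0.0] * 5 for _ in range(5)]
for r in range(1, 4):
    for c in range(1, 3):
        image[r][c] = 0.9

mask = segment_overhead_image(image)  # the 3x2 blob becomes a closed region of 1s
```

Whatever network is used, the only property the later steps rely on is that its output separates each vehicle from the background as a closed region.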
Step 103: obtaining a rectangle from the closed region.
As can be seen from step 102 together with fig. 3 and fig. 4, in the processed image obtained by segmenting the overhead image, the position and contour of the closed region are very close to those of the vehicle, but the contour of the closed region is neither smooth nor regular. If the vehicle orientation were determined directly from the closed region, an error could remain, making the identified orientation insufficiently accurate.
To solve this problem, this step further processes the closed region to obtain a regular rectangle that can represent the vehicle.
Two alternative implementations are provided below.
First, the edge pixels of the closed area can be searched, and the coordinate value of each edge pixel of the closed area is obtained; thereafter, a minimum bounding rectangle of the closed region is constructed based on the coordinate values of the edge pixels.
Referring to fig. 5, a schematic diagram of a rectangle obtained by processing a closed region in the above manner is provided in the present application.
In a second implementation manner, first, edge pixels of a closed region may be searched, and coordinate values of each edge pixel of the closed region are obtained; thereafter, the maximum inscribed rectangle of the closed region is obtained from the coordinate values of the edge pixels.
As can be seen from fig. 5, through this processing the closed region, which represents the vehicle but has an irregular contour, is converted into a rectangle that represents the vehicle and has a regular contour. Because the rectangle's outline is regular, it can serve as an important basis for identifying the orientation of the vehicle.
It should be understood that the above are only alternative implementations of step 103 provided by this embodiment. In practice, other algorithms may be used to obtain a rectangle representing the vehicle from the closed region in the processed image; this embodiment does not limit the specific implementation of obtaining a rectangle from the closed region.
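One way the minimum circumscribed rectangle of the first implementation could be computed from the edge-pixel coordinates is sketched below. This brute-force search over candidate edge directions is an illustrative algorithm, not the (unspecified) one of the embodiment; in practice a library routine such as OpenCV's `cv2.minAreaRect` would typically be applied to the edge pixels instead.

```python
import math

def min_area_rect(points):
    """Minimum-area enclosing rectangle of a point set, by brute force:
    the optimal rectangle has a side collinear with a convex-hull edge,
    and every hull edge is a pair of input points, so trying every pair
    as the side direction is guaranteed to include the optimum."""
    best = None  # (area, (ux, uy), width, height)
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            dx = points[j][0] - points[i][0]
            dy = points[j][1] - points[i][1]
            length = math.hypot(dx, dy)
            if length == 0:
                continue
            ux, uy = dx / length, dy / length   # candidate side direction
            vx, vy = -uy, ux                    # perpendicular direction
            # Project all points onto the direction and its normal.
            us = [x * ux + y * uy for x, y in points]
            vs = [x * vx + y * vy for x, y in points]
            width, height = max(us) - min(us), max(vs) - min(vs)
            area = width * height
            if best is None or area < best[0]:
                best = (area, (ux, uy), width, height)
    return best

# Edge pixels of an axis-aligned 2x1 region: the best rectangle has area 2.
edge_pixels = [(0, 0), (1, 0), (2, 0), (2, 1), (1, 1), (0, 1)]
area = min_area_rect(edge_pixels)[0]
```

The quadratic pair enumeration is fine for the few hundred edge pixels of one vehicle; the long-side direction returned here is exactly what step 104 needs.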
Step 104: determining the orientation of the vehicle by using the rectangle and the direction information of the lane.
It should be understood that in practice, although the vehicle travels in a lane, its orientation does not necessarily coincide exactly with the lane direction. For example, when a driver prepares to overtake, the driver first steers the vehicle toward the lane to the left of the current lane; at this moment the vehicle's orientation deviates to the left of the current lane direction. After the vehicle has moved into the left lane, if the driver wants to return to the original lane, the driver must steer the vehicle to the right; at this moment the vehicle's orientation deviates to the right of the current lane direction.
As these examples show, the orientation of the vehicle need not coincide exactly with the lane direction: it may be exactly the same as the lane direction, or form an acute angle with it. Under normal driving conditions, however, the orientation of the vehicle does not form a right or obtuse angle with the lane direction.
Therefore, in this embodiment, the direction of the vehicle can be determined by combining the direction information of the lane where the vehicle is located with the rectangle.
The above is the vehicle orientation obtaining method provided by this embodiment of the application. An overhead image of a vehicle is first obtained, where the overhead image contains the direction information of the lane in which the vehicle is located; the overhead image is then segmented by a deep neural network to obtain a processed image containing a closed region representing the vehicle; a rectangle is obtained from the closed region; and finally the orientation of the vehicle is determined from the rectangle and the direction information of the lane.
Because the deep neural network segmentation is not affected by the pose of the vehicle in the image, the closed region representing the vehicle can be obtained even when the vehicle is not in a horizontal or vertical pose. In addition, by combining the direction information of the lane, the vehicle represented by the rectangle is matched with the lane direction in the actual road conditions, which effectively improves the accuracy of the obtained vehicle orientation.
Based on the foregoing embodiments, the present application further provides another vehicle orientation obtaining method, and specific implementation of the method is described in detail below with reference to the embodiments and the accompanying drawings.
Second embodiment
Referring to fig. 6, the present application provides a flowchart of another vehicle orientation obtaining method.
As shown in fig. 6, the vehicle orientation acquiring method according to the present embodiment includes:
step 601: the method comprises the steps of obtaining an overhead view image of a vehicle, wherein the overhead view image comprises the direction information of a lane where the vehicle is located.
Step 602: performing segmentation processing on the overhead image by utilizing a deep neural network to obtain a processed image, wherein the processed image comprises a closed region representing the vehicle.
It should be noted that, in this embodiment, step 601 and step 602 are the same as the implementation manners of step 101 and step 102 in the foregoing embodiment, and reference may be made to the foregoing embodiment for the relevant description of step 601 and step 602. This is not described in detail in this embodiment.
Step 603: and acquiring coordinate values of the edge pixels of the closed area.
Step 604: and obtaining the minimum circumscribed rectangle of the closed area according to the coordinate values of the edge pixels.
Step 605: and acquiring coordinate values of any two pixels on one long side of the rectangle.
Taking the long side AB of rectangle ABCD in fig. 5 as an example, any two pixel points on segment AB can be taken in this step. For convenience of description, points A and B themselves are used in the subsequent description. The coordinate value of point A in fig. 5 is P_A(X_A, Y_A), and the coordinate value of point B is P_B(X_B, Y_B).
Step 606: and determining the vehicle orientation vectors respectively taking the two pixels as a starting point and an end point according to the direction information of the lane.
Obviously, under normal driving conditions the orientation of the vehicle does not form an angle greater than 90° with the lane direction, so the lane direction roughly constrains the possible range of the vehicle orientation. Taking fig. 5 as an example, the two pixel points on the long side of the rectangle are A and B, and using one as the vector start point and the other as the end point yields either vector AB or vector BA. Because the angle between the vehicle orientation and the lane direction is at most 90°, exactly one of AB and BA forms an angle greater than 90° with the lane direction while the other forms an angle smaller than 90°. Therefore, within the orientation range determined by the lane direction, either AB or BA can be selected as the vehicle orientation vector.
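The choice between AB and BA described above reduces to a sign test on a dot product: the candidate whose angle with the lane direction is at most 90° is the one with a non-negative dot product. A minimal sketch (function name and 2-tuple vector representation are illustrative):

```python
def heading_vector(a, b, lane_dir):
    """Return AB or BA, whichever forms an angle of at most 90 degrees
    with the lane direction, i.e. has a non-negative dot product."""
    ab = (b[0] - a[0], b[1] - a[1])
    dot = ab[0] * lane_dir[0] + ab[1] * lane_dir[1]
    return ab if dot >= 0 else (-ab[0], -ab[1])

# Lane runs toward negative y; AB points toward positive y, so BA is chosen.
print(heading_vector((0, 0), (1, 2), (0, -1)))  # (-1, -2)
```

Any vector with a positive component along the lane direction works as `lane_dir` here; its magnitude does not affect the sign of the dot product.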
Step 607: and acquiring an included angle between the orientation vector of the vehicle and a coordinate axis by using an inverse trigonometric function according to the coordinate values of the two pixels, and determining the orientation of the vehicle.
Since the coordinate values of the two pixels and the vehicle orientation vector determined by the two pixels are obtained in steps 605 and 606, respectively, in order to determine the orientation of the vehicle, in this embodiment, the coordinate axis is used as a reference, an included angle between the vehicle orientation vector and the coordinate axis is obtained, and the orientation of the vehicle is determined by the included angle.
As a possible implementation, the positive direction of the coordinate axis X is taken as a reference in the present embodiment. The implementation of the vehicle orientation determination will be described with reference to the four diagrams of fig. 7a to 7 d.
Fig. 7a to 7d are schematic views of four vehicle orientations, respectively.
As shown in fig. 7a to 7d, in this embodiment the angle α between the vehicle orientation vector and the positive X-axis direction is the angle of the vehicle orientation. Obviously, when the coordinate values of the start and end points of the vehicle orientation vector are known, the angle α can be obtained with an arctangent function (or an arccosine function, arcsine function, etc.). This embodiment adopts the following convention: if the angle α (with |α| ≤ 180°) is measured counterclockwise from the X axis, α is a negative angle; if it is measured clockwise from the X axis, α is a positive angle.
In fig. 7a and 7b, the determined vehicle orientation vector is vector BA, and in fig. 7c and 7d, the determined vehicle orientation vector is vector AB. In fig. 7a and 7c, the vehicle orientation vector forms an acute angle α with the positive direction of the X-axis; in fig. 7b and 7d, the vehicle heading vector makes an obtuse angle α with the positive X-axis.
Based on the above angle definition, in fig. 7a and 7b, the included angle α between the vehicle heading vector BA and the positive direction of the X axis is a negative angle; in fig. 7c and 7d, the angle α between the vehicle heading vector AB and the positive direction of the X-axis is a positive angle.
It should be understood that, in practical applications, the orientation of the vehicle in the geographic coordinate system can also be determined from the direction represented by the coordinate axis in the geographic coordinate system and the included angle. For example, if the positive X-axis direction represents due east in the geographic coordinate system and the vehicle orientation vector makes an angle α of -30° with the positive X-axis, then, since a negative angle corresponds to a counterclockwise rotation in the overhead view, rotating 30° counterclockwise from due east gives a direction 30° from east toward north; it can thus be determined that the vehicle is heading 30° north of east in the geographic coordinate system.
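The angle of step 607 can be computed with the two-argument arctangent. A brief sketch, assuming image coordinates with the y axis pointing down, under which `math.atan2` returns a positive value for a visually clockwise rotation, matching the sign convention above:

```python
import math

def heading_angle_deg(vec):
    """Angle between the vehicle orientation vector and the positive
    X axis, in degrees, obtained with the two-argument arctangent."""
    return math.degrees(math.atan2(vec[1], vec[0]))

# A vector pointing up-right in image coordinates (y down) is a
# counterclockwise, hence negative, rotation from the +X axis.
angle = heading_angle_deg((math.sqrt(3), -1.0))  # about -30 degrees
```

If the positive X axis represents due east, this -30° result corresponds to the "30° north of east" heading of the example in the text.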
The above describes the further vehicle orientation obtaining method provided by this embodiment of the application. The method obtains the coordinate values of any two pixels on one long side of the rectangle; determines, according to the direction information of the lane, the vehicle orientation vector whose start and end points are those two pixels; and, from the coordinate values of the two pixels, uses an inverse trigonometric function to obtain the angle between the vehicle orientation vector and a coordinate axis, thereby determining the orientation of the vehicle. The method also performs a coordinate conversion on the vehicle orientation to obtain the actual orientation of the vehicle in the geographic coordinate system. The method identifies the vehicle orientation accurately, works even when the vehicle is in an oblique pose in the image, and yields the corresponding orientation in the actual geographic coordinate system. Its range of application is therefore wide, and its applicability is not limited by the pose of the vehicle in the image.
It is understood that, with the rectangle representing the vehicle obtained in the foregoing embodiments, the coordinate values of the pixels constituting the sides of the rectangle are known. Vehicle positioning is also an important part of intelligent driving and route planning. In this embodiment, the centroid coordinate value of the vehicle can accordingly be obtained from the rectangle representing the vehicle.
For example, referring to FIG. 5, the rectangle includes 4 vertices A, B, C and D, where A and C are diagonally opposite, as are B and D. In this embodiment, the coordinate values of at least two diagonal vertices among the 4 vertices may be obtained, and the centroid coordinate value of the rectangle is determined from the obtained coordinate values.
For example, the coordinate value P_A(X_A, Y_A) of point A and the coordinate value P_C(X_C, Y_C) of its diagonal vertex C are acquired, and the centroid coordinate value is obtained as [(X_A + X_C)/2, (Y_A + Y_C)/2]. Further, in the present embodiment, the coordinate value of the rectangle's centroid may be taken as the coordinate value of the vehicle's centroid, i.e., as the position of the vehicle.
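The centroid computation above amounts to taking the midpoint of one diagonal; a minimal sketch (the function name is hypothetical):

```python
def rect_centroid(p_a, p_c):
    """Centroid of the rectangle from one pair of diagonal vertices,
    e.g. A and C in the example above: the midpoint of diagonal AC."""
    return ((p_a[0] + p_c[0]) / 2.0, (p_a[1] + p_c[1]) / 2.0)
```

Any one diagonal pair suffices, since both diagonals of a rectangle share the same midpoint.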
In addition, it can be understood that, in the present embodiment, the accurate position of the vehicle in the actual geographic coordinate system can also be obtained through coordinate transformation.
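The patent does not specify the coordinate transformation. As one hedged illustration, assuming a calibrated top-view scale, a known geographic anchor for the image origin, an X axis pointing east, and an image Y axis growing downward (all assumptions, with a hypothetical function name), a pixel position could be mapped to a local east-north metric frame like this:

```python
def pixel_to_local_geo(pixel, origin_east_north, meters_per_pixel):
    """Map a pixel coordinate to a local east-north metric frame anchored
    at origin_east_north. A real system would use its calibrated camera
    homography instead of this simple scale-and-flip."""
    east = origin_east_north[0] + pixel[0] * meters_per_pixel
    # Image Y grows downward, while geographic north grows upward.
    north = origin_east_north[1] - pixel[1] * meters_per_pixel
    return (east, north)
```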
It will be appreciated that the orientation of the vehicle obtained in this embodiment may serve a variety of applications, for example, planning the driving path of the vehicle, simulating its driving condition, or analyzing and reporting traffic conditions. The present embodiment does not specifically limit the manner in which the acquired orientation of the vehicle is applied.
Based on the foregoing embodiment, correspondingly, the present application further provides a vehicle orientation obtaining apparatus. The device is described in detail below with reference to embodiments and the accompanying drawings.
Third embodiment
Referring to fig. 8, the figure is a schematic structural diagram of a vehicle orientation obtaining apparatus according to an embodiment of the present application.
As shown in fig. 8, the vehicle orientation acquiring apparatus according to the present embodiment includes: an image acquisition module 801, an image segmentation module 802, a rectangle acquisition module 803, and a vehicle orientation determination module 804.
The image acquisition module 801 is configured to acquire an overhead view image of a vehicle, where the overhead view image includes information about a direction of a lane where the vehicle is located;
an image segmentation module 802, configured to perform segmentation processing on the overhead view image by using a deep neural network to obtain a processed image, where the processed image includes a closed region representing the vehicle;
a rectangle obtaining module 803, configured to obtain a rectangle according to the closed region;
a vehicle orientation determining module 804, configured to determine the orientation of the vehicle by using the rectangle and the direction information of the lane.
The above is the vehicle orientation acquiring apparatus provided in this embodiment. Because the apparatus uses a deep neural network to segment the image, it is not affected by the pose of the vehicle in the image: even if the vehicle is not in a horizontal or vertical pose, the apparatus can still segment out the closed region representing the vehicle. In addition, by combining the direction information of the lane, the vehicle represented by the rectangle is matched with the direction of the lane in the actual road conditions, which effectively improves the accuracy of the obtained vehicle orientation.
As a possible implementation manner, the rectangle obtaining module 803 specifically includes:
a coordinate value first acquisition unit configured to acquire coordinate values of edge pixels of the closed region;
and the rectangle acquisition unit is used for acquiring the minimum circumscribed rectangle or the maximum inscribed rectangle of the closed area according to the coordinate values of the edge pixels.
As a possible implementation manner, the vehicle orientation determining module 804 specifically includes:
a second coordinate value acquisition unit configured to acquire coordinate values of any two pixels on one long side of the rectangle;
the vehicle orientation vector determining unit is used for determining vehicle orientation vectors which respectively take the two pixels as a starting point and an end point according to the direction information of the lane;
and the vehicle orientation determining unit is used for acquiring an included angle between the vehicle orientation vector and the coordinate axis by using an inverse trigonometric function according to the coordinate values of the two pixels, and determining the orientation of the vehicle.
As a possible implementation manner, the apparatus further includes:
and the vehicle geographic orientation determining module is used for determining the orientation of the vehicle in a geographic coordinate system according to the direction represented by the coordinate axis in the geographic coordinate system and the included angle.
As a possible implementation manner, the apparatus further includes:
the vertex coordinate value acquisition module is used for acquiring coordinate values of a plurality of vertexes of the rectangle; at least two diagonal vertices are included in the plurality of vertices;
the mass center coordinate value calculation module is used for obtaining the coordinate value of the mass center of the vehicle according to the coordinate values of the plurality of vertexes;
and the vehicle geographic position acquisition module is used for carrying out coordinate transformation on the coordinate value of the centroid to acquire the position of the vehicle in a geographic coordinate system.
As a possible implementation manner, the apparatus further includes:
and the vehicle running condition simulation module is used for simulating the running condition of the vehicle according to the orientation of the vehicle.
It should be noted that the embodiments in this specification are described in a progressive manner; the same and similar parts among the embodiments may be referred to one another, and each embodiment focuses on its differences from the others. In particular, the apparatus and system embodiments are described relatively briefly because they are substantially similar to the method embodiments, and reference may be made to the corresponding descriptions of the method embodiments for relevant details. The above-described apparatus and system embodiments are merely illustrative: the units described as separate parts may or may not be physically separate, and the parts displayed as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment, which one of ordinary skill in the art can understand and implement without inventive effort.
The above description is only one specific embodiment of the present application, but the scope of the present application is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present application should be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (12)

1. A vehicle orientation acquisition method characterized by comprising:
acquiring an overhead view image of a vehicle, wherein the overhead view image contains the direction information of a lane where the vehicle is located;
utilizing a deep neural network to perform segmentation processing on the overhead view image to obtain a processed image, wherein the processed image comprises a closed area representing the vehicle;
obtaining a rectangle according to the closed area;
determining the orientation of the vehicle by using the rectangle and the direction information of the lane.
2. The vehicle orientation acquisition method according to claim 1, wherein the obtaining a rectangle according to the closed region specifically includes:
obtaining coordinate values of edge pixels of the closed area;
and obtaining the minimum circumscribed rectangle or the maximum inscribed rectangle of the closed region according to the coordinate values of the edge pixels.
3. The vehicle orientation acquisition method according to claim 1, wherein the determining the orientation of the vehicle using the orientation information of the rectangle and the lane specifically includes:
obtaining coordinate values of any two pixels on one long side of the rectangle;
determining vehicle orientation vectors respectively taking the two pixels as a starting point and an end point according to the direction information of the lane;
and acquiring an included angle between the orientation vector of the vehicle and a coordinate axis by using an inverse trigonometric function according to the coordinate values of the two pixels, and determining the orientation of the vehicle.
4. The vehicle orientation acquisition method according to claim 3, characterized by further comprising:
and determining the orientation of the vehicle in the geographic coordinate system according to the direction represented by the coordinate axis in the geographic coordinate system and the included angle.
5. The vehicle orientation acquisition method according to claim 1, wherein after the obtaining of one rectangle from the closed region, the method further comprises:
obtaining coordinate values of a plurality of vertexes of the rectangle; at least two diagonal vertices are included in the plurality of vertices;
obtaining coordinate values of the center of mass of the vehicle according to the coordinate values of the plurality of vertexes;
and carrying out coordinate transformation on the coordinate value of the centroid to obtain the position of the vehicle in a geographic coordinate system.
6. The vehicle orientation acquisition method according to any one of claims 1 to 5, characterized in that, after the determining the orientation of the vehicle, the method further comprises:
and simulating the running condition of the vehicle according to the orientation of the vehicle.
7. A vehicle orientation acquisition apparatus, characterized by comprising:
the system comprises an image acquisition module, a display module and a control module, wherein the image acquisition module is used for acquiring an overhead view image of a vehicle, and the overhead view image contains the direction information of a lane where the vehicle is located;
the image segmentation module is used for carrying out segmentation processing on the overhead view image by utilizing a deep neural network to obtain a processed image, and the processed image comprises a closed area representing the vehicle;
the rectangle obtaining module is used for obtaining a rectangle according to the closed area;
and the vehicle orientation determining module is used for determining the orientation of the vehicle by utilizing the rectangle and the direction information of the lane.
8. The vehicle orientation acquisition device according to claim 7, wherein the rectangular acquisition module specifically includes:
a coordinate value first acquisition unit configured to acquire coordinate values of edge pixels of the closed region;
and the rectangle acquisition unit is used for acquiring the minimum circumscribed rectangle or the maximum inscribed rectangle of the closed area according to the coordinate values of the edge pixels.
9. The vehicle orientation acquisition apparatus according to claim 7, wherein the vehicle orientation determination module specifically includes:
a second coordinate value acquisition unit configured to acquire coordinate values of any two pixels on one long side of the rectangle;
the vehicle orientation vector determining unit is used for determining vehicle orientation vectors which respectively take the two pixels as a starting point and an end point according to the direction information of the lane;
and the vehicle orientation determining unit is used for acquiring an included angle between the vehicle orientation vector and the coordinate axis by using an inverse trigonometric function according to the coordinate values of the two pixels, and determining the orientation of the vehicle.
10. The vehicle orientation acquisition apparatus according to claim 9, characterized in that the apparatus further comprises:
and the vehicle geographic orientation determining module is used for determining the orientation of the vehicle in a geographic coordinate system according to the direction represented by the coordinate axis in the geographic coordinate system and the included angle.
11. The vehicle orientation acquisition apparatus according to claim 7, characterized in that the apparatus further comprises:
the vertex coordinate value acquisition module is used for acquiring coordinate values of a plurality of vertexes of the rectangle; at least two diagonal vertices are included in the plurality of vertices;
the mass center coordinate value calculation module is used for obtaining the coordinate value of the mass center of the vehicle according to the coordinate values of the plurality of vertexes;
and the vehicle geographic position acquisition module is used for carrying out coordinate transformation on the coordinate value of the centroid to acquire the position of the vehicle in a geographic coordinate system.
12. The vehicle orientation acquisition apparatus according to any one of claims 7 to 11, characterized in that the apparatus further comprises:
and the vehicle running condition simulation module is used for simulating the running condition of the vehicle according to the orientation of the vehicle.
CN201811626175.8A 2018-12-28 2018-12-28 Vehicle orientation obtaining method and related device Active CN109815831B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811626175.8A CN109815831B (en) 2018-12-28 2018-12-28 Vehicle orientation obtaining method and related device


Publications (2)

Publication Number Publication Date
CN109815831A CN109815831A (en) 2019-05-28
CN109815831B true CN109815831B (en) 2021-03-23

Family

ID=66602700

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811626175.8A Active CN109815831B (en) 2018-12-28 2018-12-28 Vehicle orientation obtaining method and related device

Country Status (1)

Country Link
CN (1) CN109815831B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112017239B (en) * 2019-05-31 2022-12-20 北京市商汤科技开发有限公司 Method for determining orientation of target object, intelligent driving control method, device and equipment
CN111081033B (en) * 2019-11-21 2021-06-01 北京百度网讯科技有限公司 Method and device for determining orientation angle of vehicle
CN111413969B (en) * 2020-03-18 2023-07-28 东软睿驰汽车技术(沈阳)有限公司 Reversing control method and device, electronic equipment and storage medium
CN111401457A (en) * 2020-03-23 2020-07-10 东软睿驰汽车技术(沈阳)有限公司 Method, device and equipment for determining object information and storage medium
CN111723723A (en) * 2020-06-16 2020-09-29 东软睿驰汽车技术(沈阳)有限公司 Image detection method and device
CN112329722B (en) * 2020-11-26 2021-09-28 上海西井信息科技有限公司 Driving direction detection method, system, equipment and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103192829A (en) * 2013-03-22 2013-07-10 上海交通大学 Lane departure warning method and lane departure warning device based on around view
CN106874863A (en) * 2017-01-24 2017-06-20 南京大学 Vehicle based on depth convolutional neural networks is disobeyed and stops detection method of driving in the wrong direction
CN108831161A (en) * 2018-06-27 2018-11-16 深圳大学 A kind of traffic flow monitoring method, intelligence system and data set based on unmanned plane

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9916508B2 (en) * 2015-03-12 2018-03-13 Toyota Jidosha Kabushiki Kaisha Detecting roadway objects in real-time images




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant