CN111968071B - Method, device, equipment and storage medium for generating spatial position of vehicle - Google Patents

Method, device, equipment and storage medium for generating spatial position of vehicle

Info

Publication number
CN111968071B
CN111968071B (application CN202010605263.0A)
Authority
CN
China
Prior art keywords
vehicle
bounding box
orientation angle
value
generating
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010605263.0A
Other languages
Chinese (zh)
Other versions
CN111968071A
Inventor
Tan Xiao (谭啸)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Chinasoft Oritech Information Technology Co., Ltd.
Original Assignee
Beijing Chinasoft Oritech Information Technology Co., Ltd.
Filing date
Publication date
Application filed by Beijing Chinasoft Oritech Information Technology Co., Ltd.
Priority claimed from CN202010605263.0A
Publication of CN111968071A
Application granted
Publication of CN111968071B

Abstract

The application provides a method, a device, equipment, and a storage medium for generating the spatial position of a vehicle, relating to the technical fields of computer vision and intelligent transportation. The method comprises the following steps: generating bounding box projection lines of a 2D bounding box of the vehicle from an image of the vehicle; acquiring size information of the vehicle from the image; establishing a 2D coordinate system of the vehicle relative to the ground plane from the bounding box projection lines; optimizing the center point position and orientation angle of the lower plane of the vehicle in the 2D coordinate system to generate a coordinate optimized value and an orientation angle optimized value; and generating the spatial position of the vehicle from the coordinate optimized value, the orientation angle optimized value, and the size information of the vehicle. The application improves the accuracy of vehicle spatial position estimation and, in turn, the reliability of applications based on the vehicle's spatial position.

Description

Method, device, equipment and storage medium for generating spatial position of vehicle
Technical Field
The application relates to the technical field of image processing, in particular to the fields of computer vision and intelligent transportation, and provides a method, a device, equipment, and a storage medium for generating the spatial position of a vehicle.
Background
Vehicle 3D localization refers to estimating the spatial position of a vehicle, including its size, position, and direction. The spatial position of a vehicle is very important for intelligent traffic: it can provide road condition information to an unmanned-driving system, assisting the unmanned vehicle in path planning and improving the system's safety, and it can be used to collect traffic flow statistics at intersections, providing a basis for the signal control strategy of an intelligent traffic light system and improving traffic efficiency.
Currently, when generating the spatial position of a vehicle from a vehicle image, the accuracy of the spatial position suffers from the influence of the perspective effect and needs to be improved.
Disclosure of Invention
The present application aims to solve at least one of the technical problems in the related art to some extent.
To this end, the application provides a method, a device, equipment and a storage medium for generating a spatial position of a vehicle.
An embodiment of a first aspect of the present application provides a method for generating a spatial position of a vehicle, including:
generating bounding box projection lines of a 2D bounding box of a vehicle according to an image of the vehicle;
acquiring size information of the vehicle according to the image;
establishing a 2D coordinate system of the vehicle relative to the ground plane according to the bounding box projection lines;
optimizing a center point position and an orientation angle of a lower plane of the vehicle under the 2D coordinate system to generate a coordinate optimized value and an orientation angle optimized value; and
generating the spatial position of the vehicle according to the coordinate optimized value, the orientation angle optimized value, and the size information of the vehicle.
An embodiment of a second aspect of the present application provides a spatial position generating device for a vehicle, including:
a first generation module, configured to generate bounding box projection lines of a 2D bounding box of a vehicle according to an image of the vehicle;
an acquisition module, configured to acquire size information of the vehicle according to the image;
an establishing module, configured to establish a 2D coordinate system of the vehicle relative to the ground plane according to the bounding box projection lines;
an optimization module, configured to optimize a center point position and an orientation angle of a lower plane of the vehicle under the 2D coordinate system to generate a coordinate optimized value and an orientation angle optimized value; and
a second generation module, configured to generate the spatial position of the vehicle according to the coordinate optimized value, the orientation angle optimized value, and the size information of the vehicle.
An embodiment of a third aspect of the present application provides an electronic device, including at least one processor, and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform a method of generating a spatial location of a vehicle as described in an embodiment of the first aspect.
An embodiment of a fourth aspect of the present application proposes a non-transitory computer-readable storage medium storing computer instructions for causing the computer to execute the method of generating a spatial position of a vehicle according to the embodiment of the first aspect.
An embodiment of a fifth aspect of the application proposes a computer program product comprising a computer program which, when executed by a processor, implements a method for generating a spatial position of a vehicle according to an embodiment of the first aspect.
One embodiment of the above application has the following advantages or benefits: since the bounding box projection lines of the vehicle's 2D bounding box are generated from the image of the vehicle, a 2D coordinate system of the vehicle relative to the ground plane is established from those projection lines, and the center point position and orientation angle of the vehicle's lower plane are optimized in that coordinate system to generate a coordinate optimized value and an orientation angle optimized value, a more accurate vehicle bottom center point and orientation angle can be obtained. This improves the accuracy of the vehicle's spatial position and, in turn, the reliability of applications based on it.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the application or to delineate the scope of the application. Other features of the present application will become apparent from the description that follows.
Drawings
The drawings are included to provide a better understanding of the present application and are not to be construed as limiting the application. Wherein:
Fig. 1 is a schematic flow chart of a method for generating a spatial position of a vehicle according to an embodiment of the present application;
Fig. 2 is a flowchart of another method for generating a spatial position of a vehicle according to an embodiment of the present application;
Fig. 3 is a flowchart of another method for generating a spatial position of a vehicle according to an embodiment of the present application;
Fig. 4 is a flowchart of another method for generating a spatial position of a vehicle according to an embodiment of the present application;
Fig. 5 is a schematic structural diagram of a spatial position generating device of a vehicle according to an embodiment of the present application;
Fig. 6 is a schematic structural diagram of another spatial position generating device of a vehicle according to an embodiment of the present application;
Fig. 7 illustrates a block diagram of an exemplary electronic device suitable for implementing embodiments of the application.
Detailed Description
Exemplary embodiments of the present application will now be described with reference to the accompanying drawings, in which various details of the embodiments of the present application are included to facilitate understanding, and are to be considered merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the application. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
Fig. 1 is a flow chart of a method for generating a spatial position of a vehicle according to an embodiment of the present application, as shown in fig. 1, the method includes:
Step 101, generating a bounding box projection line of a 2D bounding box of the vehicle according to an image of the vehicle.
The method for generating the spatial position of a vehicle disclosed in the embodiment of the application can be applied to estimating the position and pose of a vehicle.
In this embodiment, an image of a vehicle is acquired by an image acquisition device, the 2D bounding box of the vehicle is obtained from the image, and projection is then performed according to the 2D bounding box to generate its bounding box projection lines. The image acquisition device is, for example, a camera; the 2D bounding box represents the area of the image where the vehicle is located, and the bounding box projection lines are obtained by projecting the sides of the 2D bounding box. The shape of the 2D bounding box in this embodiment may be a rectangle or another polygon, which is not limited here.
In one embodiment of the application, generating bounding box projection lines of a 2D bounding box of a vehicle from an image of the vehicle includes: the method includes detecting a vehicle in an image and generating a 2D bounding box position of the vehicle, projecting the 2D bounding box position of the vehicle to a ground plane to generate bounding box projection lines. Alternatively, the vehicle in the image may be detected by means of object detection to generate a 2D bounding box position of the vehicle in the image.
It should be noted that the implementation manner of generating the bounding box projection line of the 2D bounding box is merely an example, and is not limited herein.
Step 102, acquiring size information of the vehicle according to the image.
In this embodiment, in order to generate the spatial position of the vehicle, the size information of the vehicle may also be acquired from the image. The implementation manner of acquiring the vehicle size information may be implemented in various manners, and as an example, after acquiring the image of the vehicle, the image of the vehicle may be processed through a deep neural network to acquire the vehicle size information.
The size information includes, for example, the length, width and height of the vehicle.
Step 103, a 2D coordinate system of the vehicle relative to the ground plane is established according to the projection line of the bounding box.
In this embodiment, after the bounding box projection line is acquired, the bounding box projection line is taken as a coordinate axis u, and a coordinate axis v is established perpendicular to the bounding box projection line, so that a u-v coordinate system is established, and the u-v coordinate system is used as a 2D coordinate system of the vehicle on the ground plane.
Optionally, when there are a plurality of vehicles, one 2D coordinate system is established for each vehicle, respectively.
Step 104, optimizing the center point position and the orientation angle of the lower plane of the vehicle under the 2D coordinate system to generate a coordinate optimized value and an orientation angle optimized value.
In this embodiment, the image of the vehicle may be recognized to obtain the orientation angle of the vehicle and the center point position of the vehicle's lower plane, where the lower plane is, for example, the plane in which the vehicle bottom lies. Optionally, the acquired orientation angle and center point position are predicted values. As an example, the center point position and orientation angle of the lower plane are optimized by establishing a loss function, so as to generate a coordinate optimized value and an orientation angle optimized value.
The coordinate optimization value is a position of a central point of a lower plane of the optimized vehicle, the coordinate optimization value is, for example, a coordinate in the 2D coordinate system, and the orientation angle optimization value is an optimized vehicle orientation angle.
Step 105, generating the spatial position of the vehicle according to the coordinate optimized value, the orientation angle optimized value and the size information of the vehicle.
In this embodiment, after the coordinate optimized value, the orientation angle optimized value, and the size information of the vehicle are obtained, the spatial position of the vehicle may be generated in combination with the 2D coordinate system of the vehicle with respect to the ground plane. As an example, the coordinate optimized value and orientation angle optimized value are (u_b, v_b, ry_b), the length, width, and height of the vehicle are l_e, w_e, and h_e respectively, and the spatial position of the vehicle can be obtained by solving in combination with the 2D coordinate system (u-v).
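The solving step above can be sketched as follows: the optimized bottom-face centre (u_b, v_b) and orientation ry_b, together with the vehicle's length and width, fix the bottom-face corners in the u-v frame, which are then lifted back into camera coordinates. The function name and the frame representation (origin plus two unit axes) are illustrative assumptions, not the patent's notation:

```python
import numpy as np

def bottom_corners(u_b, v_b, ry_b, length, width, origin, u_axis, v_axis):
    """Bottom-face corners of the vehicle's 3D box in camera coordinates.

    (u_b, v_b) -- optimized bottom-face centre in the ground-plane u-v frame
    ry_b       -- optimized orientation angle, measured in the u-v plane
    origin, u_axis, v_axis -- the u-v frame on the ground plane (3D vectors)
    """
    c, s = np.cos(ry_b), np.sin(ry_b)
    corners_uv = []
    # Corner offsets in the vehicle's own frame, rotated by ry_b.
    for du, dv in [(+1, +1), (+1, -1), (-1, -1), (-1, +1)]:
        x = du * length / 2.0
        y = dv * width / 2.0
        corners_uv.append((u_b + c * x - s * y, v_b + s * x + c * y))
    # Lift the u-v coordinates back into 3D camera coordinates.
    return [origin + cu * u_axis + cv * v_axis for cu, cv in corners_uv]
```

The top face would follow by offsetting these corners along the ground normal by the vehicle height h_e.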
According to the method for generating the spatial position of a vehicle of this embodiment, bounding box projection lines of the vehicle's 2D bounding box are generated from the image of the vehicle; size information of the vehicle is acquired from the image; a 2D coordinate system of the vehicle relative to the ground plane is established from the projection lines; the center point position and orientation angle of the vehicle's lower plane are optimized in the 2D coordinate system to generate a coordinate optimized value and an orientation angle optimized value; and the spatial position of the vehicle is generated from these optimized values and the size information. Because the center point position and orientation angle are optimized in the 2D coordinate system, a more accurate vehicle bottom center point and orientation angle can be obtained, improving the accuracy of the vehicle's spatial position. Furthermore, when the spatial position is used in intelligent traffic applications such as vehicle-road coordination and unmanned driving, more accurate road condition information can be provided, improving the reliability of applications based on the vehicle's spatial position.
Based on the above embodiment, the present application can detect the vehicle in the image by means of object detection to generate the 2D bounding box position of the vehicle in the image. The foregoing embodiments will be further described with respect to generating bounding box projection lines of a 2D bounding box of a vehicle from an image of the vehicle and establishing a 2D coordinate system of the vehicle with respect to a ground plane from the bounding box projection lines.
Fig. 2 is a flow chart of another method for generating a spatial position of a vehicle according to an embodiment of the present application, as shown in fig. 2, the method includes:
Step 201, detecting a vehicle in the image and generating a 2D bounding box position of the vehicle.
In this embodiment, the vehicle in the image may be detected by means of object detection in computer vision, so as to generate the 2D bounding box position of the vehicle in the image.
As an example, sample images containing vehicles are collected in advance, label boxes of the vehicles are annotated in the sample images, and an object detection model is trained on these samples; the model takes an image as input and outputs the detection boxes of the vehicles in the image. The acquired vehicle image is then input into the trained object detection model to generate the 2D bounding box position of the vehicle.
Step 202, a 2D bounding box position of a vehicle is projected to a ground plane to generate bounding box projection lines.
The generation of bounding box projection lines in this embodiment is exemplified below.
As one example, a ground equation and camera parameters are obtained, and the 2D bounding box position of the vehicle is projected to the ground plane according to them to generate the bounding box projection lines. Optionally, when the 2D bounding box is a rectangular box, its left boundary, right boundary, and lower boundary are respectively projected to the ground plane to generate the bounding box projection lines. The ground equation refers to the equation of the scene's ground plane in the camera coordinate system, and the camera parameters comprise the internal parameters of the camera.
In this example, the camera's internal parameters may be obtained using a camera calibration technique, and the ground equation may be obtained from ground modeling combined with an extrinsic calibration technique. With the camera parameters and the ground equation, any point in the image can be projected onto the ground plane. The vehicle in the image is then detected by object detection, the 2D bounding box position of the vehicle is generated, and that position is projected onto the ground plane according to the ground equation and the camera parameters to generate the bounding box projection lines.
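The per-point projection described above amounts to a ray-plane intersection. A minimal sketch, assuming a pinhole camera with intrinsic matrix K and a ground plane written as n·X + d = 0 in camera coordinates (the function name and parameterization are illustrative, not from the patent):

```python
import numpy as np

def pixel_to_ground(px, py, K, n, d):
    """Back-project pixel (px, py) onto the ground plane n . X + d = 0,
    expressed in camera coordinates, using the 3x3 intrinsic matrix K."""
    # Viewing ray through the pixel, in camera coordinates.
    ray = np.linalg.inv(K) @ np.array([px, py, 1.0])
    # Intersect the ray t * ray with the plane: n . (t * ray) + d = 0.
    t = -d / (n @ ray)
    return t * ray
```

Projecting the endpoints of a 2D bounding box boundary this way yields two ground points, and the line through them is the corresponding bounding box projection line.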
It should be noted that, one ground equation may be used for one scene, or modeling may be performed using different ground equations at different locations, which is not limited herein.
Step 203, a 2D coordinate system of the vehicle relative to the ground plane is established according to the bounding box projection line.
The establishment of the 2D coordinate system in this embodiment is exemplified below.
As an example, the 2D bounding box is a rectangular box, and the number of bounding box projection lines is three, that is, the bounding box projection lines include a first bounding box projection line l, a second bounding box projection line r and a third bounding box projection line b, wherein the first bounding box projection line l and the second bounding box projection line r are respectively obtained according to the left and right boundary projections of the 2D bounding box rectangular box, and the third bounding box projection line b is obtained according to the lower boundary projection of the 2D bounding box rectangular box.
In this example, the coordinate axis u is established along the third bounding box projection line b, and the coordinate axis v is established on the ground plane perpendicular to u. Optionally, the coordinate axis v is established on the side perpendicular to u that forms an acute angle with the line of sight. As the origin of the coordinate system, the projection point of the 2D bounding box's lower-left vertex on the ground plane is taken; this origin can be calculated from the lower-left vertex, the camera internal parameters, and the ground equation.
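The u-v frame construction above might be sketched as follows; the helper names and the representation of the frame as an origin plus two unit vectors are illustrative assumptions:

```python
import numpy as np

def build_uv_frame(origin, b_dir, normal):
    """Build the ground-plane u-v frame.

    origin -- projection of the 2D box's lower-left vertex onto the ground
    b_dir  -- direction of the lower-boundary projection line b (u axis)
    normal -- ground plane normal vector
    """
    u = b_dir / np.linalg.norm(b_dir)
    # v lies in the ground plane and is perpendicular to u.
    v = np.cross(normal, u)
    v /= np.linalg.norm(v)
    return origin, u, v

def ground_to_uv(point, origin, u, v):
    """Express a 3D ground-plane point in (u, v) coordinates."""
    rel = point - origin
    return np.array([rel @ u, rel @ v])
```

In a full implementation one would flip v if needed so that it forms an acute angle with the viewing direction, as the embodiment describes.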
In the embodiment of the application, the lower, left, and right boundaries of the 2D bounding box are projected onto the ground plane to obtain the bounding box projection lines, a 2D coordinate system of the vehicle relative to the ground plane is established from these lines, and a connection between the 2D bounding box and the ground equation is thereby established, providing support for loss-function optimization. Moreover, sampling based on this 2D coordinate system better matches the physical meaning of the scene.
Further, in this embodiment, the center point sampling value and the orientation angle sampling value may be obtained, and the bottom center point sampling value and the orientation angle sampling value may be optimized according to the loss function, and the generation of the coordinate optimized value and the orientation angle optimized value in the foregoing embodiment will be described below.
Fig. 3 is a flowchart of another method for generating a spatial position of a vehicle according to an embodiment of the present application, as shown in fig. 3, the method includes:
Step 301, an orientation angle predicted value of the vehicle is obtained according to the image.
In this embodiment, when generating the orientation angle optimized value of the vehicle, an orientation angle predicted value is first obtained from the image; for example, the orientation angle predicted value ry_e of the vehicle may be obtained from the image through a deep neural network.
In step 302, a bottom center point of the vehicle in the 2D coordinate system is acquired, and sampling is performed in the search space according to the bottom center point and the orientation angle predicted value to generate a bottom center point sampling value and an orientation angle sampling value.
The bottom surface center point refers to the center point of the plane where the vehicle bottom is located, and the bottom surface center point is in a 2D coordinate system.
In this embodiment, the search space refers to the sampling range and may be defined as (u_min, u_max) × (v_min, v_max) × (ry_min, ry_max), where u_min, u_max, v_min, v_max, ry_min, and ry_max refer to the minimum and maximum of the 2D coordinate u, the minimum and maximum of v, and the minimum and maximum of the orientation angle, respectively; the specific values may be determined as needed.
In this embodiment, the center point position of the vehicle bottom surface and the vehicle orientation angle are sampled in the 2D coordinate system (u-v). Optionally, N values may be sampled in the search space, where N is a positive integer, and each sampled value (u_t, v_t, ry_t) is used as a bottom surface center point sampling value and an orientation angle sampling value. Sampling under the constraint of the ground equation better matches the physical meaning of the application scene.
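A minimal sketch of this sampling step, assuming uniform sampling over the box-shaped search space (the function name and the fixed random seed are illustrative choices, not from the patent):

```python
import numpy as np

def sample_search_space(bounds, n):
    """Draw n uniform samples (u_t, v_t, ry_t) from the search space.

    bounds -- ((u_min, u_max), (v_min, v_max), (ry_min, ry_max))
    """
    rng = np.random.default_rng(0)
    lo = np.array([b[0] for b in bounds])
    hi = np.array([b[1] for b in bounds])
    # Broadcasting fills each of the three columns from its own interval.
    return rng.uniform(lo, hi, size=(n, 3))
```

A grid over the search space would also satisfy the description; uniform random sampling is just one concrete realization.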
And step 303, optimizing the bottom surface center point sampling value and the orientation angle sampling value according to the loss function to generate a coordinate optimized value and an orientation angle optimized value.
In this embodiment, a loss function is pre-established; for each of the N sampled values, a loss value is calculated by the loss function, and the sampling value whose loss value satisfies a preset condition is selected as the coordinate optimized value and the orientation angle optimized value. The loss function takes the distances between the vertices of the 3D bounding box and the bounding box projection lines as constraints.
As an example, the number of bounding box projection lines is three, and the loss function is as follows:
Cost(u_t, v_t, ry_t) = dist(p_l, line_l) + dist(p_r, line_r) + dist(p_b, line_b) + (ry_e − ry_t)²
For each sampling value, the 4 bottom-surface vertices of the vehicle's 3D bounding box are determined from the sampling value and the vehicle's length and width; combined with the camera internal parameters, these 4 vertices are projected back into the image of the vehicle to obtain four projection points. The projection points closest to the left, right, and lower boundaries of the 2D bounding box are determined in the image, and their counterparts on the ground plane are denoted p_l, p_r, and p_b, respectively.
As a possible implementation, a loss value is calculated for each bottom surface center point sampling value and orientation angle sampling value, the minimum is determined among the loss values of the multiple sampling values, and the sampling value corresponding to the minimum loss is taken as the optimized value. That is, the coordinate optimized value and the orientation angle optimized value are determined by the following formula: (u_b, v_b, ry_b) = argmin Cost(u_t, v_t, ry_t), where (u_b, v_b, ry_b) are the coordinate optimized value and the orientation angle optimized value.
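The loss can be sketched directly once the points p_l, p_r, p_b and the three projection lines (here in the implicit form a·x + b·y + c = 0) have been computed; all function and argument names are illustrative assumptions:

```python
import numpy as np

def point_line_dist(p, line):
    """Distance from 2D point p to the line a*x + b*y + c = 0."""
    a, b, c = line
    return abs(a * p[0] + b * p[1] + c) / np.hypot(a, b)

def cost(sample, p_l, p_r, p_b, line_l, line_r, line_b, ry_e):
    """Loss for one sampled (u_t, v_t, ry_t).

    p_l, p_r, p_b -- bottom-face vertices closest to the left/right/lower
                     boundaries, expressed on the ground plane
    line_*        -- the three bounding box projection lines
    ry_e          -- orientation angle predicted by the network
    """
    _, _, ry_t = sample
    return (point_line_dist(p_l, line_l)
            + point_line_dist(p_r, line_r)
            + point_line_dist(p_b, line_b)
            + (ry_e - ry_t) ** 2)
```

The sampling value with the smallest cost over all N samples is then kept as (u_b, v_b, ry_b).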
In one embodiment of the present application, after (u_b, v_b, ry_b) is obtained, the search space may be reduced with (u_b, v_b, ry_b) as the sampling center, and a next sampling iteration may be performed to obtain a new (u_b, v_b, ry_b). This iterative process is repeated until a preset number of iterations is reached, and the finally obtained (u_b, v_b, ry_b) is recorded as the coordinate optimized value and the orientation angle optimized value.
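The iterative shrink-and-resample procedure might look like the following sketch; the sample count, iteration count, and shrink factor are illustrative assumptions, not values from the patent:

```python
import numpy as np

def coarse_to_fine(loss_fn, bounds, n_samples=200, n_iters=3, shrink=0.5):
    """Iteratively sample, keep the argmin, and shrink the search space.

    loss_fn -- maps a sample (u_t, v_t, ry_t) to a scalar loss
    bounds  -- initial ((u_min, u_max), (v_min, v_max), (ry_min, ry_max))
    """
    rng = np.random.default_rng(0)
    lo = np.array([b[0] for b in bounds], dtype=float)
    hi = np.array([b[1] for b in bounds], dtype=float)
    best = None
    for _ in range(n_iters):
        samples = rng.uniform(lo, hi, size=(n_samples, 3))
        losses = np.array([loss_fn(s) for s in samples])
        best = samples[np.argmin(losses)]
        # Re-centre a smaller search space on the current best sample.
        half = (hi - lo) * shrink / 2.0
        lo, hi = best - half, best + half
    return best  # final (u_b, v_b, ry_b)
```

Each iteration halves the search range around the current best sample, so the resolution of the estimate improves geometrically with the iteration count.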
According to the embodiment of the application, the orientation angle predicted value is acquired from the image, sampling is performed in the search space to generate bottom surface center point sampling values and orientation angle sampling values, and these are optimized with the loss function to generate the coordinate optimized value and the orientation angle optimized value. Because the 2D bounding box can be estimated with relatively high accuracy in visual perception, introducing the 2D bounding box as a constraint improves the accuracy of the vehicle bottom center point and orientation angle, and alleviates the poor optimization caused by the perspective effect, in which nearby objects appear large and distant ones small. Compared with the scheme of projecting the eight vertices of the 3D bounding box back into the 2D image and computing the minimum 2D bounding box enclosing the eight projection points, the intersection-over-union with the 2D bounding box output by the detector is improved, and so is the accuracy of the vehicle's spatial position.
Fig. 4 is a flowchart of another method for generating a spatial position of a vehicle according to an embodiment of the present application.
As an example, referring to fig. 4: a 2D image containing a vehicle is acquired by a camera; the 2D bounding box of the vehicle is obtained by performing object detection on the 2D image, and the length, width, height, and orientation angle predicted value of the vehicle are obtained through a deep neural network; the camera internal parameters and ground equation are determined through camera calibration and ground equation estimation. The bounding box projection lines are then generated from the 2D bounding box, the camera internal parameters, and the ground equation, and a u-v coordinate system is established on the ground plane. Next, the coordinates and orientation angle of the vehicle's bottom surface center point are sampled within a sampling range based on the u-v coordinate system, the sampled values are optimized through the loss function in combination with the projection lines, the length and width, and the orientation angle to generate a coordinate optimized value and an orientation angle optimized value for the bottom surface center point, and the optimization is iterated to obtain the final optimized values. Finally, the spatial position of the vehicle is generated from the coordinate optimized value, the orientation angle optimized value, the length, width, and height, and the u-v coordinate system. In this way, the accuracy of the vehicle's spatial position can be improved and, further, the reliability of applications based on it.
In order to achieve the above embodiment, the present application also proposes a spatial position generating device of a vehicle.
Fig. 5 is a schematic structural diagram of a spatial position generating device of a vehicle according to an embodiment of the present application, where, as shown in fig. 5, the device includes: the first generation module 50, the acquisition module 51, the establishment module 52, the optimization module 53 and the second generation module 54.
The first generation module 50 is configured to generate a bounding box projection line of a 2D bounding box of the vehicle according to an image of the vehicle.
An acquisition module 51 for acquiring size information of the vehicle from the image.
An establishing module 52, configured to establish a 2D coordinate system of the vehicle relative to the ground plane from the bounding box projection lines.
An optimization module 53 for optimizing the center point position and the orientation angle of the lower plane of the vehicle under the 2D coordinate system to generate a coordinate optimized value and an orientation angle optimized value.
A second generation module 54 is configured to generate a spatial position of the vehicle according to the coordinate optimization value and the orientation angle optimization value, and the size information of the vehicle.
In one embodiment of the present application, on the basis of fig. 5, as shown in fig. 6, the first generating module 50 includes: a detection unit 501 configured to detect a vehicle in an image and generate a 2D bounding box position of the vehicle; and a projection unit 502 configured to project the 2D bounding box position of the vehicle to a ground plane to generate a bounding box projection line.
In one embodiment of the present application, the projection unit 502 is specifically configured to: acquiring a ground equation and camera parameters; and projecting the 2D bounding box position of the vehicle to a ground plane according to the ground equation and camera parameters to generate a bounding box projection line.
In one embodiment of the present application, the bounding box projection lines include a first bounding box projection line l, a second bounding box projection line r, and a third bounding box projection line b, and the building module 52 is specifically configured to: establish a coordinate axis u along the direction of the third bounding box projection line b, and establish a coordinate axis v on the ground plane perpendicular to the coordinate axis u.
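One way to realize this axis construction, sketched under the assumption that two points on the projection line b and the ground-plane normal are known (the helper name is hypothetical):

```python
import numpy as np

def ground_frame(b_start, b_end, plane_normal):
    """Build the 2D ground frame: the u axis runs along the third
    bounding box projection line b, and the v axis is perpendicular
    to u while staying inside the ground plane."""
    u = np.asarray(b_end, float) - np.asarray(b_start, float)
    u /= np.linalg.norm(u)
    n = np.asarray(plane_normal, float)
    n /= np.linalg.norm(n)
    v = np.cross(n, u)          # in-plane and perpendicular to u
    v /= np.linalg.norm(v)
    return u, v
```

The cross product with the plane normal guarantees v lies in the ground plane, so any point of the vehicle footprint can be described by two scalars (u, v) instead of three camera-frame coordinates.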
In one embodiment of the present application, the optimization module 53 is specifically configured to: acquire an orientation angle estimated value of the vehicle according to the image; acquire a bottom surface center point of the vehicle in the 2D coordinate system, and sample in a search space according to the bottom surface center point and the orientation angle estimated value to generate a bottom surface center point sampling value and an orientation angle sampling value; and optimize the bottom surface center point sampling value and the orientation angle sampling value according to a loss function to generate the coordinate optimized value and the orientation angle optimized value.
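The sampling-and-scoring loop can be sketched as a grid search around the initial estimates. The patent does not fix a particular loss, so `loss_fn` below is a placeholder the caller supplies (for example, a reprojection error of the footprint corners against the 2D bounding box); all names and search radii here are assumptions:

```python
import numpy as np

def footprint_corners(center, theta, length, width):
    """Corners of the vehicle's bottom rectangle in the ground (u, v)
    frame, given center, heading theta, and vehicle size."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    half = np.array([[ length,  width], [ length, -width],
                     [-length, -width], [-length,  width]]) / 2.0
    return center + half @ R.T

def optimize_pose(center0, theta0, length, width, loss_fn,
                  pos_radius=0.5, ang_radius=0.1, steps=5):
    """Grid-sample around the initial center and orientation and keep
    the sample with the lowest loss (the 'optimized values')."""
    best = (center0, theta0)
    best_loss = loss_fn(footprint_corners(center0, theta0, length, width))
    for du in np.linspace(-pos_radius, pos_radius, steps):
        for dv in np.linspace(-pos_radius, pos_radius, steps):
            for dth in np.linspace(-ang_radius, ang_radius, steps):
                c = center0 + np.array([du, dv])
                th = theta0 + dth
                l = loss_fn(footprint_corners(c, th, length, width))
                if l < best_loss:
                    best, best_loss = (c, th), l
    return best, best_loss
```

A coarse grid like this is only one possible sampler; the same structure accommodates random sampling or coarse-to-fine refinement without changing the loss interface.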
The explanation of the spatial position generating method of the vehicle in the foregoing embodiment is equally applicable to the spatial position generating device of the vehicle in the present embodiment, and will not be repeated here.
According to the spatial position generating device of the vehicle, a bounding box projection line of the 2D bounding box of the vehicle is generated according to an image of the vehicle; size information of the vehicle is acquired according to the image; a 2D coordinate system of the vehicle relative to the ground plane is established according to the bounding box projection line; the center point position and the orientation angle of the lower plane of the vehicle are optimized under the 2D coordinate system to generate a coordinate optimized value and an orientation angle optimized value; and the spatial position of the vehicle is generated according to the coordinate optimized value, the orientation angle optimized value and the size information of the vehicle. In this way, a more accurate vehicle bottom center point and orientation angle can be obtained, improving the accuracy of the spatial position of the vehicle. Furthermore, when the spatial position of the vehicle is applied to intelligent traffic applications such as vehicle-road coordination and autonomous driving, more accurate road condition information can be provided for these applications, improving the reliability of applications based on the spatial position of the vehicle.
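The final assembly step, lifting the optimized ground-frame pose and the size information into a 3D box, can be sketched as follows; the axis conventions and function names are assumptions for illustration, not the patent's notation:

```python
import numpy as np

def box_corners_3d(center_uv, theta, size, origin, u_axis, v_axis):
    """Return the 8 corners of the vehicle's 3D box in the camera frame
    from the optimized (u, v) center, orientation angle, and size
    (length, width, height). origin, u_axis, and v_axis define the
    ground frame; the plane normal is recovered by a cross product."""
    length, width, height = size
    c, s = np.cos(theta), np.sin(theta)
    # footprint corners in the (u, v) ground frame
    half = np.array([[ length,  width], [ length, -width],
                     [-length, -width], [-length,  width]]) / 2.0
    uv = np.asarray(center_uv, float) + half @ np.array([[c, -s], [s, c]]).T
    up = np.cross(u_axis, v_axis)                  # ground-plane normal
    bottom = (np.asarray(origin, float)
              + np.outer(uv[:, 0], u_axis) + np.outer(uv[:, 1], v_axis))
    top = bottom + height * up
    return np.vstack([bottom, top])                # shape (8, 3)
```

Because the pose was optimized in the 2D ground frame, the lift back to 3D is purely linear: each (u, v) footprint corner is expanded along the two ground axes, and the top face is the bottom face shifted by the vehicle height along the plane normal.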
In order to implement the above embodiments, the present application further proposes a computer program product comprising a computer program which, when executed by a processor, implements the method for generating a spatial position of a vehicle according to any of the foregoing embodiments.
According to an embodiment of the present application, the present application also provides an electronic device and a readable storage medium.
Fig. 7 is a block diagram of an electronic device for the spatial position generation method of a vehicle according to an embodiment of the present application. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other suitable computers. Electronic devices may also represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions are meant to be exemplary only, and are not meant to limit implementations of the application described and/or claimed herein.
As shown in fig. 7, the electronic device includes: one or more processors 701, a memory 702, and interfaces for connecting the various components, including high-speed interfaces and low-speed interfaces. The various components are interconnected using different buses and may be mounted on a common motherboard or in other manners as desired. The processor may process instructions executed within the electronic device, including instructions stored in or on the memory to display graphical information of a GUI on an external input/output device, such as a display device coupled to the interface. In other embodiments, multiple processors and/or multiple buses may be used with multiple memories, if desired. Likewise, multiple electronic devices may be connected, each providing a portion of the necessary operations (e.g., as a server array, a set of blade servers, or a multiprocessor system). One processor 701 is taken as an example in fig. 7.
Memory 702 is a non-transitory computer readable storage medium provided by the present application. The memory stores instructions executable by the at least one processor to cause the at least one processor to perform the method for generating the spatial position of the vehicle provided by the application. The non-transitory computer-readable storage medium of the present application stores computer instructions for causing a computer to execute the spatial position generation method of the vehicle provided by the present application.
The memory 702, as a non-transitory computer readable storage medium, is used to store non-transitory software programs, non-transitory computer executable programs, and modules, such as the program instructions/modules corresponding to the method for generating a spatial position of a vehicle in the embodiments of the present application (e.g., the first generation module 50, the acquisition module 51, the building module 52, the optimization module 53, and the second generation module 54 shown in fig. 5). The processor 701 executes various functional applications and data processing of the server by running the non-transitory software programs, instructions, and modules stored in the memory 702, that is, implements the spatial position generation method of the vehicle in the above method embodiments.
The memory 702 may include a program storage area and a data storage area, where the program storage area may store an operating system and an application program required by at least one function, and the data storage area may store data created according to the use of the electronic device, etc. In addition, the memory 702 may include high-speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid-state storage device. In some embodiments, the memory 702 may optionally include memories remotely located with respect to the processor 701, and these remote memories may be connected to the electronic device via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The electronic device of the vehicle spatial position generation method may further include: an input device 703 and an output device 704. The processor 701, the memory 702, the input device 703, and the output device 704 may be connected by a bus or in other manners; connection by a bus is taken as an example in fig. 7.
The input device 703 may receive input numeric or character information and generate key signal inputs related to user settings and function control of the electronic device; examples include a touch screen, a keypad, a mouse, a trackpad, a touchpad, a pointing stick, one or more mouse buttons, a trackball, a joystick, and other input devices. The output device 704 may include a display device, auxiliary lighting devices (e.g., LEDs), haptic feedback devices (e.g., vibration motors), and the like. The display device may include, but is not limited to, a liquid crystal display (LCD), a light emitting diode (LED) display, and a plasma display. In some implementations, the display device may be a touch screen.
Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, application-specific integrated circuits (ASICs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be a special-purpose or general-purpose programmable processor, and which can receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
These computer programs (also referred to as programs, software, software applications, or code) include machine instructions for a programmable processor, and may be implemented in a high-level procedural and/or object-oriented programming language, and/or in assembly/machine language. As used herein, the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, apparatus, and/or device (e.g., magnetic discs, optical disks, memory, programmable logic devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a background component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such background, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), wide Area Networks (WANs), and the internet.
The computer system may include a client and a server. The client and the server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server can be a cloud server, also called a cloud computing server or a cloud host, which is a host product in a cloud computing service system and overcomes the defects of high management difficulty and weak service scalability in traditional physical hosts and virtual private server (VPS) services.
It should be appreciated that steps may be reordered, added, or deleted using the various forms of flows shown above. For example, the steps described in the present application may be performed in parallel, sequentially, or in a different order, as long as the desired results of the technical solutions disclosed in the present application can be achieved; no limitation is imposed herein.
The above embodiments do not limit the scope of the present application. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present application should be included in the scope of the present application.

Claims (10)

1. A spatial position generation method of a vehicle, comprising:
generating bounding box projection lines of a 2D bounding box of a vehicle according to an image of the vehicle;
acquiring size information of the vehicle according to the image;
establishing a 2D coordinate system of the vehicle relative to the ground plane according to the bounding box projection line;
optimizing a center point position and an orientation angle of a lower plane of the vehicle under the 2D coordinate system to generate a coordinate optimized value and an orientation angle optimized value, comprising:
acquiring an orientation angle estimated value of the vehicle according to the image;
acquiring a bottom surface center point of the vehicle in the 2D coordinate system, and sampling in a search space according to the bottom surface center point and the orientation angle estimated value to generate a bottom surface center point sampling value and an orientation angle sampling value;
optimizing the bottom surface center point sampling value and the orientation angle sampling value according to a loss function to generate the coordinate optimized value and the orientation angle optimized value; and
generating the spatial position of the vehicle according to the coordinate optimized value, the orientation angle optimized value and the size information of the vehicle.
2. The spatial position generating method of a vehicle according to claim 1, wherein said generating a bounding box projection line of a 2D bounding box of the vehicle from an image of the vehicle comprises:
detecting the vehicle in the image and generating a 2D bounding box position of the vehicle; and
projecting the 2D bounding box position of the vehicle to a ground plane to generate the bounding box projection lines.
3. The spatial location generation method of a vehicle of claim 2, wherein the projecting the 2D bounding box location of the vehicle to a ground plane to generate bounding box projection lines comprises:
acquiring a ground equation and camera parameters;
projecting the 2D bounding box position of the vehicle to the ground plane according to the ground equation and the camera parameters to generate the bounding box projection lines.
4. The spatial position generating method of a vehicle according to claim 1, wherein the bounding box projection line includes a first bounding box projection line l, a second bounding box projection line r, and a third bounding box projection line b, and the establishing a 2D coordinate system of the vehicle relative to the ground plane according to the bounding box projection line comprises:
establishing a coordinate axis u along the direction of the third bounding box projection line b, and establishing a coordinate axis v on the ground plane perpendicular to the coordinate axis u.
5. A spatial position generating device of a vehicle, comprising:
A first generation module for generating bounding box projection lines of a 2D bounding box of a vehicle according to an image of the vehicle;
the acquisition module is used for acquiring the size information of the vehicle according to the image;
The building module is used for building a 2D coordinate system of the vehicle relative to the ground plane according to the bounding box projection line;
an optimization module for optimizing a center point position and an orientation angle of a lower plane of the vehicle in the 2D coordinate system to generate a coordinate optimized value and an orientation angle optimized value, comprising:
acquiring an orientation angle estimated value of the vehicle according to the image;
acquiring a bottom surface center point of the vehicle in the 2D coordinate system, and sampling in a search space according to the bottom surface center point and the orientation angle estimated value to generate a bottom surface center point sampling value and an orientation angle sampling value;
optimizing the bottom surface center point sampling value and the orientation angle sampling value according to a loss function to generate the coordinate optimized value and the orientation angle optimized value; and
a second generation module, configured to generate the spatial position of the vehicle according to the coordinate optimized value, the orientation angle optimized value and the size information of the vehicle.
6. The spatial position generating device of a vehicle according to claim 5, wherein said first generating module comprises:
a detection unit, configured to detect the vehicle in the image and generate a 2D bounding box position of the vehicle; and
a projection unit, configured to project the 2D bounding box position of the vehicle to a ground plane to generate the bounding box projection line.
7. The spatial position generating device of a vehicle according to claim 6, wherein said projection unit is specifically configured to:
acquiring a ground equation and camera parameters;
projecting the 2D bounding box position of the vehicle to the ground plane according to the ground equation and the camera parameters to generate the bounding box projection line.
8. The spatial position generating device of a vehicle according to claim 5, wherein the bounding box projection line includes a first bounding box projection line l, a second bounding box projection line r, and a third bounding box projection line b, and the building module is specifically configured to:
establish a coordinate axis u along the direction of the third bounding box projection line b, and establish a coordinate axis v on the ground plane perpendicular to the coordinate axis u.
9. An electronic device, comprising:
at least one processor; and
A memory communicatively coupled to the at least one processor; wherein,
The memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of generating a spatial location of a vehicle of any one of claims 1-4.
10. A non-transitory computer-readable storage medium storing computer instructions, wherein the computer instructions are configured to cause a computer to perform the spatial position generation method of the vehicle according to any one of claims 1-4.
CN202010605263.0A 2020-06-29 Method, device, equipment and storage medium for generating spatial position of vehicle Active CN111968071B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010605263.0A CN111968071B (en) 2020-06-29 Method, device, equipment and storage medium for generating spatial position of vehicle

Publications (2)

Publication Number Publication Date
CN111968071A CN111968071A (en) 2020-11-20
CN111968071B true CN111968071B (en) 2024-07-05

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109358622A (en) * 2018-10-12 2019-02-19 华北科技学院 Localization method, electronic equipment and the computer readable storage medium of robot label



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20240530

Address after: Room 1205, 12th Floor, Building 3, No. 28 Jingsheng South 1st Street, Beijing Economic and Technological Development Zone (Tongzhou), Tongzhou District, Beijing, 101100

Applicant after: BEIJING CHINASOFT ORITECH INFORMAITON TECHNOLOGY CO.,LTD.

Country or region after: China

Address before: 2 / F, baidu building, 10 Shangdi 10th Street, Haidian District, Beijing 100085

Applicant before: BEIJING BAIDU NETCOM SCIENCE AND TECHNOLOGY Co.,Ltd.

Country or region before: China

GR01 Patent grant