CN111161202A - Vehicle behavior information acquisition method and device, computer equipment and storage medium - Google Patents


Info

Publication number
CN111161202A
CN111161202A (application number CN201911403414.8A)
Authority
CN
China
Prior art keywords
vehicle
scene
vehicle behavior
point cloud
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911403414.8A
Other languages
Chinese (zh)
Inventor
周康明
魏宇飞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Eye Control Technology Co Ltd
Original Assignee
Shanghai Eye Control Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Eye Control Technology Co Ltd
Priority to CN201911403414.8A
Publication of CN111161202A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G06T7/00 Image analysis
    • G06T7/50 Depth or shape recovery
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10028 Range image; Depth image; 3D point clouds

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Traffic Control Systems (AREA)

Abstract

The application relates to a vehicle behavior information acquisition method, a vehicle behavior information acquisition device, a computer device and a storage medium. The method comprises the following steps: acquiring scene depth information of a preset scene; the scene depth information is point cloud information obtained by detecting static objects in a scene by a laser radar; acquiring a vehicle behavior image; the vehicle behavior image is an image of a vehicle shot by a camera; generating a vehicle pseudo-point cloud picture corresponding to the vehicle behavior image based on camera internal parameters and camera external parameters of the camera according to the scene depth information and the vehicle behavior image; the vehicle pseudo-point cloud picture is a point cloud image of a three-dimensional space including vehicle behavior image features; and identifying the vehicle pseudo-point cloud picture to obtain vehicle behavior information. The method can improve the identification precision.

Description

Vehicle behavior information acquisition method and device, computer equipment and storage medium
Technical Field
The present disclosure relates to the field of vehicle monitoring technologies, and in particular, to a method and an apparatus for acquiring vehicle behavior information, a computer device, and a storage medium.
Background
With the development of science and technology, automatically detecting road vehicle behaviors with equipment has greatly reduced the manual workload.
In general, road vehicle behavior detection mainly involves acquiring RGB (red, green, blue) images with roadside cameras and analyzing the acquired images algorithmically. For example, image recognition is performed on a captured RGB image to obtain the position of the vehicle in the image. At present, the detection of illegal road vehicle behaviors relies on recognition based on two-dimensional plane images.
However, this conventional approach of acquiring vehicle behavior by recognizing two-dimensional plane images has low accuracy.
Disclosure of Invention
In view of the above, it is necessary to provide a vehicle behavior information acquisition method, apparatus, computer device, and storage medium capable of improving the vehicle behavior recognition accuracy in view of the above technical problems.
In a first aspect, an embodiment of the present application provides a vehicle behavior information acquisition method, where the method includes:
acquiring scene depth information of a preset scene; the scene depth information is point cloud information obtained by detecting static objects in a scene by a laser radar;
acquiring a vehicle behavior image; the vehicle behavior image is an image of a vehicle shot by a camera;
generating a vehicle pseudo-point cloud picture corresponding to the vehicle behavior image based on camera internal parameters and camera external parameters of the camera according to the scene depth information and the vehicle behavior image; the vehicle pseudo-point cloud picture is a point cloud image of a three-dimensional space including vehicle behavior image features;
and identifying the vehicle pseudo-point cloud picture to obtain vehicle behavior information.
In one embodiment, the generating a vehicle pseudo-point cloud map corresponding to the vehicle behavior image based on camera internal parameters and camera external parameters of the camera according to the scene depth information and the vehicle behavior image includes:
generating a scene sparse depth map on a plane of the vehicle behavior image according to the scene depth information based on the camera internal parameters and the camera external parameters;
obtaining a scene dense depth map according to the scene sparse depth map and the vehicle behavior image;
and generating the vehicle pseudo-point cloud picture according to the scene dense depth picture based on the camera internal parameters and the camera external parameters.
In one embodiment, the obtaining a scene dense depth map according to the scene sparse depth map and the vehicle behavior image includes:
and inputting the scene sparse depth map and the vehicle behavior image into a preset depth completion network to obtain a scene dense depth map.
In one embodiment, the generating the vehicle pseudo-point cloud map from the scene dense depth map based on the camera internal parameters and the camera external parameters includes:
and projecting the scene dense depth map back to a three-dimensional space based on the camera internal parameters and the camera external parameters to generate the vehicle pseudo-point cloud map.
In one embodiment, the identifying the vehicle pseudo-point cloud picture to obtain vehicle behavior information includes:
inputting the vehicle pseudo-point cloud picture into a preset three-dimensional target detection model to obtain vehicle behavior information; wherein the vehicle behavior information includes at least one of a vehicle position, a vehicle size, and a vehicle direction.
In one embodiment, the obtaining of the scene depth information of the preset scene includes:
and detecting a preset scene by adopting a laser radar to obtain the scene depth information of the preset scene.
In one embodiment, the method further comprises:
and obtaining a vehicle behavior detection result according to the vehicle behavior information and a preset judgment rule.
In a second aspect, an embodiment of the present application provides a vehicle behavior information acquiring apparatus, including:
the acquiring device is used for acquiring scene depth information and a vehicle behavior image of a preset scene; the scene depth information is point cloud information obtained by detecting a static object in a scene by a laser radar, and the vehicle behavior image is an image of a vehicle obtained by shooting by a camera;
the processing module is used for generating a vehicle pseudo-point cloud picture corresponding to the vehicle behavior image based on camera internal parameters and camera external parameters of the camera according to the scene depth information and the vehicle behavior image; the vehicle pseudo-point cloud picture is a point cloud image of a three-dimensional space including vehicle behavior image features;
and the identification module is used for identifying the vehicle pseudo-point cloud picture to obtain vehicle behavior information.
In a third aspect, an embodiment of the present application provides a computer device, including a memory and a processor, where the memory stores a computer program, and the processor implements the following steps when executing the computer program:
acquiring scene depth information of a preset scene; the scene depth information is point cloud information obtained by detecting static objects in a scene by a laser radar;
acquiring a vehicle behavior image; the vehicle behavior image is an image of a vehicle shot by a camera;
generating a vehicle pseudo-point cloud picture corresponding to the vehicle behavior image based on camera internal parameters and camera external parameters of the camera according to the scene depth information and the vehicle behavior image; the vehicle pseudo-point cloud picture is a point cloud image of a three-dimensional space including vehicle behavior image features;
and identifying the vehicle pseudo-point cloud picture to obtain vehicle behavior information.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the following steps:
acquiring scene depth information of a preset scene; the scene depth information is point cloud information obtained by detecting static objects in a scene by a laser radar;
acquiring a vehicle behavior image; the vehicle behavior image is an image of a vehicle shot by a camera;
generating a vehicle pseudo-point cloud picture corresponding to the vehicle behavior image based on camera internal parameters and camera external parameters of the camera according to the scene depth information and the vehicle behavior image; the vehicle pseudo-point cloud picture is a point cloud image of a three-dimensional space including vehicle behavior image features;
and identifying the vehicle pseudo-point cloud picture to obtain vehicle behavior information.
According to the vehicle behavior information acquisition method and device, the computer equipment and the storage medium, the computer equipment acquires scene depth information obtained by detecting static objects in a preset scene with a laser radar, and acquires a vehicle behavior image captured by a camera. The computer equipment then fuses the scene depth information and the vehicle behavior image in two-dimensional space based on the camera internal parameters and camera external parameters of the camera, and projects the result back into three-dimensional space to obtain a vehicle pseudo-point cloud map of the three-dimensional space representing the vehicle behavior image features; the vehicle behavior image and the scene depth information are thereby fused into point cloud data that incorporates the vehicle behavior. Because the computer equipment obtains the vehicle behavior information by recognizing a three-dimensional vehicle pseudo-point cloud map rather than a two-dimensional plane image, the recognition accuracy and precision are greatly improved, which in turn improves the accuracy and precision of subsequent processing based on the vehicle behavior information, such as traffic condition monitoring, vehicle tracking, or the collection of big data on vehicle driving characteristics.
Drawings
FIG. 1 is a diagram illustrating an internal structure of a computer device according to an embodiment;
FIG. 2 is a schematic flow chart of a vehicle behavior information acquisition method according to an embodiment;
fig. 3 is a schematic flowchart of a vehicle behavior information acquisition method according to another embodiment;
fig. 4 is a schematic structural diagram of a vehicle behavior information acquisition device according to an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
The vehicle behavior information acquisition method provided by the embodiments of the application can be applied to the computer device shown in fig. 1. The computer device comprises a processor, a memory, a network interface, a database, a display screen, and an input device connected through a system bus. The processor of the computer device provides computing and control capabilities. The memory of the computer device comprises a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for running the operating system and the computer program in the non-volatile storage medium. The database of the computer device is used to store the vehicle behavior images in the following embodiments; refer to the following embodiments for a detailed description of the vehicle behavior images. The network interface of the computer device may be used to communicate with external devices over a network connection. Optionally, the computer device may be a server, a desktop computer, a personal digital assistant, another terminal device such as a tablet computer or a mobile phone, or a cloud or remote server; the embodiments of the application do not limit the specific form of the computer device. The display screen of the computer device may be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer device may be a touch layer covering the display screen, a key, trackball, or touchpad arranged on the housing of the computer device, or an external keyboard, touchpad, or mouse. Of course, the input device and the display screen may also not be part of the computer device but be external devices connected to it.
Those skilled in the art will appreciate that the architecture shown in fig. 1 is merely a block diagram of some of the structures associated with the disclosed aspects and is not intended to limit the computing devices to which the disclosed aspects apply, as particular computing devices may include more or less components than those shown, or may combine certain components, or have a different arrangement of components.
The following describes the technical solutions of the present application and how to solve the above technical problems with specific examples. The following several specific embodiments may be combined with each other, and details of the same or similar concepts or processes may not be repeated in some embodiments. Embodiments of the present application will be described below with reference to the accompanying drawings.
It should be noted that the execution subject of the method embodiments described below may be a vehicle behavior information obtaining device, which may be implemented as part of or all of the computer device described above by software, hardware, or a combination of software and hardware. The following method embodiments are described by taking the execution subject as the computer device as an example.
Fig. 2 is a schematic flowchart of a vehicle behavior information acquiring method according to an embodiment. The embodiment relates to a specific process for judging the vehicle behavior by the computer equipment. As shown in fig. 2, the method includes:
s10, obtaining scene depth information of a preset scene; the scene depth information is point cloud information obtained by detecting static objects in a scene by a laser radar.
Specifically, the computer device may read scene depth information of a preset scene stored in the memory, receive scene depth information sent by another device, or obtain the scene depth information by laser radar detection; this embodiment does not limit the manner of acquisition. The preset scene is an environment in which vehicle behavior needs to be observed, for example a traffic-sensitive area such as a crossroad, a T-junction, or the side of a building. The scene depth information is point cloud information obtained by the laser radar detecting static objects in the preset scene; the static objects may include fixed-position objects such as roads, buildings, and road markings. For a given preset scene, the laser radar only needs to perform detection once: the resulting scene depth information can be reused later without deploying the laser radar again, which saves equipment cost and is more convenient.
S20, acquiring a vehicle behavior image; the vehicle behavior image is an image of the vehicle shot by a camera.
Specifically, the computer device may acquire the vehicle behavior image by reading it from the memory or by receiving it from the camera. The vehicle behavior image is an image including the vehicle, captured by a camera arranged on the road; it may show the vehicle in the driving process or in a stopped state, and its type is an RGB image, which is not limited in this embodiment.
S30, generating a vehicle pseudo-point cloud picture corresponding to the vehicle behavior image based on camera internal parameters and camera external parameters of the camera according to the scene depth information and the vehicle behavior image; the vehicle pseudo-point cloud picture is a point cloud image of a three-dimensional space including vehicle behavior image features.
Specifically, the computer device may project the scene depth information into two-dimensional space to obtain a two-dimensional image, fuse it with the vehicle behavior image in the image plane based on the camera internal parameters and camera external parameters (for example, by pixel superposition or feature vector superposition), and then project the fused result back into three-dimensional space to obtain the vehicle pseudo-point cloud map. The vehicle behavior image and the scene depth information are thus fused into point cloud data that incorporates the vehicle behavior. Optionally, when the projection of the scene depth information into two-dimensional space has low resolution, the low-resolution two-dimensional image needs to be completed before fusion.
And S40, identifying the vehicle pseudo-point cloud picture to obtain vehicle behavior information.
Specifically, the computer device identifies the vehicle pseudo-point cloud picture to obtain vehicle behavior information, for example, performs feature extraction on the vehicle pseudo-point cloud picture to identify vehicle behavior information representing the vehicle behavior such as the shape and the position of the vehicle.
In this embodiment, the computer device obtains scene depth information produced by the laser radar detecting static objects in a preset scene, and obtains a vehicle behavior image captured by the camera. The computer device then fuses the scene depth information and the vehicle behavior image in two-dimensional space based on the camera internal parameters and camera external parameters of the camera, and projects the result back into three-dimensional space to obtain a vehicle pseudo-point cloud map representing the vehicle behavior image features; the vehicle behavior image and the scene depth information are thereby fused into point cloud data that incorporates the vehicle behavior. Because the vehicle behavior information is obtained by recognizing a three-dimensional vehicle pseudo-point cloud map rather than a two-dimensional plane image, the recognition accuracy and precision are greatly improved, which in turn improves the accuracy and precision of subsequent processing based on the vehicle behavior information, such as traffic condition monitoring, vehicle tracking, or the collection of big data on vehicle driving characteristics.
Optionally, one possible implementation manner of step S10 may include: detecting the preset scene with a laser radar to obtain the scene depth information of the preset scene. Because the scene depth information is three-dimensional point cloud data of the preset scene, vehicle behavior can be recognized in three-dimensional space, making the recognition more accurate.
Optionally, on the basis of the foregoing embodiment, a possible implementation manner of the foregoing step S30 may be as shown in fig. 3, and includes:
and S31, generating a scene sparse depth map on the plane of the vehicle behavior image according to the scene depth information based on the camera internal parameters and the camera external parameters.
Specifically, the computer device projects the scene depth information onto the plane where the vehicle behavior image is located; the projection is performed based on the calibrated camera internal parameters and camera external parameters, yielding a scene sparse depth map. Because the resolution of the point cloud data acquired by the laser radar is far lower than that of the vehicle behavior image, many pixels have no corresponding depth value after projection; these pixels can be filled with 0, producing a scene sparse depth map of the same size as the vehicle behavior image. The scene sparse depth map is a two-dimensional image with depth information, that is, each pixel carries one depth value.
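As a concrete illustration of the projection in step S31, the sketch below maps lidar points onto the image plane under a standard pinhole camera model; the function name, the intrinsic matrix K, and the extrinsics (R, t) are illustrative assumptions, not specified by the patent.

```python
import numpy as np

def project_to_sparse_depth(points_world, K, R, t, h, w):
    """Project lidar points (N x 3, world frame) onto an h-by-w image
    plane, producing a sparse depth map; pixels with no lidar return
    are filled with 0, as the description above states."""
    # World frame to camera frame: X_cam = R @ X_world + t
    pts_cam = points_world @ R.T + t
    depth = np.zeros((h, w), dtype=np.float32)
    z = pts_cam[:, 2]
    valid = z > 0  # keep only points in front of the camera
    # Pinhole projection with the intrinsic matrix K
    uvw = pts_cam[valid] @ K.T
    u = np.round(uvw[:, 0] / uvw[:, 2]).astype(int)
    v = np.round(uvw[:, 1] / uvw[:, 2]).astype(int)
    inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    depth[v[inside], u[inside]] = z[valid][inside]
    return depth
```

When several points land on the same pixel, the one written last wins; a more careful version would keep the nearest return per pixel.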
And S32, obtaining a scene dense depth map according to the scene sparse depth map and the vehicle behavior image.
Specifically, the computer device may fuse the scene sparse depth map and the vehicle behavior image, for example by superimposing their corresponding pixels or feature vectors, to obtain a scene dense depth map. It should be noted that the scene dense depth map is an image obtained by fusing the information of the preset scene with the vehicle behavior image, and it is a two-dimensional image carrying depth information.
Optionally, one possible implementation manner of this step may include: inputting the scene sparse depth map and the vehicle behavior image into a preset depth completion network to obtain the scene dense depth map. Specifically, the depth completion network is a neural network model obtained through training. The computer device may therefore input the scene sparse depth map and the vehicle behavior image into the preset depth completion network, which outputs a scene dense depth map characterizing the vehicle behavior in the preset scene. In this way, the scene sparse depth map and the vehicle behavior image are fused into a scene dense depth map, and the vehicle behavior can be judged on that basis, so the acquired vehicle behavior information is more accurate.
And S33, generating the vehicle pseudo-point cloud picture according to the scene dense depth map based on the camera internal parameters and the camera external parameters.
Specifically, the computer device may convert the scene dense depth map back to a three-dimensional space based on calibrated camera internal parameters and camera external parameters, and generate the vehicle pseudo-point cloud map.
Optionally, the computer device may project the scene dense depth map back into three-dimensional space based on the camera internal parameters and the camera external parameters; because the scene dense depth map carries depth values, a three-dimensional vehicle pseudo-point cloud map including the vehicle behavior information can be generated. The vehicle pseudo-point cloud map is three-dimensional data fused with the vehicle behavior information, and its point density is higher than that of the scene depth information, so it can more accurately describe the information representing the vehicle behavior, such as the position and direction of a vehicle in the preset scene.
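The back-projection in step S33 can be sketched as follows, again under a pinhole-model assumption with intrinsics K and extrinsics (R, t); the function name and array layout are illustrative, not from the patent.

```python
import numpy as np

def depth_to_pseudo_point_cloud(dense_depth, K, R, t):
    """Back-project every pixel of a dense depth map into 3D,
    yielding a pseudo-point cloud with one point per pixel."""
    h, w = dense_depth.shape
    v, u = np.mgrid[0:h, 0:w]
    z = dense_depth.ravel()
    # Homogeneous pixel coordinates scaled by depth, then undo K
    pix = np.stack([u.ravel() * z, v.ravel() * z, z], axis=1)
    pts_cam = pix @ np.linalg.inv(K).T
    # Camera frame back to world frame: X_world = R^T (X_cam - t)
    return (pts_cam - t) @ R
```

Composed with a projection of the scene depth, this round trip is what lets a two-dimensional fused image carry three-dimensional structure.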
In this embodiment, the computer device generates a scene sparse depth map on the plane of the vehicle behavior image according to the scene depth information based on the camera internal parameters and camera external parameters, then obtains a scene dense depth map fused with the vehicle behavior information according to the scene sparse depth map and the vehicle behavior image, and finally generates the vehicle pseudo-point cloud map from the scene dense depth map based on the camera internal parameters and camera external parameters. The two-dimensional vehicle behavior image is thereby converted into a three-dimensional vehicle pseudo-point cloud map of higher point density, so the represented vehicle behavior is more accurate.
Optionally, on the basis of the foregoing embodiments, one possible implementation manner of step S40 may include: inputting the vehicle pseudo-point cloud map into a preset three-dimensional target detection model to obtain the vehicle behavior information, where the vehicle behavior information includes at least one of a vehicle position, a vehicle size, and a vehicle direction. Specifically, the computer device may input the vehicle pseudo-point cloud map into the three-dimensional target detection model to identify at least one of the vehicle position, vehicle size, and vehicle direction. It should be noted that the three-dimensional target detection model is a neural network model obtained by training on point cloud images labeled with vehicle behavior information. In this way, at least one of the vehicle position, vehicle size, and vehicle direction of a vehicle in the preset scene can be accurately identified, which greatly improves the accuracy and precision of subsequent processing based on the vehicle behavior information, such as traffic condition monitoring, vehicle tracking, or the collection of big data on vehicle driving characteristics.
Optionally, on the basis of the foregoing embodiments, the method further includes: obtaining a vehicle behavior detection result according to the vehicle behavior information and a preset judgment rule. That is, the computer device can combine the vehicle behavior information with a preset judgment rule to obtain a vehicle behavior detection result. For example, when the vehicle position is on a double yellow line, the computer device may determine that the vehicle is pressing the line and violating traffic regulations; when the vehicle direction is a first direction and the vehicle position is on a road whose permitted driving direction is a second direction opposite to the first direction, the vehicle can be judged to be driving the wrong way, also violating traffic regulations. In this embodiment, the computer device obtains the vehicle behavior detection result from the vehicle behavior information and the preset judgment rule, making vehicle behavior detection more automatic; compared with manual judgment, it is faster and more accurate.
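A minimal sketch of such a judgment rule, for the wrong-way example: headings are assumed to be given in degrees, and both the function name and the 90-degree tolerance are illustrative choices rather than values from the patent.

```python
def is_wrong_way(vehicle_heading_deg, road_direction_deg, tolerance_deg=90.0):
    """Flag wrong-way driving: compare the detected vehicle direction
    with the road's permitted driving direction; an angular difference
    beyond the tolerance (i.e. roughly opposite) is a violation."""
    # Absolute angular difference wrapped into [0, 180]
    diff = abs((vehicle_heading_deg - road_direction_deg + 180.0) % 360.0 - 180.0)
    return diff > tolerance_deg
```

The modulo wrap handles headings on either side of 0/360, so 350 degrees against a 0-degree road reads as a 10-degree difference, not 350.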
It should be understood that although the steps in the flowcharts of figs. 2-3 are displayed in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated herein, the execution of these steps is not strictly limited to that order, and they may be performed in other orders. Moreover, at least some of the steps in figs. 2-3 may include multiple sub-steps or stages that are not necessarily completed at the same moment but may be executed at different moments, and their execution order is not necessarily sequential; they may be executed in turn or alternately with other steps or with at least part of the sub-steps or stages of other steps.
In one embodiment, as shown in fig. 4, there is provided a vehicle behavior information acquiring apparatus including:
the acquiring device 100 is used for acquiring scene depth information and a vehicle behavior image of a preset scene; the scene depth information is point cloud information obtained by detecting a static object in a scene by a laser radar, and the vehicle behavior image is an image of a vehicle obtained by shooting by a camera;
the processing module 200 is configured to generate a vehicle pseudo-point cloud map corresponding to the vehicle behavior image based on camera internal parameters and camera external parameters of the camera according to the scene depth information and the vehicle behavior image; the vehicle pseudo-point cloud picture is a point cloud image of a three-dimensional space including vehicle behavior image features;
and the identification module 300 is configured to identify the vehicle pseudo-point cloud picture to obtain vehicle behavior information.
In an embodiment, the processing module 200 is specifically configured to generate a scene sparse depth map on a plane of the vehicle behavior image according to the scene depth information based on the camera internal parameters and the camera external parameters; obtaining a scene dense depth map according to the scene sparse depth map and the vehicle behavior image; and generating the vehicle pseudo-point cloud picture according to the scene dense depth picture based on the camera internal parameters and the camera external parameters.
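The first step of the processing module, projecting the LiDAR point cloud onto the image plane to form the scene sparse depth map, can be sketched with standard pinhole-camera geometry. The conventions here (a 3x3 intrinsic matrix `K`, a 4x4 world-to-camera extrinsic matrix `T`, nearest-pixel rounding) are assumptions for illustration, not details fixed by the patent:

```python
import numpy as np

def lidar_to_sparse_depth(points, K, T, h, w):
    """Project LiDAR points (N, 3, world frame) into an h x w image plane,
    producing a sparse depth map (0 = no LiDAR return at that pixel)."""
    pts_h = np.hstack([points, np.ones((points.shape[0], 1))])  # homogeneous coords
    cam = (T @ pts_h.T).T[:, :3]              # world frame -> camera frame
    cam = cam[cam[:, 2] > 0]                  # keep points in front of the camera
    uv = (K @ cam.T).T                        # perspective projection
    z = uv[:, 2]
    u = (uv[:, 0] / z).astype(int)            # nearest-pixel rounding
    v = (uv[:, 1] / z).astype(int)
    valid = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    depth = np.zeros((h, w), dtype=np.float32)
    depth[v[valid], u[valid]] = z[valid]
    return depth
```

A point at world coordinates (0, 0, 10) with an identity extrinsic and principal point (50, 50) lands at pixel (50, 50) with depth 10.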
In an embodiment, the processing module 200 is specifically configured to input the scene sparse depth map and the vehicle behavior image into a preset depth completion network to obtain a scene dense depth map.
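The patent leaves the preset depth completion network unspecified. As a stand-in, the sketch below densifies a sparse depth map by iteratively averaging valid 4-neighbours; a real system would instead use a learned completion network that also conditions on the RGB vehicle behavior image:

```python
import numpy as np

def naive_depth_completion(sparse, iterations=50):
    """Hole-filling baseline standing in for a learned depth completion
    network: repeatedly fill empty pixels (value 0) with the mean of
    their valid 4-neighbours until no holes remain."""
    dense = sparse.astype(np.float32)
    for _ in range(iterations):
        empty = dense == 0
        if not empty.any():
            break
        padded = np.pad(dense, 1)             # zero padding = "no depth"
        neigh = np.stack([padded[:-2, 1:-1], padded[2:, 1:-1],
                          padded[1:-1, :-2], padded[1:-1, 2:]])
        counts = (neigh > 0).sum(axis=0)      # number of valid neighbours
        sums = neigh.sum(axis=0)
        fill = empty & (counts > 0)           # only fill pixels with evidence
        dense[fill] = sums[fill] / counts[fill]
    return dense
```

On a 3x3 map with two known corners of equal depth, the fill converges to a constant map of that depth.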
In an embodiment, the processing module 200 is specifically configured to project the scene dense depth map back into three-dimensional space based on the camera internal parameters and the camera external parameters, so as to generate the vehicle pseudo-point cloud picture.
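The back-projection step is the inverse of the image-plane projection: each pixel (u, v) with depth z is lifted to a camera-frame point via the inverse intrinsics, then mapped to the world frame via the inverse extrinsics. The conventions (3x3 intrinsics `K`, 4x4 world-to-camera extrinsics `T`) are illustrative assumptions:

```python
import numpy as np

def depth_to_pseudo_cloud(depth, K, T):
    """Back-project a dense depth map into an (N, 3) pseudo point cloud.
    Pixel (u, v) with depth z maps to inv(K) @ [u*z, v*z, z] in the
    camera frame, then to the world frame via inv(T)."""
    h, w = depth.shape
    v, u = np.mgrid[0:h, 0:w]                   # per-pixel coordinates
    z = depth.ravel()
    valid = z > 0                               # skip pixels without depth
    uvz = np.stack([u.ravel() * z, v.ravel() * z, z])[:, valid]
    cam = np.linalg.inv(K) @ uvz                # camera-frame points
    cam_h = np.vstack([cam, np.ones(cam.shape[1])])
    world = (np.linalg.inv(T) @ cam_h)[:3].T    # (N, 3) world-frame cloud
    return world
```

With an identity extrinsic, a single pixel at the principal point with depth 10 back-projects to the world point (0, 0, 10), recovering the example above's geometry.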
In one embodiment, the identification module 300 is specifically configured to input the vehicle pseudo-point cloud image into a preset three-dimensional target detection model to obtain vehicle behavior information; wherein the vehicle behavior information includes at least one of a vehicle position, a vehicle size, and a vehicle direction.
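A three-dimensional target detection model typically outputs oriented 3-D boxes. Assuming a common `[x, y, z, length, width, height, yaw]` box layout (an assumption for illustration — the patent does not fix a format), the vehicle behavior information can be read off as follows:

```python
import math

def parse_detection(box):
    """Convert one 3-D detection box, assumed to be laid out as
    [x, y, z, length, width, height, yaw (radians)], into the vehicle
    behavior information named in the text: position, size, direction."""
    x, y, z, l, w, h, yaw = box
    return {
        "vehicle_position": (x, y, z),
        "vehicle_size": (l, w, h),
        "vehicle_direction_deg": math.degrees(yaw) % 360.0,  # normalise to [0, 360)
    }
```

The resulting dictionary is what a downstream judgment rule (such as the wrong-way check described later in this document) would consume.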
In an embodiment, the acquiring module 100 is specifically configured to detect the preset scene with a laser radar to obtain the scene depth information of the preset scene.
In one embodiment, the identification module 300 is further configured to obtain a vehicle behavior detection result according to the vehicle behavior information and a preset determination rule.
For the specific limitations of the vehicle behavior information acquisition device, reference may be made to the limitations of the vehicle behavior information acquisition method above; details are not repeated here. Each module in the vehicle behavior information acquisition device may be implemented wholly or partly in software, hardware, or a combination of the two. The modules may be embedded in, or independent of, a processor of the computer device in hardware form, or stored in a memory of the computer device in software form, so that the processor can invoke them and execute the operations corresponding to each module.
In one embodiment, a computer device is provided, comprising a memory and a processor, the memory having a computer program stored therein, the processor implementing the following steps when executing the computer program:
acquiring scene depth information of a preset scene; the scene depth information is point cloud information obtained by detecting static objects in a scene by a laser radar;
acquiring a vehicle behavior image; the vehicle behavior image is an image of a vehicle shot by a camera;
generating a vehicle pseudo-point cloud picture corresponding to the vehicle behavior image based on camera internal parameters and camera external parameters of the camera according to the scene depth information and the vehicle behavior image; the vehicle pseudo-point cloud picture is a point cloud image of a three-dimensional space including vehicle behavior image features;
and identifying the vehicle pseudo-point cloud picture to obtain vehicle behavior information.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
generating a scene sparse depth map on a plane of the vehicle behavior image according to the scene depth information based on the camera internal parameters and the camera external parameters;
obtaining a scene dense depth map according to the scene sparse depth map and the vehicle behavior image;
and generating the vehicle pseudo-point cloud picture according to the scene dense depth picture based on the camera internal parameters and the camera external parameters.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
and inputting the scene sparse depth map and the vehicle behavior image into a preset depth completion network to obtain a scene dense depth map.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
and projecting the scene dense depth map back to a three-dimensional space based on the camera internal parameters and the camera external parameters to generate the vehicle pseudo-point cloud map.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
inputting the vehicle pseudo-point cloud picture into a preset three-dimensional target detection model to obtain vehicle behavior information; wherein the vehicle behavior information includes at least one of a vehicle position, a vehicle size, and a vehicle direction.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
and detecting a preset scene by adopting a laser radar to obtain the scene depth information of the preset scene.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
and obtaining a vehicle behavior detection result according to the vehicle behavior information and a preset judgment rule.
It should be clear that, in the embodiments of the present application, the process of executing the computer program by the processor is consistent with the process of executing the steps in the above method, and specific reference may be made to the description above.
In one embodiment, a computer-readable storage medium is provided, having a computer program stored thereon, which when executed by a processor, performs the steps of:
acquiring scene depth information of a preset scene; the scene depth information is point cloud information obtained by detecting static objects in a scene by a laser radar;
acquiring a vehicle behavior image; the vehicle behavior image is an image of a vehicle shot by a camera;
generating a vehicle pseudo-point cloud picture corresponding to the vehicle behavior image based on camera internal parameters and camera external parameters of the camera according to the scene depth information and the vehicle behavior image; the vehicle pseudo-point cloud picture is a point cloud image of a three-dimensional space including vehicle behavior image features;
and identifying the vehicle pseudo-point cloud picture to obtain vehicle behavior information.
In one embodiment, the computer program when executed by the processor further performs the steps of:
generating a scene sparse depth map on a plane of the vehicle behavior image according to the scene depth information based on the camera internal parameters and the camera external parameters;
obtaining a scene dense depth map according to the scene sparse depth map and the vehicle behavior image;
and generating the vehicle pseudo-point cloud picture according to the scene dense depth picture based on the camera internal parameters and the camera external parameters.
In one embodiment, the computer program when executed by the processor further performs the steps of:
and inputting the scene sparse depth map and the vehicle behavior image into a preset depth completion network to obtain a scene dense depth map.
In one embodiment, the computer program when executed by the processor further performs the steps of:
and projecting the scene dense depth map back to a three-dimensional space based on the camera internal parameters and the camera external parameters to generate the vehicle pseudo-point cloud map.
In one embodiment, the computer program when executed by the processor further performs the steps of:
inputting the vehicle pseudo-point cloud picture into a preset three-dimensional target detection model to obtain vehicle behavior information; wherein the vehicle behavior information includes at least one of a vehicle position, a vehicle size, and a vehicle direction.
In one embodiment, the computer program when executed by the processor further performs the steps of:
and detecting a preset scene by adopting a laser radar to obtain the scene depth information of the preset scene.
In one embodiment, the computer program when executed by the processor further performs the steps of:
and obtaining a vehicle behavior detection result according to the vehicle behavior information and a preset judgment rule.
It should be clear that, in the embodiments of the present application, the process of executing the computer program by the processor is consistent with the process of executing the steps in the above method, and specific reference may be made to the description above.
It will be understood by those skilled in the art that all or part of the processes of the methods of the above embodiments may be implemented by a computer program instructing the relevant hardware. The computer program may be stored in a non-volatile computer-readable storage medium and, when executed, may include the processes of the embodiments of the methods described above. Any reference to memory, storage, a database, or other media used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of these technical features are described; however, any such combination should be considered within the scope of this specification as long as it contains no contradiction.
The above embodiments express only several implementations of the present application, and their description is relatively specific and detailed, but they should not be construed as limiting the scope of the invention. It should be noted that several variations and modifications can be made by those of ordinary skill in the art without departing from the concept of the present application, all of which fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (10)

1. A vehicle behavior information acquisition method characterized by comprising:
acquiring scene depth information of a preset scene; the scene depth information is point cloud information obtained by detecting static objects in a scene by a laser radar;
acquiring a vehicle behavior image; the vehicle behavior image is an image of a vehicle shot by a camera;
generating a vehicle pseudo-point cloud picture corresponding to the vehicle behavior image based on camera internal parameters and camera external parameters of the camera according to the scene depth information and the vehicle behavior image; the vehicle pseudo-point cloud picture is a point cloud image of a three-dimensional space including vehicle behavior image features;
and identifying the vehicle pseudo-point cloud picture to obtain vehicle behavior information.
2. The method of claim 1, wherein generating a vehicle pseudo-point cloud map corresponding to the vehicle behavior image based on camera internal parameters and camera external parameters of the camera according to the scene depth information and the vehicle behavior image comprises:
generating a scene sparse depth map on a plane of the vehicle behavior image according to the scene depth information based on the camera internal parameters and the camera external parameters;
obtaining a scene dense depth map according to the scene sparse depth map and the vehicle behavior image;
and generating the vehicle pseudo-point cloud picture according to the scene dense depth picture based on the camera internal parameters and the camera external parameters.
3. The method of claim 2, wherein obtaining a scene dense depth map from the scene sparse depth map and the vehicle behavior image comprises:
and inputting the scene sparse depth map and the vehicle behavior image into a preset depth completion network to obtain a scene dense depth map.
4. The method of claim 2 or 3, wherein the generating the vehicle pseudo-point cloud map from the scene dense depth map based on the camera internal parameters and the camera external parameters comprises:
and projecting the scene dense depth map back to a three-dimensional space based on the camera internal parameters and the camera external parameters to generate the vehicle pseudo-point cloud map.
5. The method of claim 1, wherein the identifying the vehicle pseudo-point cloud map to obtain vehicle behavior information comprises:
inputting the vehicle pseudo-point cloud picture into a preset three-dimensional target detection model to obtain vehicle behavior information; wherein the vehicle behavior information includes at least one of a vehicle position, a vehicle size, and a vehicle direction.
6. The method of claim 1, wherein the obtaining scene depth information of the preset scene comprises:
and detecting a preset scene by adopting a laser radar to obtain the scene depth information of the preset scene.
7. The method of claim 1, further comprising:
and obtaining a vehicle behavior detection result according to the vehicle behavior information and a preset judgment rule.
8. A vehicle behavior information acquisition apparatus characterized by comprising:
the acquiring module is used for acquiring scene depth information of a preset scene and a vehicle behavior image; the scene depth information is point cloud information obtained by a laser radar detecting static objects in the scene, and the vehicle behavior image is an image of a vehicle captured by a camera;
the processing module is used for generating a vehicle pseudo-point cloud picture corresponding to the vehicle behavior image based on camera internal parameters and camera external parameters of the camera according to the scene depth information and the vehicle behavior image; the vehicle pseudo-point cloud picture is a point cloud image of a three-dimensional space including vehicle behavior image features;
and the identification module is used for identifying the vehicle pseudo-point cloud picture to obtain vehicle behavior information.
9. A computer device comprising a memory and a processor, the memory storing a computer program, wherein the processor implements the steps of the method of any one of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 7.
CN201911403414.8A 2019-12-30 2019-12-30 Vehicle behavior information acquisition method and device, computer equipment and storage medium Pending CN111161202A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911403414.8A CN111161202A (en) 2019-12-30 2019-12-30 Vehicle behavior information acquisition method and device, computer equipment and storage medium


Publications (1)

Publication Number Publication Date
CN111161202A true CN111161202A (en) 2020-05-15

Family

ID=70559794

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911403414.8A Pending CN111161202A (en) 2019-12-30 2019-12-30 Vehicle behavior information acquisition method and device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111161202A (en)


Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103093191A (en) * 2012-12-28 2013-05-08 中电科信息产业有限公司 Object recognition method with three-dimensional point cloud data and digital image data combined
JP2015142364A (en) * 2014-01-30 2015-08-03 キヤノン株式会社 Image processing device, imaging apparatus and image processing method
CN108229366A (en) * 2017-12-28 2018-06-29 北京航空航天大学 Deep learning vehicle-installed obstacle detection method based on radar and fusing image data
US20180357314A1 (en) * 2017-06-13 2018-12-13 TuSimple Time synchronization and data acquisition system for ground truth static scene sparse flow generation
EP3438872A1 (en) * 2017-08-04 2019-02-06 Bayerische Motoren Werke Aktiengesellschaft Method, apparatus and computer program for a vehicle
CN110344621A (en) * 2019-06-13 2019-10-18 武汉大学 A kind of wheel points cloud detection method of optic towards intelligent garage
CN110501021A (en) * 2019-08-27 2019-11-26 中国人民解放军国防科技大学 Odometer estimation method and system based on camera and laser radar fusion


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
HAOJIE LIU et al.: "PLIN: A Network for Pseudo-LiDAR Point Cloud Interpolation", pages 1-7 *
JIANG Wenting; GONG Xiaojin; LIU Jilin: "Construction of large-scale dense semantic maps based on incremental computation", Journal of Zhejiang University (Engineering Science), no. 02, 15 February 2016 (2016-02-15) *
WANG Dongmin; PENG Yongsheng; LI Yongle: "Depth image acquisition method fusing vision and laser point clouds", Journal of Military Transportation University, no. 10, 25 October 2017 (2017-10-25) *

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111739005B (en) * 2020-06-22 2023-08-08 北京百度网讯科技有限公司 Image detection method, device, electronic equipment and storage medium
CN111739005A (en) * 2020-06-22 2020-10-02 北京百度网讯科技有限公司 Image detection method, image detection device, electronic equipment and storage medium
US11315271B2 (en) 2020-09-30 2022-04-26 Tsinghua University Point cloud intensity completion method and system based on semantic segmentation
CN112184589A (en) * 2020-09-30 2021-01-05 清华大学 Point cloud intensity completion method and system based on semantic segmentation
CN112184589B (en) * 2020-09-30 2021-10-08 清华大学 Point cloud intensity completion method and system based on semantic segmentation
CN113284173A (en) * 2021-04-20 2021-08-20 中国矿业大学 End-to-end scene flow and pose joint learning method based on pseudo laser radar
CN113284173B (en) * 2021-04-20 2023-12-19 中国矿业大学 End-to-end scene flow and pose joint learning method based on false laser radar
CN113362444B (en) * 2021-05-21 2023-06-16 北京百度网讯科技有限公司 Point cloud data generation method and device, electronic equipment and storage medium
CN113362444A (en) * 2021-05-21 2021-09-07 北京百度网讯科技有限公司 Point cloud data generation method and device, electronic equipment and storage medium
CN113486887A (en) * 2021-06-30 2021-10-08 杭州飞步科技有限公司 Target detection method and device in three-dimensional scene
CN113486887B (en) * 2021-06-30 2023-12-22 杭州飞步科技有限公司 Target detection method and device in three-dimensional scene
CN114310886A (en) * 2021-12-28 2022-04-12 深圳中智永浩机器人有限公司 Human leg recognition method and device, computer equipment and storage medium
CN115482672A (en) * 2022-08-11 2022-12-16 西安天和防务技术股份有限公司 Vehicle reverse running detection method and device, terminal equipment and storage medium
CN115482672B (en) * 2022-08-11 2023-12-19 西安天和防务技术股份有限公司 Method, device, terminal equipment and storage medium for detecting vehicle reverse running

Similar Documents

Publication Publication Date Title
CN111161202A (en) Vehicle behavior information acquisition method and device, computer equipment and storage medium
CN110458112B (en) Vehicle detection method and device, computer equipment and readable storage medium
CN111160172B (en) Parking space detection method, device, computer equipment and storage medium
CN112232293B (en) Image processing model training method, image processing method and related equipment
CN111797650B (en) Obstacle identification method, obstacle identification device, computer equipment and storage medium
CN110459064B (en) Vehicle illegal behavior detection method and device and computer equipment
US11087138B2 (en) Vehicle damage assessment method, apparatus, and device
CN110634153A (en) Target tracking template updating method and device, computer equipment and storage medium
CN110490902B (en) Target tracking method and device applied to smart city and computer equipment
CN110751149B (en) Target object labeling method, device, computer equipment and storage medium
WO2023016271A1 (en) Attitude determining method, electronic device, and readable storage medium
CN108304243B (en) Interface generation method and device, computer equipment and storage medium
CN111915483A (en) Image splicing method and device, computer equipment and storage medium
CN113673530A (en) Remote sensing image semantic segmentation method and device, computer equipment and storage medium
CN111832561B (en) Character sequence recognition method, device, equipment and medium based on computer vision
WO2023016182A1 (en) Pose determination method and apparatus, electronic device, and readable storage medium
CN111144404B (en) Method, apparatus, system, computer device and storage medium for detecting legacy object
CN114267041B (en) Method and device for identifying object in scene
CN115035580A (en) Figure digital twinning construction method and system
CN110516559B (en) Target tracking method and device suitable for accurate monitoring and computer equipment
CN111340837A (en) Image processing method, device, equipment and storage medium
CN115511870A (en) Object detection method and device, electronic equipment and storage medium
CN112444251A (en) Vehicle driving position determining method and device, storage medium and computer equipment
CN111721283A (en) Precision detection method and device of positioning algorithm, computer equipment and storage medium
CN112241705A (en) Target detection model training method and target detection method based on classification regression

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination