CN111695480A - Real-time target detection and 3D positioning method based on single-frame image - Google Patents

Real-time target detection and 3D positioning method based on single-frame image

Info

Publication number
CN111695480A
CN111695480A (application CN202010500784.XA)
Authority
CN
China
Prior art keywords
target
dimensional
loss function
box
depth
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010500784.XA
Other languages
Chinese (zh)
Other versions
CN111695480B (en)
Inventor
周喜川
龙春桥
彭逸聪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing University
Original Assignee
Chongqing University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing University filed Critical Chongqing University
Priority to CN202010500784.XA priority Critical patent/CN111695480B/en
Publication of CN111695480A publication Critical patent/CN111695480A/en
Application granted granted Critical
Publication of CN111695480B publication Critical patent/CN111695480B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462 Salient features, e.g. scale invariant feature transforms [SIFT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07 Target detection
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/10 Internal combustion engine [ICE] based vehicles
    • Y02T10/40 Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a real-time target detection and 3D positioning method based on a single-frame image, and belongs to the technical field of visual processing. The method comprises the following steps: S1: inputting a two-dimensional RGB image; S2: extracting features from the two-dimensional RGB image, separately extracting deep-network features and shallow-network features; S3: performing two-dimensional target recognition, the results of which are used by the subsequent modules; S4: estimating the vertices of the three-dimensional frame, the instance-level depth information and the center point of the three-dimensional frame, respectively; S5: adding a horizontal locality preserving regularization term to the three-dimensional frame center point prediction, thereby constraining and optimizing the center point prediction; S6: combining the predictions of all modules and outputting a two-dimensional RGB image annotated with 3D-Boxes. The method improves the convergence speed of model training and the accuracy of 3D target recognition and positioning, while meeting the accuracy requirement of ADAS schemes at low hardware cost.

Description

Real-time target detection and 3D positioning method based on single-frame image
Technical Field
The invention belongs to the technical field of visual processing, and relates to a real-time target detection and 3D positioning method based on a single-frame image.
Background
3D target detection and positioning based on machine vision mainly consists of acquiring image or point cloud information with a sensor, extracting feature information of the targets in the image or point cloud with a convolutional neural network, and regressing the three-dimensional information of each target from these features, namely the coordinates of the target's center point, the length, width and height of its three-dimensional frame, and the relative relation between the three-dimensional frame and the camera position. Finally, the three-dimensional information of the target is represented in the image in the form of a three-dimensional frame.
In recent years, the rapid development of deep learning methods has enabled researchers and engineers to build accurate and cost-effective driver-assistance systems (ADAS). Depending on the sensor, existing approaches can be divided into lidar-based 3D target detection and camera-based 3D target detection. Camera-based 3D target detection can in turn be divided into methods based on the multi-frame image parallax principle and methods based on a single-frame image.
Lidar-based 3D target detection has developed rapidly since the first lidar-based three-dimensional recognition papers were published in 2016. The application of deep learning to point-cloud-based 3D object detection was first proposed by Professor Charles's group in 2017. Since then, well-known companies and universities such as Google, Uber, the Chinese University of Hong Kong and Shanghai Jiao Tong University have carried out different studies on lidar-based 3D target detection. Thanks to the high precision of point cloud data, these methods achieve good 3D detection accuracy. However, the high cost of lidar makes this approach impractical for driver assistance.
In recent years, 3D target detection methods based on the multi-frame image parallax principle have also developed rapidly, and novel methods are not lacking: a professor's team at Tsinghua University optimized disparity estimation by integrating semantic information, and Professor Bai Xiao's team at a Beijing university addressed the overfitting problem in disparity estimation by treating it as a regression problem. Although multi-frame parallax techniques are maturing, they have no cost advantage in ADAS applications because of their large sensor overhead, high computational complexity and high hardware cost.
Researchers have also continued to propose algorithms for 3D target localization from a single-frame image. In 2018, Roddick proposed the OFT-Net network, which maps image features into an orthographic 3D space for 3D target detection, and in 2019 researchers continued to improve and optimize single-frame 3D target detection. However, the accuracy of single-frame 3D object detection is still lower than what driving assistance requires.
As computer-vision-based driving assistance has advanced, the demand for low-power, high-efficiency ADAS keeps growing. Existing 3D detection algorithms based on lidar or on the multi-frame image parallax principle cannot meet the requirements on power consumption and cost. Single-frame 3D object detection has clear advantages in power consumption and cost, but existing algorithms are far from accurate enough and focus mainly on depth estimation. For 3D object detection, however, the prediction of horizontal information is equally important, and existing algorithms do not account for it adequately.
Disclosure of Invention
In view of the above, the present invention provides a real-time target detection and 3D positioning method based on a single-frame image. The recognition accuracy of the whole 3D-Box is increased by constraining the prediction of horizontal information, and the spatial geometric correlation of adjacent targets is introduced into deep neural network training as a regularization term, which improves the convergence speed of model training and the accuracy of 3D target recognition and positioning, while meeting the accuracy requirement of ADAS schemes at low hardware cost.
In order to achieve the purpose, the invention provides the following technical scheme:
a real-time target detection and 3D positioning method based on a single frame image comprises the following steps:
s1: inputting a two-dimensional RGB image;
s2: extracting the features of the two-dimensional RGB image, and respectively extracting the features of a deep layer network and the features of a shallow layer network;
s3: performing two-dimensional target recognition, the results of which are used by the subsequent modules;
s4: estimating the vertices of the three-dimensional frame, the instance-level depth information and the center point of the three-dimensional frame, respectively;
s5: adding a horizontal locality preserving regularization term into the three-dimensional frame center point prediction, thereby constraining and optimizing the prediction of the three-dimensional frame center point;
s6: and combining the predictions of all the modules to output a two-dimensional RGB image with a 3D-Box mark.
Optionally, in step S5, the method for increasing the recognition accuracy of the overall 3D-Box by constraining the prediction of the horizontal information by using a regularization algorithm for horizontal geometric locality preservation includes the following steps:
s51: designing the horizontal geometric locality preserving hypothesis as a regularization term of the 3D-Box center point loss function, and assuming that M target samples exist in the image; the matrix S = {s_ij} defines an M × M adjacency matrix, also called a weight matrix, whose expression is given by formula (1):

[Formula (1) is reproduced only as an image in the original publication.]

wherein s_ij represents the horizontal adjacency metric between targets of similar depth, i, j = {1, ..., M} index the i-th and j-th targets, and the adjacency measure is computed from the horizontal offsets of the two targets on the two-dimensional image, a custom parameter, and the real depth of each target's 3D-box center point;
s52: applying the similarity relation defined by formula (1) to the fully connected layer of the neural network that predicts the 3D-Box center point; at this layer the feature information of a target is expressed as y_i = W x_i + b, where x_i is the input of the fully connected layer, W is the connection weight, and b is the bias vector; assuming that training targets which are adjacent in 3D depth and in the 2D horizontal direction also have similar 3D horizontal offsets, the whole network tries to estimate the optimal connection weight W accordingly; the regularization term R(W) is defined as the feature difference of adjacent target pairs, as shown in formula (2):

R(W) = β Σ_{i,j=1..M} s_ij ||W x_i - W x_j||²    (2)

where β is a custom parameter; the more adjacent the sample pair (i, j), the larger the adjacency measure s_ij, and when the whole loss function is minimized a larger s_ij reduces the difference between W x_i and W x_j more quickly, thereby carrying the adjacency of target pairs in two-dimensional space over to three-dimensional space; R(W) is added to the overall loss function of the single-frame image three-dimensional target detection network, and the overall loss function L of the network is finally expressed as:

L = L_2d + L_depth + L_3d + R(W)

each component loss is defined with an L1 or L2 error;

wherein L_2d denotes the 2D target detection loss, i.e. the sum of the target confidence loss and the 2D-Box loss;

L_depth denotes the depth information loss: the depth losses of the deep network and the shallow network are each computed with the L1 loss and linked through a weight r to form the final depth information loss;

L_3d denotes the 3D loss, divided into the 3D-Box loss and its center point loss, both represented by L1 losses (a code sketch combining these terms follows).
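As an illustration of how these loss terms could be assembled in practice, the following Python (PyTorch) sketch combines L_2d, L_depth, L_3d and the regularization term into the overall loss L. It is a minimal sketch under assumptions: the tensor names, the dictionary keys and the value of the linking weight r are illustrative and are not specified in the patent text.

```python
import torch
import torch.nn.functional as F

def depth_loss(pred_deep, pred_shallow, target, r=0.5):
    # L_depth: L1 depth losses of the deep and shallow branches, linked by the weight r
    # (the value 0.5 is illustrative only).
    return F.l1_loss(pred_deep, target) + r * F.l1_loss(pred_shallow, target)

def overall_loss(outputs, targets, regularizer):
    # L_2d: target confidence loss (softmax + cross entropy) plus L1 loss of the 2D-Box.
    l_2d = F.cross_entropy(outputs["cls_logits"], targets["cls"]) \
         + F.l1_loss(outputs["box2d"], targets["box2d"])
    # L_depth: deep- and shallow-network depth losses linked by the weight r.
    l_depth = depth_loss(outputs["depth_deep"], outputs["depth_shallow"], targets["depth"])
    # L_3d: L1 losses of the 3D-Box vertices and of its center point.
    l_3d = F.l1_loss(outputs["box3d"], targets["box3d"]) \
         + F.l1_loss(outputs["center3d"], targets["center3d"])
    # Overall loss L = L_2d + L_depth + L_3d + R(W); `regularizer` is the precomputed R(W) term.
    return l_2d + l_depth + l_3d + regularizer
```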
Optionally, in formula (1), when the depths of two targets are similar and the targets are more adjacent, the weight s_ij is larger; if the depth distance of the target pair is larger, or the horizontal distance difference of the target pair is larger, the weight s_ij is smaller.
Optionally, the loss function of the target confidence is a combination of a softmax function and a cross entropy; the loss function of the 2D-Box is calculated by the L1 distance loss function.
Optionally, the L1 loss function minimizes the sum S of absolute differences between the target value Y_i and the estimated value f(x_i):

S = Σ_{i=1..n} |Y_i - f(x_i)|

and the L2 loss function minimizes the sum S' of squared differences between the target value Y_i' and the estimated value f(x_i)':

S' = Σ_{i=1..n} (Y_i' - f(x_i)')²
alternatively, the 3D-Box will be represented by the three-dimensional center point of the object, and the 8 vertex coordinate points of its 3D-Box bounding Box.
Optionally, in S5, adding a regularization term to the three-dimensional target neural network specifically includes the following steps:
s511: selecting a proper neural network model and loading the weight of the model;
s512: adding a proposed horizontal geometric locality maintenance regularization term into a loss function of a 3D-Box estimation module, and setting parameters in an R (W) function;
s513: updating the weights with the stochastic gradient descent (SGD) algorithm with momentum until the model converges;
s514: loading the trained neural network weights into the neural network or an edge computing device, completing the whole network deployment.
Optionally, in S5, the applying the horizontal geometric locality preserving regularization term in an embedded system specifically includes the following steps:
s5111: a single camera is used for shooting a single frame image;
s5112: transmitting the single-frame image to embedded equipment for analysis and calculation;
s5113: identifying and three-dimensionally positioning a plurality of targets in the image;
s5114: and finally, transmitting the identified and positioned image out.
Optionally, the embedded system is a Jetson AGX Xavier.
Optionally, the instance-level depth information is the depth z_g of the three-dimensional frame center point predicted by the instance-level depth prediction module, obtained as follows: after the feature map is divided into grids, the depth prediction module only predicts the target depth for grid cells whose distance to the instance is less than a distance threshold σ_scope.
The invention has the beneficial effects that:
1. first, there is a higher accuracy in 3D single frame object detection than existing algorithms. At IoU ═ 0.7, other 3D single frame object detection algorithms in this category of cars were up to 20.27% in Easy mode (object width greater than 40 pixels and object not occluded), reaching 22.73% under the same conditions.
2. Secondly, the manifold learning-based horizontal geometric locality preserving regularization method is in accordance with the geometric principle, so that the method can be applied to other similar methods, and the accuracy of the corresponding method is improved.
3. Finally, thanks to the simplicity of the network, the method achieves 27.85FPS on the server, achieving the real-time requirement, and at the edge, the method can achieve 7.90FPS while maintaining the same accuracy.
Additional advantages, objects, and features of the invention will be set forth in part in the description which follows and in part will become apparent to those having ordinary skill in the art upon examination of the following or may be learned from practice of the invention. The objectives and other advantages of the invention may be realized and attained by the means of the instrumentalities and combinations particularly pointed out hereinafter.
Drawings
For the purposes of promoting a better understanding of the objects, aspects and advantages of the invention, reference will now be made to the following detailed description taken in conjunction with the accompanying drawings in which:
FIG. 1 is a representation of 3D object localization and 3D-Box;
FIG. 2 is a case of 3D object localization;
FIG. 3 is a flow chart of a single frame three dimensional target detection algorithm;
FIG. 4 is a flow chart of a single frame three dimensional target detection network;
FIG. 5 is a block diagram of a single-frame three-dimensional target detection embedded system.
Detailed Description
The embodiments of the present invention are described below with reference to specific embodiments, and other advantages and effects of the present invention will be easily understood by those skilled in the art from the disclosure of the present specification. The invention is capable of other and different embodiments and of being practiced or of being carried out in various ways, and its several details are capable of modification in various respects, all without departing from the spirit and scope of the present invention. It should be noted that the drawings provided in the following embodiments are only for illustrating the basic idea of the present invention in a schematic way, and the features in the following embodiments and examples may be combined with each other without conflict.
The drawings are provided only to illustrate the invention and not to limit it; to better illustrate the embodiments of the present invention, some parts of the drawings may be omitted, enlarged or reduced and do not represent the size of an actual product; it will be understood by those skilled in the art that certain well-known structures in the drawings, and descriptions thereof, may be omitted.
The same or similar reference numerals in the drawings of the embodiments of the present invention correspond to the same or similar components; in the description of the present invention, it should be understood that if there is an orientation or positional relationship indicated by terms such as "upper", "lower", "left", "right", "front", "rear", etc., based on the orientation or positional relationship shown in the drawings, it is only for convenience of description and simplification of description, but it is not an indication or suggestion that the referred device or element must have a specific orientation, be constructed in a specific orientation, and be operated, and therefore, the terms describing the positional relationship in the drawings are only used for illustrative purposes, and are not to be construed as limiting the present invention, and the specific meaning of the terms may be understood by those skilled in the art according to specific situations.
Please refer to fig. 1 to 5, which illustrate a real-time target detection and 3D positioning method based on a single frame image.
As shown in fig. 1, a single-frame RGB image from a single camera is input, all cars in the image are predicted, and the three-dimensional position information of each car is returned in the form of a 3D-Box. The 3D-Box of an arbitrary object is represented by the three-dimensional center point of the object and the 8 vertex coordinates of its 3D bounding box. To replace lidar, the key to image-based 3D target recognition is predicting the center point of the 3D-Box. The height of the target is not a key factor affecting accuracy, since vehicle heights vary little in real driving. For depth information, existing algorithms have already made good progress and can provide reliable estimates. The prediction of the horizontal information of the 3D-Box center point is therefore crucial for predicting the entire 3D-Box.
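To make the 3D-Box representation concrete, the following Python sketch computes the 8 vertex coordinates of a box from its center, dimensions and yaw angle. This center/dimensions/yaw parameterization and the axis convention are assumptions introduced for illustration; the patent itself only states that the 3D-Box is represented by the center point and the 8 vertex coordinates.

```python
import numpy as np

def box3d_corners(center, dims, yaw):
    """Return the 8 vertex coordinates (3 x 8) of a 3D box.

    center: (x, y, z) coordinates of the box center (assumed frame).
    dims:   (length, width, height) of the box (assumed parameterization).
    yaw:    rotation around the vertical axis in radians (assumed).
    """
    l, w, h = dims
    # Corner offsets in the box's local frame; the ordering of the 8 vertices is arbitrary here.
    x = np.array([ l,  l, -l, -l,  l,  l, -l, -l]) / 2.0
    y = np.array([ w, -w, -w,  w,  w, -w, -w,  w]) / 2.0
    z = np.array([ h,  h,  h,  h, -h, -h, -h, -h]) / 2.0
    corners = np.stack([x, y, z])                       # shape (3, 8)
    c, s = np.cos(yaw), np.sin(yaw)
    rot = np.array([[c, -s, 0.0],                       # rotation about the vertical (z) axis
                    [s,  c, 0.0],
                    [0.0, 0.0, 1.0]])
    return rot @ corners + np.asarray(center, dtype=float).reshape(3, 1)
```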
As shown in fig. 2, A and C are closer to each other on the two-dimensional image than A and B, so it is assumed that the horizontal distance between A and C should also be smaller in three-dimensional space. Based on this, the accuracy of the overall 3D-Box recognition is increased by constraining the prediction of horizontal information with a regularization algorithm that preserves horizontal geometric locality.
The regularization algorithm principle of horizontal geometric locality preservation is explained as follows:
the horizontal geometric locality preserving assumption is designed as a regularization term of a 3D-Box center point loss function, and the main implementation method can be expressed as follows. Assume that there are M target samples within the image. Matrix S ═ SijThe M × M neighbor matrix is defined, which can also be referred to as a weight matrix.
Figure BDA0002524611150000061
Wherein s isijRepresents a horizontal adjacency metric between the near depth target and the target, i, j ═ { 1., M } represents the ith, j-th target,
Figure BDA0002524611150000062
and
Figure BDA0002524611150000063
is the horizontal offset of the target and the target on the two-dimensional image, is a custom parameter,
Figure BDA0002524611150000064
real depth information of the center point of the 3D-box of the object. As can be seen from equation (1), the weight s is given when the depth distances of the object and the target are similar and the object and the target are more adjacentijWill be larger; the weight s is given if the depth distance of the target pair is large or the horizontal distance difference of the target pair is largeijThe smaller will be.
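Since formula (1) is only available as an image in the original document, the following Python sketch builds the adjacency (weight) matrix S under an assumed heat-kernel form chosen to match the textual description: the weight grows when two targets are horizontally close on the image and have similar depths, and vanishes otherwise. The exponential form, the parameter sigma and the depth gating are assumptions, not the patent's exact formula.

```python
import numpy as np

def adjacency_matrix(x_offsets, depths, sigma=1.0, depth_scope=5.0):
    """Assumed M x M weight matrix S = {s_ij} for the M targets in one image.

    x_offsets:   horizontal offsets of the targets on the 2D image, shape (M,).
    depths:      real depths of the 3D-Box center points, shape (M,).
    sigma:       custom parameter of the adjacency measure (assumed role).
    depth_scope: only pairs closer in depth than this are treated as near-depth
                 neighbors (assumed gating, following the textual description).
    """
    x = np.asarray(x_offsets, dtype=float)
    z = np.asarray(depths, dtype=float)
    dx = x[:, None] - x[None, :]            # pairwise horizontal differences
    dz = np.abs(z[:, None] - z[None, :])    # pairwise depth differences
    s = np.exp(-(dx ** 2) / sigma)          # horizontally closer -> larger weight
    s[dz > depth_scope] = 0.0               # far apart in depth -> no adjacency
    np.fill_diagonal(s, 0.0)                # ignore self-pairs
    return s
```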
The similarity relation defined by formula (1) is applied to the fully connected layer of the neural network that predicts the 3D-Box center point. At this layer, the feature information of a target can be expressed as y_i = W x_i + b, where x_i is the input of the fully connected layer, W is the connection weight, and b is the bias vector. Assuming that training targets which are adjacent in 3D depth and in the 2D horizontal direction also have similar 3D horizontal offsets, the whole network will try to estimate the best connection weights W accordingly. The regularization term R(W) is therefore defined as the feature difference of adjacent target pairs, as in formula (2):

R(W) = β Σ_{i,j=1..M} s_ij ||W x_i - W x_j||²    (2)

where β is a custom parameter. The more adjacent the sample pair (i, j), the larger the adjacency measure s_ij; when the whole loss function is minimized, a larger s_ij reduces the difference between W x_i and W x_j more quickly, thereby carrying the adjacency of target pairs in two-dimensional space over to three-dimensional space. Finally, R(W) is added to the overall loss function of the single-frame image three-dimensional target detection network, and the overall loss function L of the network can be expressed as:

L = L_2d + L_depth + L_3d + R(W)

The component loss functions can be defined with the common L1 or L2 errors.
The L1 loss function minimizes the sum S of absolute differences between the target value Y_i and the estimated value f(x_i):

S = Σ_{i=1..n} |Y_i - f(x_i)|

The L2 loss function minimizes the sum S' of squared differences between the target value Y_i' and the estimated value f(x_i)':

S' = Σ_{i=1..n} (Y_i' - f(x_i)')²
L_2d: the 2D target detection loss, consisting mainly of the sum of the target confidence loss and the 2D-Box loss. The target confidence loss combines a softmax function with cross entropy, and the 2D-Box loss is computed with the L1 distance loss.
L_depth: the depth information loss computes the depth losses of the deep neural network and the shallow neural network separately with the L1 loss, and links the two through a weight r to form the final depth information loss.
L_3d: the 3D loss is divided into the 3D-Box loss and its center point loss, both represented by L1 losses.
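The following PyTorch sketch shows how the locality preserving term R(W) could be computed on the fully connected layer that predicts the 3D-Box center point and then added to the detection losses. It is a sketch under assumptions: the layer and tensor names are illustrative, and the weighted squared-difference form follows the reconstruction of formula (2) given above rather than the original image.

```python
import torch
import torch.nn.functional as F

def locality_regularizer(fc_layer, x, S, beta=0.01):
    """R(W) = beta * sum_ij s_ij * ||W x_i - W x_j||^2 (reconstructed form).

    fc_layer: the fully connected layer predicting the 3D-Box center point (torch.nn.Linear).
    x:        its input features for the M targets in the image, shape (M, in_dim).
    S:        adjacency weight matrix as a tensor, shape (M, M).
    beta:     custom parameter beta (the value is illustrative).
    """
    wx = F.linear(x, fc_layer.weight)           # W x_i; the bias b cancels in the differences
    diff = wx.unsqueeze(1) - wx.unsqueeze(0)    # (M, M, out_dim) pairwise differences
    return beta * (S * diff.pow(2).sum(dim=-1)).sum()

# Usage sketch: add R(W) to the detection losses before back-propagation, e.g.
# loss = l_2d + l_depth + l_3d + locality_regularizer(center_fc, center_features, S)
```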
Compared with existing single-frame 3D target detection algorithms, the method improves the accuracy of the 3D-Box in the horizontal direction of its center, thereby improving the accuracy of 3D target detection while keeping the real-time performance required in ADAS application scenarios. Table 1 shows the experimental results of different methods on the KITTI dataset.
TABLE 1 comparison of results of 3D target detection experiments by different methods
[Table 1 is reproduced only as an image in the original publication.]
1. First, the method achieves higher accuracy in single-frame 3D object detection than existing algorithms. At IoU = 0.7 on the Car category in Easy mode (object width greater than 40 pixels and object not occluded), the best existing single-frame 3D object detection algorithms reach 20.27%, while the proposed method reaches 22.73% under the same conditions.
2. Second, the manifold-learning-based horizontal geometric locality preserving regularization follows a general geometric principle, so it can be applied to other similar methods and improve their accuracy.
3. Finally, thanks to the simplicity of the network, the method achieves 27.85 FPS on a server, meeting the real-time requirement, and on an edge device it can reach 7.90 FPS while maintaining the same accuracy.
Fig. 3 is a flowchart of the single-frame three-dimensional target detection method, which can be briefly represented by the following steps.
1. A two-dimensional RGB image is input.
2. And extracting the features of the two-dimensional RGB image, and respectively extracting the features of the deep layer network and the features of the shallow layer network.
3. Two-dimensional target recognition is performed, and its results are used by the following modules.
4. The vertices of the three-dimensional frame, the instance-level depth information and the center point of the three-dimensional frame are estimated respectively.
5. Adding a horizontal locality preserving regularization term to the three-dimensional box center point prediction to constrain and optimize the prediction of the three-dimensional box center point.
6. And finally, combining the predictions of all modules, and outputting a two-dimensional RGB image with a 3D-Box mark.
The instance-level depth information is the depth z_g of the three-dimensional box center point predicted by the instance-level depth prediction module, obtained as follows: after the feature map is divided into grids, the depth prediction module only predicts the target depth for grid cells whose distance to the instance is less than the distance threshold σ_scope.
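To make the grid-based instance-level depth prediction concrete, the following Python sketch builds a mask that keeps only the feature-map cells whose distance to an instance is below the threshold sigma_scope. The grid resolution, the use of instance centers and the Euclidean distance in grid units are assumptions introduced for illustration.

```python
import numpy as np

def instance_depth_mask(grid_h, grid_w, instance_centers, sigma_scope=2.0):
    """Boolean mask of grid cells that participate in instance-level depth prediction.

    grid_h, grid_w:   size of the grid the feature map is divided into.
    instance_centers: list of (row, col) grid coordinates of instance centers (assumed).
    sigma_scope:      distance threshold; only cells closer than this to some instance
                      are asked to predict the target depth z_g.
    """
    rows, cols = np.mgrid[0:grid_h, 0:grid_w]
    mask = np.zeros((grid_h, grid_w), dtype=bool)
    for r, c in instance_centers:
        dist = np.sqrt((rows - r) ** 2 + (cols - c) ** 2)
        mask |= dist < sigma_scope
    return mask
```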
Furthermore, this regularization term can be added to most three-dimensional target detection neural networks; specifically, the following steps are required (see the flowchart of fig. 4; a training sketch follows these steps).
Step 1: an appropriate neural network model is selected and the weights of the model are loaded.
Step 2: the proposed horizontal geometric locality preserving regularization term is added to the loss function of the 3D-Box estimation module, and the parameters of the R(W) function are set.
Step 3: the weights are updated with the stochastic gradient descent (SGD) algorithm with momentum until the model converges.
Step 4: the trained neural network weights are loaded onto the inference network or an edge computing device. At this point, the entire network deployment is complete.
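A minimal training-loop sketch for steps 1 to 4 is given below, assuming PyTorch. A pretrained model is loaded, the locality preserving term R(W) is added to the loss, and the weights are updated with SGD with momentum. The model interface, data loader, attribute names (such as model.center_fc and the "center_features" output) and hyperparameter values are illustrative assumptions, not details from the patent.

```python
import torch

def train(model, dataloader, detection_loss, locality_regularizer, build_adjacency,
          epochs=20, lr=1e-3, momentum=0.9, checkpoint="pretrained.pth"):
    # Step 1: select a suitable model and load its weights (the checkpoint path is illustrative).
    model.load_state_dict(torch.load(checkpoint))
    # Step 3: stochastic gradient descent with momentum.
    optimizer = torch.optim.SGD(model.parameters(), lr=lr, momentum=momentum)
    model.train()
    for _ in range(epochs):
        for images, targets in dataloader:
            outputs = model(images)
            # Step 2: add the horizontal geometric locality preserving term R(W)
            # to the loss of the 3D-Box estimation module.
            S = build_adjacency(targets)  # adjacency matrix of the targets in this batch
            loss = detection_loss(outputs, targets) \
                 + locality_regularizer(model.center_fc, outputs["center_features"], S)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    # Step 4: save the trained weights for deployment on the edge computing device.
    torch.save(model.state_dict(), "trained.pth")
```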
As shown in fig. 5, the horizontal geometric locality preserving regularization method can be applied to an embedded system. A single camera captures a single-frame image, the image is transmitted to the embedded device for analysis and computation, multiple targets in the image are recognized and localized in three dimensions, and the recognized and localized image is finally output. In this system, the embedded platform is a Jetson AGX Xavier, an embedded edge computing device released by NVIDIA in 2018. The detection rate of the method on the Xavier reaches 7.90 FPS. The horizontal geometric locality preserving regularization is applied here to a single-frame image, but it can still be used to improve recognition accuracy when multi-frame images or radar point cloud data are used.
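The capture-infer-output loop of fig. 5 could be organized as in the following Python sketch, using OpenCV for camera capture and display. The model interface, the drawing routine and the camera index are assumptions for illustration; an actual deployment on the Jetson AGX Xavier would typically use an optimized inference runtime.

```python
import cv2

def run_embedded_pipeline(model, draw_boxes, camera_index=0):
    """Capture single frames, run 3D detection on the device, and output annotated images."""
    cap = cv2.VideoCapture(camera_index)
    try:
        while True:
            ok, frame = cap.read()                 # single-frame image from the single camera
            if not ok:
                break
            boxes_3d = model(frame)                # recognize and localize targets in 3D (assumed interface)
            annotated = draw_boxes(frame, boxes_3d)
            cv2.imshow("3D detection", annotated)  # output the recognized and localized image
            if cv2.waitKey(1) == 27:               # ESC key stops the loop
                break
    finally:
        cap.release()
        cv2.destroyAllWindows()
```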
Finally, the above embodiments are only intended to illustrate the technical solutions of the present invention and not to limit the present invention, and although the present invention has been described in detail with reference to the preferred embodiments, it will be understood by those skilled in the art that modifications or equivalent substitutions may be made on the technical solutions of the present invention without departing from the spirit and scope of the technical solutions, and all of them should be covered by the claims of the present invention.

Claims (10)

1. The real-time target detection and 3D positioning method based on the single frame image is characterized in that: the method comprises the following steps:
s1: inputting a two-dimensional RGB image;
s2: extracting the features of the two-dimensional RGB image, and respectively extracting the features of a deep layer network and the features of a shallow layer network;
s3: performing two-dimensional target recognition, the results of which are used by the subsequent modules;
s4: estimating the vertices of the three-dimensional frame, the instance-level depth information and the center point of the three-dimensional frame, respectively;
s5: adding a horizontal locality preserving regularization term into the three-dimensional frame center point prediction, thereby constraining and optimizing the prediction of the three-dimensional frame center point;
s6: and combining the predictions of all the modules to output a two-dimensional RGB image with a 3D-Box mark.
2. The real-time target detection and 3D positioning method based on single frame image as claimed in claim 1, wherein: in step S5, the method for increasing the recognition accuracy of the whole 3D-Box by constraining the prediction of the horizontal information by using the regularization algorithm for horizontal geometric locality preservation includes the following steps:
s51: designing the horizontal geometric locality preserving hypothesis as a regularization term of the 3D-Box center point loss function, and assuming that M target samples exist in the image; the matrix S = {s_ij} defines an M × M adjacency matrix, also called a weight matrix, whose expression is given by formula (1):

[Formula (1) is reproduced only as an image in the original publication.]

wherein s_ij represents the horizontal adjacency metric between targets of similar depth, i, j = {1, ..., M} index the i-th and j-th targets, and the adjacency measure is computed from the horizontal offsets of the two targets on the two-dimensional image, a custom parameter, and the real depth of each target's 3D-box center point;
s52: applying the similarity relation defined by formula (1) to the fully connected layer of the neural network that predicts the 3D-Box center point; at this layer the feature information of a target is expressed as y_i = W x_i + b, where x_i is the input of the fully connected layer, W is the connection weight, and b is the bias vector; assuming that training targets which are adjacent in 3D depth and in the 2D horizontal direction also have similar 3D horizontal offsets, the whole network tries to estimate the optimal connection weight W accordingly; the regularization term R(W) is defined as the feature difference of adjacent target pairs, as shown in formula (2):

R(W) = β Σ_{i,j=1..M} s_ij ||W x_i - W x_j||²    (2)

where β is a custom parameter; the more adjacent the sample pair (i, j), the larger the adjacency measure s_ij, and when the whole loss function is minimized a larger s_ij reduces the difference between W x_i and W x_j more quickly, thereby carrying the adjacency of target pairs in two-dimensional space over to three-dimensional space; R(W) is added to the overall loss function of the single-frame image three-dimensional target detection network, and the overall loss function L of the network is finally expressed as:

L = L_2d + L_depth + L_3d + R(W)

each component loss is defined with an L1 or L2 error;

wherein L_2d denotes the 2D target detection loss, i.e. the sum of the target confidence loss and the 2D-Box loss;

L_depth denotes the depth information loss: the depth losses of the deep neural network and the shallow neural network are each computed with the L1 loss and linked through a weight r to form the final depth information loss;

L_3d denotes the 3D loss, divided into the 3D-Box loss and its center point loss, both represented by L1 losses.
3. The real-time target detection and 3D positioning method based on single frame image as claimed in claim 2, wherein: in formula (1), when the depths of two targets are similar and the targets are more adjacent, the weight s_ij is larger; if the depth distance of the target pair is larger, or the horizontal distance difference of the target pair is larger, the weight s_ij is smaller.
4. The real-time target detection and 3D positioning method based on single frame image as claimed in claim 2, wherein: the loss function of the target confidence coefficient is the combination of a softmax function and cross entropy; the loss function of the 2D-Box is calculated by the L1 distance loss function.
5. The real-time target detection and 3D positioning method based on single frame image as claimed in claim 2, wherein: the L1 loss function minimizes the sum S of absolute differences between the target value Y_i and the estimated value f(x_i):

S = Σ_{i=1..n} |Y_i - f(x_i)|

and the L2 loss function minimizes the sum S' of squared differences between the target value Y_i' and the estimated value f(x_i)':

S' = Σ_{i=1..n} (Y_i' - f(x_i)')²
6. the real-time target detection and 3D positioning method based on single frame image as claimed in claim 1, wherein: the 3D-Box will be represented by the three-dimensional center point of the object, and the 8 vertex coordinate points of its 3D-Box bounding Box.
7. The real-time target detection and 3D positioning method based on single frame image as claimed in claim 1, wherein: in S5, adding a regularization term to the three-dimensional target neural network, specifically including the steps of:
s511: selecting a proper neural network model and loading the weight of the model;
s512: adding a proposed horizontal geometric locality maintenance regularization term into a loss function of a 3D-Box estimation module, and setting parameters in an R (W) function;
s513: updating the weights with the stochastic gradient descent (SGD) algorithm with momentum until the model converges;
s514: and loading the weight of the trained neural network into the neural network or an edge calculation end, and finishing the whole network deployment.
8. The real-time target detection and 3D positioning method based on single frame image as claimed in claim 7, wherein: in S5, the applying the horizontal geometric locality preserving regularization term in an embedded system specifically includes the following steps:
s5111: a single camera is used for shooting a single frame image;
s5112: transmitting the single-frame image to embedded equipment for analysis and calculation;
s5113: identifying and three-dimensionally positioning a plurality of targets in the image;
s5114: and finally, transmitting the identified and positioned image out.
9. The real-time target detection and 3D positioning method based on single frame image as claimed in claim 8, wherein: the embedded system is a Jetson AGX Xavier.
10. The real-time target detection and 3D positioning method based on single frame image as claimed in claim 1, wherein: the instance-level depth information is the depth z_g of the three-dimensional box center point predicted by the instance-level depth prediction module, obtained as follows: after the feature map is divided into grids, the depth prediction module only predicts the target depth for grid cells whose distance to the instance is less than a distance threshold σ_scope.
CN202010500784.XA 2020-06-04 2020-06-04 Real-time target detection and 3D positioning method based on single frame image Active CN111695480B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010500784.XA CN111695480B (en) 2020-06-04 2020-06-04 Real-time target detection and 3D positioning method based on single frame image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010500784.XA CN111695480B (en) 2020-06-04 2020-06-04 Real-time target detection and 3D positioning method based on single frame image

Publications (2)

Publication Number Publication Date
CN111695480A true CN111695480A (en) 2020-09-22
CN111695480B CN111695480B (en) 2023-04-28

Family

ID=72478972

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010500784.XA Active CN111695480B (en) 2020-06-04 2020-06-04 Real-time target detection and 3D positioning method based on single frame image

Country Status (1)

Country Link
CN (1) CN111695480B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112819880A (en) * 2021-01-07 2021-05-18 北京百度网讯科技有限公司 Three-dimensional object detection method, device, equipment and storage medium
CN114061761A (en) * 2021-11-17 2022-02-18 重庆大学 Remote target temperature accurate measurement method based on monocular infrared stereoscopic vision correction

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140270484A1 (en) * 2013-03-14 2014-09-18 Nec Laboratories America, Inc. Moving Object Localization in 3D Using a Single Camera
WO2015065558A1 (en) * 2013-10-30 2015-05-07 Nec Laboratories America, Inc. Monocular 3d localization for autonomous driving using adaptive ground plane estimation
CN108898628A (en) * 2018-06-21 2018-11-27 北京纵目安驰智能科技有限公司 Three-dimensional vehicle object's pose estimation method, system, terminal and storage medium based on monocular
US20190147245A1 (en) * 2017-11-14 2019-05-16 Nuro, Inc. Three-dimensional object detection for autonomous robotic systems using image proposals
CN110070025A (en) * 2019-04-17 2019-07-30 上海交通大学 Objective detection system and method based on monocular image
CN110689008A (en) * 2019-09-17 2020-01-14 大连理工大学 Monocular image-oriented three-dimensional object detection method based on three-dimensional reconstruction
CN111046767A (en) * 2019-12-04 2020-04-21 武汉大学 3D target detection method based on monocular image
CN111126269A (en) * 2019-12-24 2020-05-08 京东数字科技控股有限公司 Three-dimensional target detection method, device and storage medium

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140270484A1 (en) * 2013-03-14 2014-09-18 Nec Laboratories America, Inc. Moving Object Localization in 3D Using a Single Camera
WO2015065558A1 (en) * 2013-10-30 2015-05-07 Nec Laboratories America, Inc. Monocular 3d localization for autonomous driving using adaptive ground plane estimation
US20190147245A1 (en) * 2017-11-14 2019-05-16 Nuro, Inc. Three-dimensional object detection for autonomous robotic systems using image proposals
CN108898628A (en) * 2018-06-21 2018-11-27 北京纵目安驰智能科技有限公司 Three-dimensional vehicle object's pose estimation method, system, terminal and storage medium based on monocular
CN110070025A (en) * 2019-04-17 2019-07-30 上海交通大学 Objective detection system and method based on monocular image
CN110689008A (en) * 2019-09-17 2020-01-14 大连理工大学 Monocular image-oriented three-dimensional object detection method based on three-dimensional reconstruction
CN111046767A (en) * 2019-12-04 2020-04-21 武汉大学 3D target detection method based on monocular image
CN111126269A (en) * 2019-12-24 2020-05-08 京东数字科技控股有限公司 Three-dimensional target detection method, device and storage medium

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
ZENGYI QIN et al.: "MonoGRNet: A Geometric Reasoning Network for Monocular 3D Object Localization", arXiv *
ZHOU, XICHUAN et al.: "MoNet3D: Towards Accurate Monocular 3D Object Localization in Real Time", arXiv *
彭逸聪: "Research on deep learning methods for three-dimensional target detection and localization based on single-frame images" (基于单帧图像的三维目标检测与定位深度学习方法研究), China Master's Theses Full-text Database, Engineering Science and Technology II *
范旭明: "Depth image target recognition based on minimum generating set and pose estimation" (基于最小生成集和位姿估计的深度图像目标识别), Information & Communications (信息通信) *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112819880A (en) * 2021-01-07 2021-05-18 北京百度网讯科技有限公司 Three-dimensional object detection method, device, equipment and storage medium
CN114061761A (en) * 2021-11-17 2022-02-18 重庆大学 Remote target temperature accurate measurement method based on monocular infrared stereoscopic vision correction
CN114061761B (en) * 2021-11-17 2023-12-08 重庆大学 Remote target temperature accurate measurement method based on monocular infrared stereoscopic vision correction

Also Published As

Publication number Publication date
CN111695480B (en) 2023-04-28

Similar Documents

Publication Publication Date Title
Fan et al. Road surface 3D reconstruction based on dense subpixel disparity map estimation
CN110335337B (en) Method for generating visual odometer of antagonistic network based on end-to-end semi-supervision
CN110689008A (en) Monocular image-oriented three-dimensional object detection method based on three-dimensional reconstruction
CN109579825B (en) Robot positioning system and method based on binocular vision and convolutional neural network
US11948368B2 (en) Real-time target detection and 3d localization method based on single frame image
CN109934848A (en) A method of the moving object precise positioning based on deep learning
CN111998862B (en) BNN-based dense binocular SLAM method
CN111368755A (en) Vision-based pedestrian autonomous following method for quadruped robot
CN111695480A (en) Real-time target detection and 3D positioning method based on single-frame image
CN113468950A (en) Multi-target tracking method based on deep learning in unmanned driving scene
CN111860651B (en) Monocular vision-based semi-dense map construction method for mobile robot
CN113095154A (en) Three-dimensional target detection system and method based on millimeter wave radar and monocular camera
CN108217045A (en) A kind of intelligent robot for undercarriage on data center's physical equipment
US20110175998A1 (en) Motion calculation device and motion calculation method
CN115331029A (en) Heterogeneous image matching method based on cross-mode conversion network and optimal transmission theory
CN110992424B (en) Positioning method and system based on binocular vision
CN115880333A (en) Three-dimensional single-target tracking method based on multi-mode information fusion
CN116402876A (en) Binocular depth estimation method, binocular depth estimation device, embedded equipment and readable storage medium
Yao et al. DepthSSC: Depth-Spatial Alignment and Dynamic Voxel Resolution for Monocular 3D Semantic Scene Completion
CN113888629A (en) RGBD camera-based rapid object three-dimensional pose estimation method
CN113536959A (en) Dynamic obstacle detection method based on stereoscopic vision
CN113420590A (en) Robot positioning method, device, equipment and medium in weak texture environment
CN103927782A (en) Method for depth image surface fitting
Huang et al. Deep image registration with depth-aware homography estimation
Wang et al. Guiding local feature matching with surface curvature

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant