CN112036389A - Vehicle three-dimensional information detection method, device and equipment and readable storage medium

Info

Publication number
CN112036389A
CN112036389A (application CN202011239442.3A)
Authority
CN
China
Prior art keywords
vehicle
image
image coordinates
visible
side edge
Prior art date
Legal status
Granted
Application number
CN202011239442.3A
Other languages
Chinese (zh)
Other versions
CN112036389B (en)
Inventor
彭欣亮
王曦
程士庆
徐振南
刘孟绅
Current Assignee
Tianjin Tiantong Weishi Electronic Technology Co ltd
Original Assignee
Tianjin Tiantong Weishi Electronic Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Tianjin Tiantong Weishi Electronic Technology Co ltd
Priority to CN202011239442.3A
Publication of CN112036389A
Application granted
Publication of CN112036389B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00 Machine learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/60 Type of objects
    • G06V20/64 Three-dimensional objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/08 Detecting or categorising vehicles

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Multimedia (AREA)
  • Software Systems (AREA)
  • Medical Informatics (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a method, a device, equipment, and a readable storage medium for detecting three-dimensional information of a vehicle. The method comprises: taking an actual scene image captured by a monocular camera as input to a pre-trained deep learning model to obtain its output, which comprises four binary classification values, each indicating whether one of the four vehicle lateral edges is visible, and the image coordinates of the intersection points of the four vehicle lateral edges with the road surface; determining the vehicle orientation according to the four binary classification values; and, when the determined vehicle orientation is a composite orientation, calculating the vehicle heading angle from the image coordinates of the intersection points of the visible vehicle lateral edges with the road surface, the actual length and width of the vehicle, and the internal and external reference matrices of the monocular camera. Compared with labeling the vehicle heading angle directly, this reduces the difficulty of annotating training images; and because the three-dimensional vehicle information is detected from a single frame of a monocular camera, cost is reduced and detection stability is improved.

Description

Vehicle three-dimensional information detection method, device and equipment and readable storage medium
Technical Field
The invention relates to the technical field of automatic driving, and in particular to a vehicle three-dimensional information detection method, device, and equipment, and a readable storage medium.
Background
For autonomous vehicles, understanding the surrounding traffic environment is essential. While traveling, an autonomous vehicle must not only detect surrounding vehicles but also obtain three-dimensional information about them, such as position and orientation. At present, three-dimensional vehicle information is typically detected with a binocular camera or a depth camera, which is costly. Alternatively, a vehicle three-dimensional information detection model can be trained on labeled lidar data and then applied, in an actual scene, to lidar scans of surrounding vehicles; lidar, however, suffers from low stability.
Disclosure of Invention
In view of the above, the present invention provides a method, an apparatus, a device, and a readable storage medium for detecting three-dimensional vehicle information based on a single frame image from a monocular camera, so as to reduce cost and improve detection stability.
In order to achieve the above object, the following solutions are proposed:
in a first aspect, a method for detecting three-dimensional information of a vehicle is provided, which includes:
acquiring an actual scene image acquired by a monocular camera;
taking the actual scene image as input data of a pre-trained deep learning model to obtain output information of the deep learning model, wherein the output information comprises four binary classification values, each indicating whether one of the four vehicle lateral edges is visible, and the image coordinates of the intersection points of the four vehicle lateral edges with the road surface;
determining the vehicle orientation according to the four binary classification values;
and when the determined vehicle orientation is a composite orientation, calculating to obtain a vehicle heading angle according to the image coordinates of the intersection point of the visible vehicle side edge and the road surface, the actual length and width of the vehicle and the internal reference matrix and the external reference matrix of the monocular camera.
Optionally, the training process of the deep learning model includes:
generating, according to the vehicle orientation labeled by the user in the training image, four binary classification values that each indicate whether one of the four vehicle lateral edges is visible;
setting the image coordinates of the intersection points of the invisible vehicle lateral edges with the road surface to zero;
obtaining the image coordinates of the intersection points of the visible vehicle lateral edges with the road surface from the rectangular frame and the side edge labeled on the training image, wherein the sides of the rectangular frame are parallel to the sides of the training image, and the side edge is tangent to the two wheels on the visible side of the vehicle at their ground contact points;
and taking the training image, the corresponding four binary classification values, and the image coordinates of the intersection points of the four vehicle lateral edges with the road surface as one training sample to train the deep learning model.
Optionally, the obtaining of the image coordinates of the intersection point of the visible vehicle side edge and the road surface according to the rectangular frame and the side edge marked on the training image specifically includes:
calculating the linear equations of the two vertical sides of the rectangular frame in the image coordinate system;
determining the linear equation of the side edge from the image coordinates of its two end points;
when two vehicle lateral edges are visible, determining the image coordinates of the intersection points of the two visible vehicle lateral edges with the road surface as, respectively, the image coordinates of the intersection of the left vertical side of the rectangular frame with the side edge and the image coordinates of the intersection of the right vertical side of the rectangular frame with the side edge;
and when three vehicle lateral edges are visible, determining the image coordinates of the intersection points with the road surface of the two visible vehicle lateral edges that are coplanar with the side edge as the image coordinates of their respective intersections with the side edge, and determining the image coordinates of the intersection point with the road surface of the visible vehicle lateral edge that is not coplanar with the side edge, according to the vehicle orientation labeled by the user in the training image, as either the image coordinates of the lower left vertex or the image coordinates of the lower right vertex of the rectangular frame.
Optionally, when two vehicle lateral edges are visible, if the training image does not include a side edge labeled by the user, the image coordinates of the intersection points of the two visible vehicle lateral edges with the road surface are determined to be, respectively, the image coordinates of the lower left vertex and the image coordinates of the lower right vertex of the rectangular frame.
In a second aspect, there is provided a vehicle three-dimensional information detection apparatus including:
the image acquisition unit is used for acquiring an actual scene image acquired by the monocular camera;
the image analysis unit is used for taking the actual scene image as input data of a pre-trained deep learning model to obtain output information of the deep learning model, wherein the output information comprises four binary classification values, each indicating whether one of the four vehicle lateral edges is visible, and the image coordinates of the intersection points of the four vehicle lateral edges with the road surface;
a vehicle orientation determining unit for determining the vehicle orientation according to the four binary classification values;
and the vehicle heading angle calculation unit is used for calculating to obtain a vehicle heading angle according to the image coordinates of the intersection point of the visible vehicle side edge and the road surface, the actual length and width of the vehicle and the internal reference matrix and the external reference matrix of the monocular camera when the determined vehicle heading is a composite heading.
Optionally, the vehicle three-dimensional information detection device further includes a training unit, where the training unit specifically includes:
the binary classification value subunit is used for generating, according to the vehicle orientation labeled by the user in the training image, four binary classification values that each indicate whether one of the four vehicle lateral edges is visible;
the first image coordinate determination subunit is used for setting the image coordinates of the intersection points of the invisible vehicle lateral edges with the road surface to zero;
the second image coordinate determination subunit is used for obtaining the image coordinates of the intersection points of the visible vehicle lateral edges with the road surface from the rectangular frame and the side edge labeled on the training image, wherein the sides of the rectangular frame are parallel to the sides of the training image, and the side edge is tangent to the two wheels on the visible side of the vehicle at their ground contact points;
and the training subunit is used for taking the training image, the corresponding four binary classification values, and the image coordinates of the intersection points of the four vehicle lateral edges with the road surface as one training sample to train the deep learning model.
Optionally, the second image coordinate determination subunit is specifically configured to:
calculating the linear equations of the two vertical sides of the rectangular frame in the image coordinate system;
determining the linear equation of the side edge from the image coordinates of its two end points;
when two vehicle lateral edges are visible, determining the image coordinates of the intersection points of the two visible vehicle lateral edges with the road surface as, respectively, the image coordinates of the intersection of the left vertical side of the rectangular frame with the side edge and the image coordinates of the intersection of the right vertical side of the rectangular frame with the side edge;
and when three vehicle lateral edges are visible, determining the image coordinates of the intersection points with the road surface of the two visible vehicle lateral edges that are coplanar with the side edge as the image coordinates of their respective intersections with the side edge, and determining the image coordinates of the intersection point with the road surface of the visible vehicle lateral edge that is not coplanar with the side edge, according to the vehicle orientation labeled by the user in the training image, as either the image coordinates of the lower left vertex or the image coordinates of the lower right vertex of the rectangular frame.
Optionally, the training unit further includes:
and the third image coordinate determination subunit is configured to determine, when two vehicle lateral edges are visible and the training image does not include a side edge labeled by the user, that the image coordinates of the intersection points of the two visible vehicle lateral edges with the road surface are, respectively, the image coordinates of the lower left vertex and the image coordinates of the lower right vertex of the rectangular frame.
In a third aspect, there is provided a readable storage medium having stored thereon a program that, when executed by a processor, implements the steps of any one of the vehicle three-dimensional information detection methods as in the first aspect.
In a fourth aspect, there is provided a vehicle three-dimensional information detecting apparatus including: a memory and a processor;
the memory is used for storing programs;
the processor is configured to execute the program to implement the steps of any one of the vehicle three-dimensional information detection methods according to the first aspect.
Compared with the prior art, the technical scheme of the invention has the following advantages:
the technical scheme provides a vehicle three-dimensional information detection method, a device, equipment and a readable storage medium, and the method comprises the following steps: the method comprises the steps that an actual scene image collected by a monocular camera is used as input of a pre-trained deep learning model, output information of the deep learning model is obtained, and the output information comprises four secondary classification values which respectively represent whether four vehicle lateral edges are visible or not and image coordinates of intersection points of the four vehicle lateral edges and a road surface; determining the orientation of the vehicle according to the four second classification values; and when the determined vehicle orientation is a composite orientation, calculating to obtain a vehicle heading angle according to the image coordinates of the intersection point of the visible vehicle side edge and the road surface, the actual length and width of the vehicle and the internal reference matrix and the external reference matrix of the monocular camera. The output information of the deep learning model is four binary classification values and image coordinate values, compared with the method for marking the vehicle course angle, the marking difficulty of the training image is reduced, the three-dimensional information of the vehicle is detected by a single-frame image based on the monocular camera, and therefore the cost is reduced and the detection stability is improved.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly introduced below. The drawings described below are merely embodiments of the present invention; those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a flowchart of a method for detecting three-dimensional information of a vehicle according to an embodiment of the present invention;
FIG. 2 is a schematic view of a vehicle oriented to the left according to an embodiment of the present invention;
FIG. 3 is a schematic view of a vehicle according to an embodiment of the present invention oriented to the front left;
FIG. 4 is a flowchart of a deep learning model training method according to an embodiment of the present invention;
fig. 5 is a schematic diagram of a vehicle three-dimensional information detection device according to an embodiment of the invention;
fig. 6 is a schematic diagram of a vehicle three-dimensional information detection device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, the vehicle three-dimensional information detection method provided in this embodiment may be applied to a controller with data processing capability, such as an automatic driving controller or another on-board computer; in some cases, it can also be applied to a network-side server. The method may comprise the following steps:
s11: and acquiring an actual scene image acquired by the monocular camera.
During automatic driving, the corresponding device acquires, in real time, the actual scene images captured by the monocular camera.
S12: and taking the actual scene image as input data of a pre-trained deep learning model to obtain output information of the deep learning model, wherein the output information comprises four binary classification values respectively representing whether the four vehicle lateral edges are visible or not and image coordinates of intersection points of the four vehicle lateral edges and the road surface.
The six faces of the vehicle are the left side, the right side, the front, the rear, the bottom, and the top. The intersection line of the left side and the front is the left front lateral edge; the intersection line of the left side and the rear is the left rear lateral edge; the intersection line of the right side and the front is the right front lateral edge; and the intersection line of the right side and the rear is the right rear lateral edge. The four vehicle lateral edges are thus the left front, left rear, right front, and right rear lateral edges, and the four binary classification values indicate whether the left front, left rear, right front, and right rear lateral edges, respectively, are visible in the image.
S13: the vehicle heading is determined based on four binary classification values indicating whether four vehicle lateral edges are visible.
Presetting the corresponding relation of the vehicle orientations corresponding to four different classification values; when step S13 is executed, the vehicle orientations corresponding to the four binary values are matched according to the preset correspondence. The vehicle orientation is referred to as a single orientation when it is front, rear, left, and right, and the vehicle orientation is referred to as a compound orientation when it is left rear, right front, left front, and right rear. The correspondence relationship between the vehicle orientations corresponding to the four binary values is specifically shown in the following table, where 1 represents visible and 0 represents invisible:
[Table: correspondence between the four binary classification values (visibility of the left front, left rear, right front, and right rear lateral edges) and the vehicle orientation; rendered as an image in the original publication]
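The table itself is only available as an image in the source; as an illustrative sketch, the lookup can be expressed as a small table in code. Only the "left" and "left front" rows below are confirmed by the examples later in this description; the remaining entries follow the same visibility geometry and should be treated as assumptions.

```python
# Hypothetical sketch of the orientation lookup. Each key holds the four
# binary classification values (left front, left rear, right front, right
# rear lateral edge visible). Only the "left" and "left front" rows are
# confirmed by examples in this description; the rest are assumptions
# based on the same visibility geometry.
ORIENTATION_TABLE = {
    (1, 0, 1, 0): "front",
    (0, 1, 0, 1): "rear",
    (1, 1, 0, 0): "left",         # confirmed: left side visible
    (0, 0, 1, 1): "right",
    (1, 1, 1, 0): "left front",   # confirmed: left side and front visible
    (1, 1, 0, 1): "left rear",
    (1, 0, 1, 1): "right front",
    (0, 1, 1, 1): "right rear",
}

def vehicle_orientation(flags):
    """Map the four binary classification values to a vehicle orientation."""
    return ORIENTATION_TABLE.get(tuple(int(f) for f in flags))
```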
s14: and when the determined vehicle orientation is a composite orientation, calculating to obtain a vehicle heading angle according to the image coordinates of the intersection point of the visible vehicle side edge and the road surface, the actual length and width of the vehicle and the internal reference matrix and the external reference matrix of the monocular camera.
The external reference (extrinsic) matrix R represents the rotation of the three axes of the monocular camera coordinate system relative to the three axes of the world coordinate system; the internal reference (intrinsic) matrix K represents the mapping from a three-dimensional object in the monocular camera coordinate system to its image, and includes parameters such as the camera's focal length, optical center, and resolution. Multiplying the internal reference matrix K by the external reference matrix R yields a 3 x 3 camera matrix, i.e., A = K x R, where A is the 3 x 3 camera matrix.
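As a minimal sketch (the calibration values below are placeholders, not taken from the patent), assembling the camera matrix looks like this:

```python
import numpy as np

# Placeholder calibration values; in practice fx, fy, cx, cy and the
# rotation R come from the monocular camera's calibration.
fx, fy, cx, cy = 1000.0, 1000.0, 640.0, 360.0
K = np.array([[fx, 0.0, cx],
              [0.0, fy, cy],
              [0.0, 0.0, 1.0]])  # internal reference (intrinsic) matrix
R = np.eye(3)                    # external reference (extrinsic) rotation
A = K @ R                        # 3 x 3 camera matrix, A = K x R
```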
The actual length and width of the vehicle are obtained from a preset correspondence between vehicle types and actual vehicle dimensions. During automatic driving, a pre-trained vehicle type recognition model identifies the vehicle type from the actual scene image captured by the monocular camera; the actual length and width corresponding to the recognized type are then looked up in the preset correspondence.
The vehicle heading angle specifically refers to the angle between the forward direction of the vehicle to which the method is applied and the forward direction of the vehicle identified in the actual scene image. When the determined vehicle orientation is a single orientation, the heading angle follows directly: if the vehicle orientation is front, the heading angle is 0 degrees; if rear, 180 degrees; if left, 270 degrees; and if right, 90 degrees. When the determined vehicle orientation is a composite orientation, the following vehicle heading angle formula is used:
[Formula: vehicle heading angle calculation; the equation is rendered as an image in the original publication]
where: the left-hand side of the formula is the vehicle heading angle; S takes the value 1 when the vehicle orientation is left front or right rear, and -1 when the vehicle orientation is left rear or right front; v_w is the actual width of the vehicle; v_l is the actual length of the vehicle; x_m is the abscissa of the vehicle lateral edge closest to the monocular camera (for example, when the vehicle is oriented left front, x_m is the abscissa of the left front lateral edge); x_l is the abscissa of the other lateral edge on the visible side face (for example, when the vehicle is oriented left front, the visible faces include the left side and the front, and x_l is the abscissa of the left rear lateral edge on the left side face); x_w is the abscissa of the other lateral edge on the visible front or rear face (for example, when the vehicle is oriented left front, x_w is the abscissa of the right front lateral edge on the front face); and A_ij is the element in row i, column j of the matrix A.
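For the single orientations, the mapping stated above is direct; a minimal sketch follows (the composite case would substitute the quantities defined above into the patent's formula, which is only available as an image and is therefore not reproduced here):

```python
# Heading angles in degrees for the four single orientations, as stated above.
SINGLE_ORIENTATION_HEADING = {
    "front": 0.0,
    "right": 90.0,
    "rear": 180.0,
    "left": 270.0,
}

def heading_angle_single(orientation):
    """Heading angle for a single orientation. Composite orientations need
    the formula above, rendered as an image in the original publication."""
    return SINGLE_ORIENTATION_HEADING[orientation]
```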
In some embodiments, the deep learning model specifically uses a two-stage detection network framework built on the open-source detection library MMDetection, with an Xception network as the backbone. MMDetection is a PyTorch-based library that greatly reduces boilerplate code; for example, the underlying implementations of the convolutional layers, batch normalization layers, pooling layers, and so on used in the deep learning model can be called directly from MMDetection.
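As an illustrative sketch only (the feature size and head design below are assumptions; the patent specifies only a two-stage MMDetection framework with an Xception backbone), the model's output head could expose the four binary classification values and the eight intersection coordinates as follows:

```python
import torch
import torch.nn as nn

class VehicleEdgeHead(nn.Module):
    """Hypothetical output head: 4 edge-visibility scores plus 4 (x, y)
    road-intersection coordinates per detected vehicle. The in_features
    value and single-linear-layer design are illustrative assumptions."""

    def __init__(self, in_features=256):
        super().__init__()
        self.visibility = nn.Linear(in_features, 4)     # four binary classification values
        self.intersections = nn.Linear(in_features, 8)  # 4 intersection points, (x, y) each

    def forward(self, feats):
        vis = torch.sigmoid(self.visibility(feats))    # per-edge visibility in [0, 1]
        xy = self.intersections(feats).view(-1, 4, 2)  # (batch, 4 edges, 2 coords)
        return vis, xy
```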
In some embodiments, the user labels the rectangular frame and the side edge of the vehicle directly in the training image. The sides of the rectangular frame are parallel to the sides of the training image. The side edge is the bottom edge of the visible side face: it is tangent to the two wheels on that side at their ground contact points. The rectangular frame is a two-dimensional bounding box that tightly encloses the vehicle. Referring to fig. 2 and 3, H denotes the rectangular frame and L denotes the side edge. The user also labels the vehicle orientation in the training image, which describes the faces of the vehicle visible in it; the orientation is one of front, rear, left, right, left front, left rear, right front, and right rear. For example, the vehicle orientation is left in fig. 2 and left front in fig. 3.
Referring to fig. 4, the training process of the deep learning model includes the following steps:
s41: and generating four binary classification values respectively representing whether the four vehicle side edges are visible or not according to the vehicle direction marked by the user in the training image.
Step S41 is executed to determine four binary classification values, each indicating whether four vehicle lateral edges are visible, corresponding to the vehicle direction marked by the user, based on the correspondence between the preset vehicle direction and the four binary classification values.
S42: and determining that the image coordinates of the intersection points of the invisible vehicle side edges and the road surface are all zero.
For example, if neither the front right edge nor the rear right edge is visible in fig. 2, both the abscissa and ordinate on the image that determine the intersection of the front right edge and the road surface are zero. The abscissa value of a certain point on the image is the number of pixels which are laterally away from the original point (0, 0) of the image, and the right direction is positive; the ordinate value of a certain point on the image is the number of pixels from the origin (0, 0) of the image in the longitudinal direction, and the lower direction is positive.
S43: and obtaining the image coordinates of the intersection point of the visible vehicle side edge and the road surface according to the rectangular frame and the side edge marked on the training image.
The image coordinates of the intersection points of the visible vehicle lateral edges with the road surface are computed from the rectangular frame and the side edge as follows.
First, the linear equations of the two vertical sides of the rectangular frame in the image coordinate system are computed.
The left and right vertical sides of the rectangular frame H in fig. 2 and 3 are these two vertical lines. Since every point on a vertical line has the same abscissa, the line's equation in the image coordinate system follows from the abscissa of any one of its points.
Second, the linear equation of the side edge is determined from the image coordinates of its two end points.
Let the image coordinates of the two end points of the side edge be (x1, y1) and (x2, y2). From these two points, the straight line through the side edge in the image coordinate system is y = k1*x + b1, where k1 = (y2 - y1)/(x2 - x1) and b1 = (y1*x2 - x1*y2)/(x2 - x1).
Third, when two vehicle lateral edges are visible, the image coordinates of their intersection points with the road surface are determined as, respectively, the image coordinates of the intersection of the left vertical side of the rectangular frame with the side edge and the image coordinates of the intersection of the right vertical side with the side edge.
Specifically, the intersection of the left vertical side with the side edge is computed from the linear equation of the left vertical side and the linear equation of the side edge in the image coordinate system, and the intersection of the right vertical side with the side edge is computed likewise. For example, if the vehicle orientation is left, the visible vehicle lateral edges are the left front and left rear lateral edges; if the straight line of the left front lateral edge in the image coordinate system is x = x_min and the equation of the side edge is y = k1*x + b1, then the image coordinates of the intersection of the left vertical side of the rectangular frame with the side edge are (x_min, k1*x_min + b1).
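A minimal sketch of these two computations (the helper names and example coordinates are ours, not the patent's):

```python
def side_edge_line(p1, p2):
    """Slope k1 and intercept b1 of the labeled side edge through p1, p2."""
    (x1, y1), (x2, y2) = p1, p2
    k1 = (y2 - y1) / (x2 - x1)
    b1 = (y1 * x2 - x1 * y2) / (x2 - x1)
    return k1, b1

def intersect_vertical(x_v, k1, b1):
    """Intersection of the vertical line x = x_v with y = k1*x + b1."""
    return (x_v, k1 * x_v + b1)

# Example: vehicle oriented left, rectangular frame spanning x = 100..300.
k1, b1 = side_edge_line((100.0, 400.0), (300.0, 380.0))
p_left = intersect_vertical(100.0, k1, b1)   # intersection with left vertical side
p_right = intersect_vertical(300.0, k1, b1)  # intersection with right vertical side
```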
Finally, when three vehicle lateral edges are visible, the image coordinates of the intersection points with the road surface of the two visible lateral edges that are coplanar with the side edge are determined as the image coordinates of their respective intersections with the side edge, and the image coordinates of the intersection point with the road surface of the visible lateral edge that is not coplanar with the side edge are determined, according to the vehicle orientation labeled by the user in the training image, as either the image coordinates of the lower left vertex or the image coordinates of the lower right vertex of the rectangular frame.
For example, when the visible vehicle lateral edges are the left front, left rear, and right front lateral edges, the left front and left rear lateral edges are the two visible edges coplanar with the side edge, and the right front lateral edge is the visible edge not coplanar with it; the image coordinates of the intersection of the right front lateral edge with the road surface are then the image coordinates of the lower left vertex of the rectangular frame.
S44: and taking the training image, the corresponding four secondary classification values and the image coordinates of the intersection points of the four vehicle side edges and the road surface as a training sample to train the deep learning model.
In some embodiments, the optimizer trains the deep learning model using a SGD (stochastic gradient descent algorithm).
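For instance (the learning rate and momentum below are placeholders; the patent does not specify hyperparameters):

```python
import torch
import torch.nn as nn

model = nn.Linear(8, 12)  # stand-in for the actual detection network
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
```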
In some embodiments, if the vehicle orientation is a single orientation and the side edge coincides with an edge of the rectangular frame, the user may omit labeling the side edge. During subsequent training of the deep learning model, when two vehicle lateral edges are visible and the training image contains no user-labeled side edge, the image coordinates of the intersection points of the two visible lateral edges with the road surface are determined to be, respectively, the image coordinates of the lower left vertex and the image coordinates of the lower right vertex of the rectangular frame.
While, for purposes of simplicity of explanation, the foregoing method embodiments have been described as a series of acts or combination of acts, it will be appreciated by those skilled in the art that the present invention is not limited by the illustrated ordering of acts, as some steps may occur in other orders or concurrently with other steps in accordance with the invention.
The following are embodiments of the apparatus of the present invention that may be used to perform embodiments of the method of the present invention. For details which are not disclosed in the embodiments of the apparatus of the present invention, reference is made to the embodiments of the method of the present invention.
Referring to fig. 5, the vehicle three-dimensional information detection apparatus provided for the present embodiment includes: an image acquisition unit 51, an image analysis unit 52, a vehicle orientation determination unit 53, and a vehicle heading angle calculation unit 54.
And an image acquiring unit 51 for acquiring an actual scene image captured by the monocular camera.
And the image analysis unit 52 is configured to use the actual scene image as input data of a pre-trained deep learning model to obtain output information of the deep learning model, where the output information includes four binary classification values respectively indicating whether four vehicle lateral edges are visible or not and image coordinates of intersection points of the four vehicle lateral edges and the road surface.
A vehicle orientation determining unit 53, configured to determine the vehicle orientation based on the four binary classification values.
And the vehicle heading angle calculation unit 54 is used for calculating a vehicle heading angle according to the image coordinates of the intersection point of the visible vehicle side edge and the road surface, the actual length and width of the vehicle, and the internal reference matrix and the external reference matrix of the monocular camera when the determined vehicle heading is a composite heading.
In some specific embodiments, the vehicle three-dimensional information detection device further includes a training unit, which specifically comprises: a binary classification value subunit, a first image coordinate determination subunit, a second image coordinate determination subunit, and a training subunit.
The binary classification value subunit is configured to generate, according to the vehicle orientation labeled by the user in the training image, four binary classification values that each indicate whether one of the four vehicle lateral edges is visible.
The first image coordinate determination subunit is configured to set the image coordinates of the intersection points of the invisible vehicle lateral edges with the road surface to zero.
The second image coordinate determination subunit is configured to obtain the image coordinates of the intersection points of the visible vehicle lateral edges with the road surface from the rectangular frame and the side edge labeled on the training image, wherein the sides of the rectangular frame are parallel to the sides of the training image, and the side edge is tangent to the two wheels on the visible side of the vehicle at their ground contact points.
The training subunit is configured to take the training image, the corresponding four binary classification values, and the image coordinates of the intersection points of the four vehicle lateral edges with the road surface as one training sample to train the deep learning model.
In some embodiments, the second image coordinate determination subunit is specifically configured to:
calculate the linear equations of the two vertical sides of the rectangular frame in the image coordinate system;
determine the linear equation of the side edge from the image coordinates of its two end points;
when two vehicle lateral edges are visible, determine the image coordinates of the intersection points of the two visible vehicle lateral edges with the road surface as, respectively, the image coordinates of the intersection of the left vertical side of the rectangular frame with the side edge and the image coordinates of the intersection of the right vertical side of the rectangular frame with the side edge;
and when three vehicle lateral edges are visible, determine the image coordinates of the intersection points with the road surface of the two visible vehicle lateral edges that are coplanar with the side edge as the image coordinates of their respective intersections with the side edge, and determine the image coordinates of the intersection point with the road surface of the visible vehicle lateral edge that is not coplanar with the side edge, according to the vehicle orientation labeled by the user in the training image, as either the image coordinates of the lower left vertex or the image coordinates of the lower right vertex of the rectangular frame.
In some embodiments, the training unit further comprises a third image coordinate determination subunit, configured to determine, when two vehicle lateral edges are visible and the training image does not include a side edge labeled by the user, that the image coordinates of the intersection points of the two visible vehicle lateral edges with the road surface are, respectively, the image coordinates of the lower left vertex and the image coordinates of the lower right vertex of the rectangular frame.
This embodiment provides a vehicle three-dimensional information detection device, which may specifically be an automatic driving controller or another controller with data processing capability, such as an on-board computer; in some cases it may also be a network-side server, which can be one or more of a rack server, a blade server, a tower server, or a cabinet server. Referring to fig. 6, the hardware structure of the vehicle three-dimensional information detection device provided in this embodiment may include: at least one processor 61, at least one communication interface 62, at least one memory 63, and at least one communication bus 64; the processor 61, the communication interface 62, and the memory 63 communicate with one another through the communication bus 64.
In some embodiments, the processor 61 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement embodiments of the present invention.
The communication interface 62 may include a standard wired interface or a wireless interface (e.g., a Wi-Fi interface), and is generally used to establish a communication connection between the vehicle three-dimensional information detection device and other electronic devices or systems.
The memory 63 includes at least one type of readable storage medium, which may be a non-volatile memory (NVM) such as flash memory, a hard disk, a multimedia card, or a card-type memory, or a high-speed RAM (random access memory). In some embodiments, the readable storage medium may be an internal storage unit of the vehicle three-dimensional information detection device; in other embodiments, it may be an external storage device of that device.
Wherein the memory 63 stores a computer program, and the processor 61 may call the computer program stored in the memory 63, the computer program being configured to:
acquiring an actual scene image acquired by a monocular camera;
taking the actual scene image as input data of a pre-trained deep learning model to obtain output information of the deep learning model, wherein the output information comprises four binary classification values, each indicating whether one of the four vehicle lateral edges is visible, and the image coordinates of the intersection points of the four vehicle lateral edges with the road surface;
determining the vehicle orientation according to the four binary classification values;
and when the determined vehicle orientation is a composite orientation, calculating to obtain a vehicle heading angle according to the image coordinates of the intersection point of the visible vehicle side edge and the road surface, the actual length and width of the vehicle and the internal reference matrix and the external reference matrix of the monocular camera.
The detailed function and the extended function of the program can be referred to the above description.
FIG. 6 only shows a vehicle three-dimensional information detection device having components 61-64, but it should be understood that not all of the shown components are required and that more or fewer components may alternatively be implemented.
An embodiment of the present invention further provides a readable storage medium, where the readable storage medium may store a program adapted to be executed by a processor, where the program is configured to:
acquiring an actual scene image acquired by a monocular camera;
taking the actual scene image as input data of a pre-trained deep learning model to obtain output information of the deep learning model, wherein the output information comprises four binary classification values, each indicating whether one of the four vehicle lateral edges is visible, and the image coordinates of the intersection points of the four vehicle lateral edges with the road surface;
determining the vehicle orientation according to the four binary classification values;
and when the determined vehicle orientation is a composite orientation, calculating to obtain a vehicle heading angle according to the image coordinates of the intersection point of the visible vehicle side edge and the road surface, the actual length and width of the vehicle and the internal reference matrix and the external reference matrix of the monocular camera.
For details of the program's functions and extensions, refer to the description above.
The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
In this document, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The embodiments in the present description are mainly described as different from other embodiments, the same and similar parts in the embodiments may be referred to each other, and the features described in the embodiments in the present description may be replaced with each other or combined with each other.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A vehicle three-dimensional information detection method is characterized by comprising the following steps:
acquiring an actual scene image acquired by a monocular camera;
taking the actual scene image as input data of a pre-trained deep learning model to obtain output information of the deep learning model, wherein the output information comprises four binary classification values, each indicating whether one of the four vehicle lateral edges is visible, and the image coordinates of the intersection points of the four vehicle lateral edges with the road surface;
determining the vehicle orientation according to the four binary classification values;
and when the determined vehicle orientation is a composite orientation, calculating to obtain a vehicle heading angle according to the image coordinates of the intersection point of the visible vehicle side edge and the road surface, the actual length and width of the vehicle and the internal reference matrix and the external reference matrix of the monocular camera.
2. The vehicle three-dimensional information detection method according to claim 1, wherein the training process of the deep learning model comprises:
generating, according to the vehicle orientation labeled by the user in the training image, four binary classification values that each indicate whether one of the four vehicle lateral edges is visible;
setting the image coordinates of the intersection points of the invisible vehicle lateral edges with the road surface to zero;
obtaining the image coordinates of the intersection points of the visible vehicle lateral edges with the road surface from the rectangular frame and the side edge labeled on the training image, wherein the sides of the rectangular frame are parallel to the sides of the training image, and the side edge is tangent to the two wheels on the visible side of the vehicle at their ground contact points;
and taking the training image, the corresponding four binary classification values, and the image coordinates of the intersection points of the four vehicle lateral edges with the road surface as one training sample to train the deep learning model.
3. The vehicle three-dimensional information detection method according to claim 2, wherein the obtaining of the image coordinates of the intersection point of the visible vehicle side edge and the road surface according to the rectangular frame and the side edge marked on the training image specifically comprises:
calculating the linear equations of the two vertical sides of the rectangular frame in the image coordinate system;
determining the linear equation of the side edge from the image coordinates of its two end points;
when two vehicle lateral edges are visible, determining the image coordinates of the intersection points of the two visible vehicle lateral edges with the road surface as, respectively, the image coordinates of the intersection of the left vertical side of the rectangular frame with the side edge and the image coordinates of the intersection of the right vertical side of the rectangular frame with the side edge;
and when three vehicle lateral edges are visible, determining the image coordinates of the intersection points with the road surface of the two visible vehicle lateral edges that are coplanar with the side edge as the image coordinates of their respective intersections with the side edge, and determining the image coordinates of the intersection point with the road surface of the visible vehicle lateral edge that is not coplanar with the side edge, according to the vehicle orientation labeled by the user in the training image, as either the image coordinates of the lower left vertex or the image coordinates of the lower right vertex of the rectangular frame.
4. The vehicle three-dimensional information detection method according to claim 2, wherein, when two vehicle lateral edges are visible, if the training image does not include a side edge labeled by the user, the image coordinates of the intersection points of the two visible vehicle lateral edges with the road surface are determined to be, respectively, the image coordinates of the lower left vertex and the image coordinates of the lower right vertex of the rectangular frame.
5. A vehicle three-dimensional information detection device characterized by comprising:
the image acquisition unit is used for acquiring an actual scene image acquired by the monocular camera;
the image analysis unit is used for taking the actual scene image as input data of a pre-trained deep learning model to obtain output information of the deep learning model, wherein the output information comprises four binary classification values, each indicating whether one of the four vehicle lateral edges is visible, and the image coordinates of the intersection points of the four vehicle lateral edges with the road surface;
a vehicle orientation determining unit for determining the vehicle orientation according to the four binary classification values;
and the vehicle heading angle calculation unit is used for calculating to obtain a vehicle heading angle according to the image coordinates of the intersection point of the visible vehicle side edge and the road surface, the actual length and width of the vehicle and the internal reference matrix and the external reference matrix of the monocular camera when the determined vehicle heading is a composite heading.
6. The vehicle three-dimensional information detection device according to claim 5, further comprising a training unit, wherein the training unit specifically includes:
a binary classification value subunit, configured to generate, according to the vehicle orientation labeled by the user in the training image, four binary classification values that each indicate whether one of the four vehicle lateral edges is visible;
a first image coordinate determination subunit, configured to set the image coordinates of the intersection points of the invisible vehicle lateral edges with the road surface to zero;
a second image coordinate determination subunit, configured to obtain the image coordinates of the intersection points of the visible vehicle lateral edges with the road surface from the rectangular frame and the side edge labeled on the training image, wherein the sides of the rectangular frame are parallel to the sides of the training image, and the side edge is tangent to the two wheels on the visible side of the vehicle at their ground contact points;
and a training subunit, configured to take the training image, the corresponding four binary classification values, and the image coordinates of the intersection points of the four vehicle lateral edges with the road surface as one training sample to train the deep learning model.
7. The vehicle three-dimensional information detection apparatus according to claim 6, wherein the second image coordinate determination subunit is specifically configured to:
calculate the linear equations of the two vertical sides of the rectangular frame in the image coordinate system;
determine the linear equation of the side edge from the image coordinates of its two end points;
when two vehicle lateral edges are visible, determine the image coordinates of the intersection points of the two visible vehicle lateral edges with the road surface as, respectively, the image coordinates of the intersection of the left vertical side of the rectangular frame with the side edge and the image coordinates of the intersection of the right vertical side of the rectangular frame with the side edge;
and when three vehicle lateral edges are visible, determine the image coordinates of the intersection points with the road surface of the two visible vehicle lateral edges that are coplanar with the side edge as the image coordinates of their respective intersections with the side edge, and determine the image coordinates of the intersection point with the road surface of the visible vehicle lateral edge that is not coplanar with the side edge, according to the vehicle orientation labeled by the user in the training image, as either the image coordinates of the lower left vertex or the image coordinates of the lower right vertex of the rectangular frame.
8. The vehicle three-dimensional information detection device according to claim 6, characterized in that the training unit further includes:
and the third image coordinate determination subunit is configured to determine, when two vehicle lateral edges are visible and the training image does not include a side edge labeled by the user, that the image coordinates of the intersection points of the two visible vehicle lateral edges with the road surface are, respectively, the image coordinates of the lower left vertex and the image coordinates of the lower right vertex of the rectangular frame.
9. A readable storage medium on which a program is stored, the program realizing the steps of the vehicle three-dimensional information detection method according to any one of claims 1 to 4 when executed by a processor.
10. Vehicle three-dimensional information detection equipment, characterized by comprising: a memory and a processor;
the memory is configured to store a program;
and the processor is configured to execute the program to implement the steps of the vehicle three-dimensional information detection method according to any one of claims 1 to 4.
CN202011239442.3A 2020-11-09 2020-11-09 Vehicle three-dimensional information detection method, device and equipment and readable storage medium Active CN112036389B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011239442.3A CN112036389B (en) 2020-11-09 2020-11-09 Vehicle three-dimensional information detection method, device and equipment and readable storage medium

Publications (2)

Publication Number Publication Date
CN112036389A (en) 2020-12-04
CN112036389B (en) 2021-02-02

Family

ID=73572786

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011239442.3A Active CN112036389B (en) 2020-11-09 2020-11-09 Vehicle three-dimensional information detection method, device and equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN112036389B (en)

Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109145928A (en) * 2017-06-16 2019-01-04 杭州海康威视数字技术股份有限公司 Image-based vehicle head orientation recognition method and device
CN107944390A (en) * 2017-11-24 2018-04-20 西安科技大学 Video ranging and direction localization method for objects ahead of a moving motor vehicle
EP3495993A1 (en) * 2017-12-11 2019-06-12 Continental Automotive GmbH Road marking determining apparatus for automated driving
US10346969B1 (en) * 2018-01-02 2019-07-09 Amazon Technologies, Inc. Detecting surface flaws using computer vision
CN108759667A (en) * 2018-05-29 2018-11-06 福州大学 Distance measuring method for the vehicle ahead based on monocular vision and image segmentation with a vehicle-mounted camera
US20200192378A1 (en) * 2018-07-13 2020-06-18 Kache.AI System and method for automatically switching a vehicle to follow in a vehicle's autonomous driving mode
CN110084230A (en) * 2019-04-11 2019-08-02 北京百度网讯科技有限公司 Image-based vehicle body orientation detection method and device
CN110706271A (en) * 2019-09-30 2020-01-17 清华大学 Vehicle-mounted vision method for real-time estimation of lateral and longitudinal distances of multiple vehicle targets
CN110780358A (en) * 2019-10-23 2020-02-11 重庆长安汽车股份有限公司 Method, system, computer-readable storage medium and vehicle for autonomous driving weather environment recognition
CN111081033A (en) * 2019-11-21 2020-04-28 北京百度网讯科技有限公司 Method and device for determining orientation angle of vehicle
CN111310574A (en) * 2020-01-17 2020-06-19 清华大学 Vehicle-mounted visual real-time multi-target multi-task joint sensing method and device
CN111723704A (en) * 2020-06-09 2020-09-29 杭州古德微机器人有限公司 Raspberry Pi-based van door opening monitoring method

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
FLORIAN CHABOT ET AL.: "Deep MANTA: A Coarse-to-fine Many-Task Network for joint 2D and 3D vehicle analysis from monocular image", 《2017 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION》 *
WANKOU YANG ET AL.: "A multi-task Faster R-CNN method for 3D vehicle detection based on a single image", 《ELSEVIER》 *
XU XIAOJUAN ET AL.: "Acquisition of vehicle three-dimensional information based on monocular image sequences", 《电子设计工程》 (Electronic Design Engineering) *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112734831A (en) * 2021-01-04 2021-04-30 广州小鹏自动驾驶科技有限公司 Labeling method and device
CN112733697A (en) * 2021-01-04 2021-04-30 广州小鹏自动驾驶科技有限公司 Method and device for determining yaw angle of vehicle
CN112784705A (en) * 2021-01-04 2021-05-11 广州小鹏自动驾驶科技有限公司 Vehicle side edge determining method and device
CN112926378A (en) * 2021-01-04 2021-06-08 广州小鹏自动驾驶科技有限公司 Vehicle side edge determining method and device
CN112733697B (en) * 2021-01-04 2022-05-13 广州小鹏自动驾驶科技有限公司 Method and device for determining yaw angle of vehicle

Also Published As

Publication number Publication date
CN112036389B (en) 2021-02-02

Similar Documents

Publication Publication Date Title
CN112036389B (en) Vehicle three-dimensional information detection method, device and equipment and readable storage medium
CN109116374B (en) Method, device and equipment for determining distance of obstacle and storage medium
Moghadam et al. Fast vanishing-point detection in unstructured environments
CN108734162B (en) Method, system, equipment and storage medium for identifying target in commodity image
CN110363817B (en) Target pose estimation method, electronic device, and medium
US20140379257A1 (en) Method and device for detecting road region as well as method and device for detecting road line
EP2725520A2 (en) Method and apparatus for detecting road
US20130101170A1 (en) Method of image processing and device therefore
US9396553B2 (en) Vehicle dimension estimation from vehicle images
CN108734058B (en) Obstacle type identification method, device, equipment and storage medium
CN105313774B (en) Vehicle parking assistance device and its method of operation
CN108428248B (en) Vehicle window positioning method, system, equipment and storage medium
EP3968266B1 (en) Obstacle three-dimensional position acquisition method and apparatus for roadside computing device
CN110363179B (en) Map acquisition method, map acquisition device, electronic equipment and storage medium
CN107545223B (en) Image recognition method and electronic equipment
CN110852311A (en) Three-dimensional human hand key point positioning method and device
Guo et al. A parts-based method for articulated target recognition in laser radar data
US20200226392A1 (en) Computer vision-based thin object detection
CN112184799A (en) Lane line space coordinate determination method and device, storage medium and electronic equipment
WO2014188446A2 (en) Method and apparatus for image matching
CN114511865A (en) Method and device for generating structured information and computer readable storage medium
CN113592015A (en) Method and device for positioning and training feature matching network
CN112837404B (en) Method and device for constructing three-dimensional information of planar object
CN110705363B (en) Commodity specification identification method and device
CN112101139B (en) Human shape detection method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant