CN112926378B - Vehicle side edge determining method and device - Google Patents

Vehicle side edge determining method and device

Info

Publication number
CN112926378B
CN112926378B · Application CN202110005040.5A
Authority
CN
China
Prior art keywords
vehicle
output
edge
preset
value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110005040.5A
Other languages
Chinese (zh)
Other versions
CN112926378A (en)
Inventor
沈煜
刘兰个川
毛云翔
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Xiaopeng Motors Technology Co Ltd
Original Assignee
Guangzhou Xiaopeng Autopilot Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Xiaopeng Autopilot Technology Co Ltd
Priority: CN202110005040.5A
Publication of CN112926378A
Application granted
Publication of CN112926378B
Legal status: Active
Anticipated expiration

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/08Detecting or categorising vehicles


Abstract

The embodiment of the invention provides a method and a device for determining a vehicle side edge, wherein the method comprises the following steps: obtaining a vehicle image and a pre-trained prediction model; acquiring output values of a plurality of output items output by the prediction model for the vehicle image; judging whether the middle edge is visible or not according to the output value of the output item representing the position of the middle edge of the vehicle; if the middle edge is determined to be visible, determining the target vehicle direction according to the output values of the output items corresponding to the preset vehicle directions in which the middle edge is visible; if the middle edge is determined to be invisible, determining the target vehicle direction according to the output values of the output items corresponding to the preset vehicle directions in which the middle edge is invisible; and determining the position of the vehicle side edge according to the target vehicle direction. In the embodiment of the invention, the unreasonable situation of conventional prediction schemes, in which all four side edges of the vehicle may be predicted to be visible at the same time, cannot occur.

Description

Vehicle side edge determining method and device
Technical Field
The present invention relates to the field of computer technologies, and in particular, to a vehicle side edge determining method and a vehicle side edge determining apparatus.
Background
By predicting the side edges of a vehicle, its yaw angle can be accurately estimated from an image captured by a monocular camera; the yaw-angle prediction is therefore very important to an autonomous-driving perception system.
However, conventional side edge prediction schemes suffer from inaccurately predicted side edge positions and erroneously predicted side edges.
Disclosure of Invention
In view of the above, embodiments of the present invention have been made to provide a vehicle side edge determining method and a corresponding vehicle side edge determining apparatus that overcome or at least partially solve the above-described problems.
In order to solve the above problem, an embodiment of the present invention discloses a vehicle side edge determining method, including:
obtaining a vehicle image and a pre-trained prediction model; the predictive model includes a plurality of output items; the plurality of output items include a plurality of output items respectively representing a plurality of preset vehicle directions, and an output item representing a position of a middle edge of the vehicle; the plurality of preset vehicle directions are divided into a preset vehicle direction corresponding to the middle edge being visible and a preset vehicle direction corresponding to the middle edge being invisible;
acquiring output values of a plurality of output items output by the prediction model to the vehicle image;
judging whether the middle edge is visible or not according to the output value of the output item representing the position of the middle edge of the vehicle;
if the middle edge is determined to be visible, determining the direction of the target vehicle according to the output value of the output item corresponding to the preset vehicle direction in which the middle edge is visible;
if the middle edge is determined to be invisible, determining the direction of the target vehicle according to the output value of the output item corresponding to the preset vehicle direction in which the middle edge is invisible;
and determining the position of the vehicle side edge according to the target vehicle direction.
Optionally, the judging whether the middle edge is visible according to the output value of the output item representing the position of the middle edge of the vehicle includes:
determining that the middle edge is invisible if the output value of the output item representing the position of the middle edge does not have a numerical value;
determining that the middle edge is visible if the output value of the output item representing the position of the middle edge has a numerical value.
Optionally, the determining the position of the vehicle side edge according to the target vehicle direction includes:
acquiring vehicle surrounding frame information of the vehicle image; the vehicle surrounding frame information includes the abscissa values of the surrounding frame;
determining the maximum abscissa value and the minimum abscissa value among the abscissa values of the surrounding frame;
and determining the position of the vehicle side edge in the vehicle image according to the target vehicle direction and the maximum and minimum abscissa values.
Optionally, the vehicle image is a vehicle image without a surrounding frame; the acquiring vehicle surrounding frame information of the vehicle image comprises:
acquiring the surrounding frame information of the vehicle image output by the prediction model.
Optionally, the vehicle image is a cropped vehicle image within an enclosure; the vehicle surrounding frame information is boundary position information of the vehicle image.
Optionally, the plurality of preset vehicle directions include forward, backward, left, right, left-forward, right-forward, left-backward and right-backward; the left-forward, right-forward, left-backward and right-backward directions belong to the preset vehicle directions in which the middle edge is visible; the forward, backward, left and right directions belong to the preset vehicle directions in which the middle edge is invisible.
Optionally, the method further comprises:
training the predictive model by:
obtaining a vehicle image sample and a predictive model, the predictive model comprising a plurality of output items; the plurality of output items include a plurality of output items respectively representing a plurality of preset vehicle directions, and an output item representing a position of a middle edge of the vehicle; the plurality of preset vehicle directions are divided into a preset vehicle direction corresponding to the middle edge being visible and a preset vehicle direction corresponding to the middle edge being invisible;
obtaining output values of a plurality of output items output by the prediction model to the vehicle image sample;
calculating a first loss value according to corresponding output values for the plurality of output items respectively representing a plurality of preset vehicle directions;
for the output item representing the position of the middle edge, calculating a second loss value according to the corresponding output value;
and training the prediction model according to the first loss value and the second loss value.
Optionally, the calculating, for a plurality of output items respectively representing a plurality of preset vehicle directions, a first loss value according to corresponding output values includes:
for a plurality of output items respectively representing a plurality of preset vehicle directions, a first loss value is calculated using a preset first loss function and the corresponding output value.
Optionally, the vehicle image sample has an annotated vehicle direction; the calculating, for the output item representing the position of the middle edge, a second loss value according to the corresponding output value includes:
for the output item representing the position of the middle edge, if the annotated vehicle direction belongs to a preset vehicle direction in which the middle edge is visible, calculating the second loss value using a preset second loss function and the corresponding output value;
for the output item representing the position of the middle edge, if the annotated vehicle direction belongs to a preset vehicle direction in which the middle edge is invisible, determining the second loss value to be 0.
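The masked loss scheme above can be sketched as follows. This is an illustrative sketch, not the patent's implementation: the first loss is taken here as cross-entropy over the 8 direction output items and the second loss as squared error on the middle-edge position (the patent leaves both loss functions unspecified), and the assumption that indices 4-7 denote the middle-edge-visible directions is ours.

```python
import numpy as np

# Assumption: direction indices 4-7 are the middle-edge-visible (diagonal) directions.
MIDDLE_EDGE_VISIBLE = {4, 5, 6, 7}

def direction_loss(logits, label):
    """First loss: cross-entropy over the 8 preset-direction output items."""
    exp = np.exp(logits - logits.max())  # subtract max for numerical stability
    probs = exp / exp.sum()
    return -np.log(probs[label])

def middle_edge_loss(pred_x, true_x, direction_label):
    """Second loss: regression on the middle-edge position, forced to 0 when
    the annotated direction has no visible middle edge."""
    if direction_label not in MIDDLE_EDGE_VISIBLE:
        return 0.0
    return (pred_x - true_x) ** 2  # squared error is an illustrative choice

def total_loss(logits, direction_label, pred_x, true_x):
    return direction_loss(logits, direction_label) + middle_edge_loss(pred_x, true_x, direction_label)
```

Masking the second loss to 0 for middle-edge-invisible samples means the position head receives no gradient from images where no middle edge exists, matching the training rule described above.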
The embodiment of the invention also discloses a vehicle side edge determining device, which comprises:
the prediction content acquisition module is used for acquiring a vehicle image and a pre-trained prediction model; the predictive model includes a plurality of output items; the plurality of output items include a plurality of output items respectively representing a plurality of preset vehicle directions, and an output item representing a position of a middle edge of the vehicle; the plurality of preset vehicle directions are divided into a preset vehicle direction corresponding to the middle edge being visible and a preset vehicle direction corresponding to the middle edge being invisible;
a first output value acquisition module for acquiring output values of a plurality of output items output to the vehicle image by the prediction model;
the middle edge judging module is used for judging whether the middle edge is visible or not according to the output value of the output item representing the position of the middle edge of the vehicle;
the first vehicle direction determining module is used for determining the direction of the target vehicle according to the output value of the output item corresponding to the preset vehicle direction in which the middle edge is visible if the middle edge is determined to be visible;
the second vehicle direction determining module is used for determining the target vehicle direction according to the output value of the output item corresponding to the preset vehicle direction in which the middle edge is invisible if the middle edge is determined to be invisible;
and the side edge position determining module is used for determining the position of the vehicle side edge according to the target vehicle direction.
Optionally, the middle edge determining module includes:
an invisibility determination submodule for determining that the middle edge is invisible if the output value of the output item representing the position of the middle edge does not have a numerical value;
a visibility determination submodule for determining that the middle edge is visible if the output value of the output item representing the position of the middle edge has a numerical value.
Optionally, the side edge position determining module includes:
the surrounding frame information acquisition submodule is used for acquiring vehicle surrounding frame information of the vehicle image; the vehicle surrounding frame information includes the abscissa values of the surrounding frame;
the coordinate value determination submodule is used for determining the maximum abscissa value and the minimum abscissa value among the abscissa values of the surrounding frame;
and the side edge position determination submodule is used for determining the position of the vehicle side edge in the vehicle image according to the target vehicle direction and the maximum and minimum abscissa values.
Optionally, the vehicle image is a vehicle image without a surrounding frame; the enclosure frame information acquisition submodule includes:
and an enclosure information acquisition unit configured to acquire enclosure information of the vehicle image output by the prediction model.
Optionally, the vehicle image is a cropped vehicle image within an enclosure; the vehicle surrounding frame information is boundary position information of the vehicle image.
Optionally, the plurality of preset vehicle directions include forward, backward, left, right, left-forward, right-forward, left-backward and right-backward; the left-forward, right-forward, left-backward and right-backward directions belong to the preset vehicle directions in which the middle edge is visible; the forward, backward, left and right directions belong to the preset vehicle directions in which the middle edge is invisible.
Optionally, the apparatus further comprises the following modules for training the prediction model:
a training content acquisition module for acquiring vehicle image samples and a prediction model, the prediction model comprising a plurality of output items; the plurality of output items include a plurality of output items respectively representing a plurality of preset vehicle directions, and an output item representing a position of a middle edge of the vehicle; the plurality of preset vehicle directions are divided into a preset vehicle direction corresponding to the middle edge being visible and a preset vehicle direction corresponding to the middle edge being invisible;
a second output value obtaining module, configured to obtain output values of a plurality of output items output by the prediction model for the vehicle image sample;
the first loss value calculation module is used for calculating a first loss value according to corresponding output values for the output items respectively representing the preset vehicle directions;
the second loss value calculation module is used for calculating a second loss value according to the corresponding output value for the output item which represents the position of the middle edge;
and the training module is used for training the prediction model according to the first loss value and the second loss value.
Optionally, the first loss value calculation module includes:
a first loss value calculation submodule for calculating, for the plurality of output items respectively representing the plurality of preset vehicle directions, the first loss value using a preset first loss function and the corresponding output values.
Optionally, the vehicle image sample has an annotated vehicle direction; the second loss value calculation module includes:
a second loss value calculation submodule for calculating, for the output item representing the position of the middle edge, the second loss value using a preset second loss function and the corresponding output value if the annotated vehicle direction belongs to a preset vehicle direction in which the middle edge is visible;
a second loss value determination submodule for determining, for the output item representing the position of the middle edge, the second loss value to be 0 if the annotated vehicle direction belongs to a preset vehicle direction in which the middle edge is invisible.
The embodiment of the invention also discloses an electronic device, which comprises: a processor, a memory and a computer program stored on the memory and capable of running on the processor, which computer program, when executed by the processor, carries out the steps of the vehicle side edge determining method as described above.
An embodiment of the present invention further discloses a computer-readable storage medium, on which a computer program is stored, which, when executed by a processor, implements the steps of the vehicle side edge determining method as described above.
The embodiment of the invention has the following advantages:
in the embodiment of the invention, the vehicle image and the pre-trained prediction model can be obtained; the prediction model includes a plurality of output items; the plurality of output items include a plurality of output items respectively representing a plurality of preset vehicle directions, and an output item representing the position of the middle edge of the vehicle; the plurality of preset vehicle directions are divided into preset vehicle directions in which the middle edge is visible and preset vehicle directions in which the middle edge is invisible; the output values of the plurality of output items output by the prediction model for the vehicle image can be acquired; whether the middle edge of the vehicle is visible can be judged according to the output item representing the position of the middle edge; then, the target vehicle direction is determined, for the case where the middle edge is visible or invisible, from the plurality of output items respectively representing the plurality of preset vehicle directions, and the position of the vehicle side edge is determined according to the target vehicle direction. Because the prediction model does not use output items indicating whether each of the four side edges exists, the unreasonable situation of conventional prediction schemes, in which all four side edges of the vehicle are predicted to be visible at the same time, is avoided.
Drawings
FIG. 1 is a flow chart of steps in a method for vehicle side edge determination according to an embodiment of the present invention;
FIG. 2 is a schematic representation of an image of a vehicle in an embodiment of the present invention;
FIG. 3 is a schematic illustration of a preset vehicle direction in an embodiment of the present invention;
FIG. 4 is a flowchart illustrating steps in a method for determining a vehicle side edge according to an embodiment of the present invention;
FIG. 5 is a flowchart illustrating steps of a predictive model training method according to an embodiment of the present invention;
fig. 6 is a block diagram of a vehicle side edge determining apparatus according to an embodiment of the present invention.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in further detail below.
In one side edge prediction scheme, a prediction model may be used to predict side edges in a vehicle image. The prediction model comprises four output items which respectively represent whether four side edges exist or not and four output items which respectively represent position information of the four side edges.
This prediction scheme has the following problems. First, the predicted positions of the left and right side edges may not coincide with the surrounding frame of the vehicle. Second, predictions may arise that cannot occur in the real world: although a camera can observe at most three side edges of a vehicle from any angle, the four output items indicating whether the side edges exist may all indicate that a side edge is present.
Therefore, the embodiment of the invention provides a vehicle side edge determining method, which can prevent the predicted left and right side edges from generating unreasonable prediction results and can ensure that the predicted left and right side edges are matched with the surrounding frame.
Referring to fig. 1, a flowchart illustrating steps of a method for determining a vehicle side edge according to an embodiment of the present invention is shown, where the method may specifically include the following steps:
step 101, obtaining a vehicle image and a pre-trained prediction model; the predictive model includes a plurality of output items; the plurality of output items include a plurality of output items respectively representing a plurality of preset vehicle directions, and an output item representing a position of a middle edge of the vehicle; the plurality of preset vehicle directions are divided into a preset vehicle direction corresponding to the middle edge being visible and a preset vehicle direction corresponding to the middle edge being invisible.
In the embodiment of the present invention, the vehicle image refers to an image including a vehicle. In the vehicle image, the vehicle direction may be a vehicle orientation that can be observed. Fig. 2 is a schematic diagram of an image of a vehicle according to an embodiment of the present invention, wherein the vehicle may be oriented in a right-forward direction.
Assuming that vehicles on the road are all roughly rectangular parallelepipeds in shape, the vehicle in the vehicle image may be modeled as a rectangular parallelepiped, and the smallest rectangular parallelepiped that can surround the vehicle (excluding the rear-view mirrors) is selected as the surrounding frame in three-dimensional space. The four vertical edges of this rectangular parallelepiped can be regarded as the four side edges of the vehicle. The orientations of the side edges of the vehicle can be classified into four types: the left-rear edge, the left-front edge, the right-rear edge and the right-front edge.
The visibility of the side edges of the vehicle falls into two cases: in the first, only two side edges can be observed in the vehicle image; in the second, three side edges can be observed. If only two side edges are observed, the vehicle image can be considered to have no middle edge. If three side edges are observed, the edge in the middle of the three is considered the middle edge.
The side edges may be predicted by a prediction model, and the tensor associated with the side edges in the prediction model may include a plurality of output items, each of which may be represented by one bit. These may include a plurality of output items respectively representing a plurality of preset vehicle directions, and an output item representing the position of the middle edge of the vehicle. Because the prediction model does not use output items indicating whether each of the four side edges exists, the unreasonable situation of conventional prediction schemes, in which all four side edges of the vehicle are predicted to be visible at the same time, cannot occur.
In the embodiment of the invention, a plurality of preset vehicle directions may be provided, divided into preset vehicle directions in which the middle edge is visible and preset vehicle directions in which the middle edge is invisible. Specifically, in some preset vehicle directions only two side edges of the vehicle can be observed, so the middle edge is invisible; in the other preset vehicle directions three side edges can be observed, so the middle edge is visible.
In an embodiment of the present invention, the plurality of preset vehicle directions include forward, backward, left, right, left-forward, right-forward, left-backward and right-backward. If the vehicle direction is left-forward, right-forward, left-backward or right-backward, three side edges can be observed in the vehicle image, so these directions belong to the preset vehicle directions in which the middle edge is visible. If the vehicle direction is forward, backward, left or right, only two side edges can be observed, so these directions belong to the preset vehicle directions in which the middle edge is invisible. Referring to fig. 3, a schematic diagram of the preset vehicle directions in the embodiment of the invention is shown, in which bits 0-7 are used to represent the 8 output items for predicting the vehicle direction, respectively.
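The grouping of the 8 preset directions can be sketched as below. The list order is an illustrative assumption; the actual bit assignment of fig. 3 is not reproduced here.

```python
# Illustrative encoding of the 8 preset vehicle directions as output-item
# indices (bits 0-7); the concrete ordering is an assumption, not the patent's.
DIRECTIONS = ["forward", "backward", "left", "right",
              "left-forward", "right-forward", "left-backward", "right-backward"]

def middle_edge_visible(direction: str) -> bool:
    """Diagonal directions expose three side edges, so the middle edge is visible;
    axis-aligned directions expose only two, so it is not."""
    return direction in {"left-forward", "right-forward", "left-backward", "right-backward"}
```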
Step 102, acquiring output values of a plurality of output items output by the prediction model for the vehicle image.
In the embodiment of the present invention, the plurality of output items respectively representing the plurality of preset vehicle directions may each be represented using a binary bit. The output item representing the position of the middle edge of the vehicle may be represented using a floating-point value.
Step 103, judging whether the middle edge is visible or not according to the output value of the output item representing the position of the middle edge of the vehicle.
In the embodiment of the present invention, the output value of the output item representing the position of the middle edge of the vehicle may indicate both the position of the middle edge and whether the middle edge is present. If the output item produces no numerical value, there is no middle edge; if it has a numerical value, that value indicates the position of the middle edge.
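A minimal sketch of this visibility check, assuming an absent output is surfaced as None or NaN; the patent does not specify the in-memory representation, so both the function name and that convention are illustrative.

```python
import math

def middle_edge_state(output_value):
    """Return (visible, position) from the middle-edge output item.
    No numerical value (None or NaN) means the middle edge is invisible;
    any numerical value is read as the middle edge's position."""
    if output_value is None or (isinstance(output_value, float) and math.isnan(output_value)):
        return False, None
    return True, float(output_value)
```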
Step 104, if the middle edge is determined to be visible, determining the target vehicle direction according to the output values of the output items corresponding to the preset vehicle directions in which the middle edge is visible.
The output values of the output items corresponding to the preset vehicle directions in which the middle edge is visible may be compared, and the preset vehicle direction whose output item has the largest output value, provided it is greater than a preset threshold, may be determined as the target vehicle direction.
In one example, the output values of the plurality of output items respectively representing the plurality of preset vehicle directions may be normalized using the normalized exponential (softmax) function, which converts the individual output values into a probability distribution in [0, 1] summing to 1. Assuming there are 8 output items for the preset vehicle directions and the probabilities obtained after normalization are [0.9, 0.1, 0, 0, 0, 0, 0, 0], the first output value is the largest; if the vehicle direction of the first output item corresponds to the middle edge being visible, the vehicle direction of the first output item is determined as the target vehicle direction.
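The normalization and direction selection described above can be sketched as follows; the 0.5 threshold value and the function names are illustrative assumptions, not taken from the patent.

```python
import numpy as np

def softmax(values):
    """Normalized exponential: converts raw output values into a
    probability distribution in [0, 1] summing to 1."""
    exp = np.exp(values - np.max(values))  # subtract max for numerical stability
    return exp / exp.sum()

def pick_direction(outputs, candidate_indices, threshold=0.5):
    """Among the candidate directions (those whose middle-edge visibility
    matches the observed state), pick the one with the largest probability,
    provided it exceeds the preset threshold; otherwise return None."""
    probs = softmax(np.asarray(outputs, dtype=float))
    best = max(candidate_indices, key=lambda i: probs[i])
    return best if probs[best] > threshold else None
```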
Step 105, if the middle edge is determined to be invisible, determining the target vehicle direction according to the output values of the output items corresponding to the preset vehicle directions in which the middle edge is invisible.
The output values of the output items corresponding to the preset vehicle directions in which the middle edge is invisible may be compared, and the preset vehicle direction whose output item has the largest output value, provided it is greater than a preset threshold, may be determined as the target vehicle direction.
Step 106, determining the position of the vehicle side edge according to the target vehicle direction.
In the embodiment of the invention, the vehicle image and the pre-trained prediction model can be obtained; the prediction model includes a plurality of output items; the plurality of output items include a plurality of output items respectively representing a plurality of preset vehicle directions, and an output item representing the position of the middle edge of the vehicle; the plurality of preset vehicle directions are divided into preset vehicle directions in which the middle edge is visible and preset vehicle directions in which the middle edge is invisible; the output values of the plurality of output items output by the prediction model for the vehicle image may be acquired; whether the middle edge of the vehicle is visible can be judged according to the output item representing the position of the middle edge; then, the target vehicle direction is determined, for the case where the middle edge is visible or invisible, from the plurality of output items respectively representing the plurality of preset vehicle directions, and the position of the vehicle side edge is determined according to the target vehicle direction. Because the prediction model does not use output items indicating whether each of the four side edges exists, the unreasonable situation of conventional prediction schemes, in which all four side edges of the vehicle are predicted to be visible at the same time, cannot occur.
Referring to fig. 4, a flowchart illustrating steps of a method for determining a vehicle side edge according to an embodiment of the present invention is shown, where the method may specifically include the following steps:
step 401, obtaining a vehicle image and a pre-trained prediction model; the predictive model includes a plurality of output items; the plurality of output items include a plurality of output items respectively representing a plurality of preset vehicle directions, and an output item representing a position of a middle edge of the vehicle; the plurality of preset vehicle directions are divided into a preset vehicle direction corresponding to the middle edge being visible and a preset vehicle direction corresponding to the middle edge being invisible.
Step 402, obtaining output values of a plurality of output items output by the prediction model to the vehicle image.
And step 403, judging whether the middle edge is visible or not according to the output value of the output item representing the position of the vehicle.
In this embodiment of the present invention, the step 403 may include: determining that the middle edge is invisible if the output value of the output item representing the position of the middle edge of the vehicle does not have a numerical value; and determining that the middle edge is visible if the output value of that output item does have a numerical value.
And step 404, if the middle edge is determined to be visible, determining the target vehicle direction according to the output value of the output item corresponding to the preset vehicle direction in which the middle edge is visible.
Step 405, if it is determined that the intermediate edge is not visible, determining the target vehicle direction according to the output value of the output item corresponding to the preset vehicle direction in which the intermediate edge is not visible.
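Steps 403 to 406 can be sketched as follows; the direction names, the dict layout, and the use of `None` to signal a missing middle-edge output are illustrative assumptions, not details fixed by the patent:

```python
# Hypothetical sketch of steps 403-406. The grouping of the eight preset
# directions follows the embodiment's description; everything else is assumed.
MIDDLE_EDGE_VISIBLE = ("left_front", "right_front", "left_rear", "right_rear")
MIDDLE_EDGE_INVISIBLE = ("forward", "backward", "left", "right")

def pick_target_direction(direction_scores, middle_edge_x):
    """direction_scores: dict mapping each preset direction to its output value.
    middle_edge_x: output value of the middle-edge position item, or None
    when that output has no numerical value."""
    # Step 403: the middle edge is visible iff its position output has a value.
    candidates = MIDDLE_EDGE_VISIBLE if middle_edge_x is not None else MIDDLE_EDGE_INVISIBLE
    # Steps 404/405: pick the highest-scoring direction within the chosen group.
    return max(candidates, key=lambda d: direction_scores[d])
```

Restricting the argmax to one group of directions is what prevents an invisible-middle-edge direction from being chosen when the middle edge was in fact localized, and vice versa.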
Step 406, obtaining vehicle surrounding frame information of the vehicle image; the vehicle surrounding frame information includes an abscissa value of the surrounding frame.
The vehicle enclosure frame may refer to a frame determined after modeling a rectangular parallelepiped of a vehicle in the vehicle image.
In one embodiment of the present invention, the vehicle image is a vehicle image without a surrounding frame; the step 406 may comprise: and acquiring surrounding frame information of the vehicle image output by the prediction model.
In this embodiment, the predictive model may be trained using images of the vehicle without the bounding box, and the side edges may be trained together as an attribute of the bounding box. The vehicle image can be an unprocessed image acquired by the camera.
In another embodiment of the present invention, the vehicle image may be a vehicle image within a bounding box that is cropped; the vehicle surrounding frame information is boundary position information of the vehicle image.
In this embodiment, the vehicle image may be a cropped image, for example, an enclosure frame may be generated for the original image captured by the camera, and the vehicle image within the enclosure frame may be cropped. The prediction model can be obtained by training by using the vehicle image obtained by cutting.
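A minimal sketch of this cropping step, assuming the original camera image is held as a row-major array of pixel rows and the surrounding frame is given in pixel coordinates (both assumptions; the patent does not specify a data layout):

```python
def crop_to_bounding_box(image, box):
    """Crop the camera image to the vehicle surrounding frame.
    image: 2-D row-major list of pixel rows (illustrative layout).
    box: (x0, y0, x1, y1) pixel coordinates of the surrounding frame."""
    x0, y0, x1, y1 = box
    # Keep only the rows and columns inside the surrounding frame.
    return [row[x0:x1] for row in image[y0:y1]]
```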
Step 407, determining the maximum and minimum abscissa values of the bounding box.
And step 408, determining the positions of the left side edge and the right side edge of the vehicle in the vehicle image according to the direction of the target vehicle and the maximum abscissa value and the minimum abscissa value.
The left side edges of the vehicle may include a left front edge and a left rear edge; the right side edges may include a right front edge and a right rear edge. Which side edges are displayed in the vehicle image can be determined according to the target vehicle direction, and the positions of the displayed side edges can then be determined. For example, if the target vehicle direction is forward, a left rear edge and a right rear edge are displayed in the vehicle image; the abscissa of the left rear edge may be the minimum abscissa value of the bounding box, and the abscissa of the right rear edge may be the maximum abscissa value of the bounding box.
If the target vehicle direction is left, a left front edge and a left rear edge are displayed in the vehicle image, the abscissa of the left front edge may be the minimum abscissa value of the bounding box, and the abscissa of the left rear edge may be the maximum abscissa value of the bounding box.
If the target vehicle direction is right front, a right rear edge, a left rear edge, and a right front edge are displayed in the vehicle image. The right rear edge is the middle edge, so its position can be obtained according to the output value of the output item representing the position of the middle edge of the vehicle; the abscissa of the left rear edge may be the minimum abscissa value of the bounding box, and the abscissa of the right front edge may be the maximum abscissa value of the bounding box.
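The mapping in the examples above, from the target vehicle direction and the bounding-box abscissas to side edge positions, can be sketched as follows (the function and edge names are illustrative, and only the three example directions from the text are filled in):

```python
def side_edge_positions(direction, x_min, x_max, middle_x=None):
    """Return {edge_name: abscissa} for the side edges visible in the image.
    x_min/x_max: minimum and maximum abscissa values of the bounding box.
    middle_x: middle-edge position output, used for oblique directions."""
    if direction == "forward":
        # Rear edges only: left rear at the minimum, right rear at the maximum.
        return {"left_rear": x_min, "right_rear": x_max}
    if direction == "left":
        return {"left_front": x_min, "left_rear": x_max}
    if direction == "right_front":
        # The middle edge (here the right rear edge) comes from the model's
        # middle-edge position output rather than the bounding box.
        return {"left_rear": x_min, "right_rear": middle_x, "right_front": x_max}
    raise NotImplementedError("the remaining directions follow the same pattern")
```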
In the embodiment of the invention, a vehicle image and a pre-trained prediction model can be obtained; the prediction model includes a plurality of output items; the plurality of output items include a plurality of output items respectively representing a plurality of preset vehicle directions, and an output item representing the position of a middle edge of the vehicle; the plurality of preset vehicle directions are divided into preset vehicle directions corresponding to the middle edge being visible and preset vehicle directions corresponding to the middle edge being invisible. Output values of the plurality of output items output by the prediction model for the vehicle image can be acquired; whether the middle edge of the vehicle is visible can be judged according to the output value of the output item representing the position of the middle edge; the target vehicle direction is then predicted for the case where the middle edge is visible or invisible according to the plurality of output items respectively representing the plurality of preset vehicle directions; vehicle surrounding frame information of the vehicle image is acquired, the vehicle surrounding frame information including abscissa values of the surrounding frame; the maximum abscissa value and the minimum abscissa value among the abscissa values of the surrounding frame are determined; and the positions of the vehicle side edges in the vehicle image are determined according to the target vehicle direction and the maximum and minimum abscissa values. Because the prediction model does not use output items indicating whether each of the four side edges exists, the unreasonable situation in conventional prediction schemes, in which all four side edges of the vehicle are predicted to be visible at the same time, cannot occur.
Furthermore, the position of the side edge in the vehicle image can be directly determined from the vehicle surrounding frame information, and the position of the side edge is ensured to be matched with the vehicle surrounding frame.
Referring to fig. 5, a flowchart illustrating steps of a predictive model training method according to an embodiment of the present invention is shown, where the method specifically includes the following steps:
step 501, obtaining a vehicle image sample and a prediction model, wherein the prediction model comprises a plurality of output items; the plurality of output items include a plurality of output items respectively representing a plurality of preset vehicle directions, and an output item representing a position of a middle edge of the vehicle; the plurality of preset vehicle directions are divided into a preset vehicle direction corresponding to the middle edge being visible and a preset vehicle direction corresponding to the middle edge being invisible.
The vehicle image samples may be pre-processed vehicle images used to train the predictive model.
Step 502, obtaining output values of a plurality of output items output by the prediction model to the vehicle image sample.
Step 503, calculating a first loss value according to the corresponding output values for the plurality of output items respectively representing the plurality of preset vehicle directions.
The loss value may be calculated using different calculation methods for the plurality of output items respectively representing the plurality of preset vehicle directions and for the output item representing the position of the intermediate edge.
In this embodiment of the present invention, the step 503 may include: for a plurality of output items respectively representing a plurality of preset vehicle directions, a first loss value is calculated using a preset first loss function and the corresponding output value.
In the embodiment of the present invention, the plurality of output items respectively representing the plurality of preset vehicle directions may be represented using binary bits. For binary represented output values, a first loss value may be calculated using a cross entropy function.
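As a hedged illustration of this first-loss computation, the sketch below treats the direction outputs as raw logits and applies a softmax cross entropy against the annotated direction. The patent only states that a cross entropy function is used, so this exact formulation is an assumption:

```python
import math

def direction_cross_entropy(logits, target_index):
    """First loss (step 503): cross entropy between the direction output
    values and the annotated direction.
    logits: list of raw output values, one per preset direction (assumed).
    target_index: index of the annotated vehicle direction."""
    # Numerically stable log-sum-exp for the softmax normalizer.
    m = max(logits)
    log_sum = m + math.log(sum(math.exp(z - m) for z in logits))
    # Cross entropy with a one-hot target reduces to -log softmax(target).
    return log_sum - logits[target_index]
```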
And step 504, calculating a second loss value according to the corresponding output value for the output item representing the position of the middle edge.
In an embodiment of the invention, the vehicle image sample has an annotated vehicle direction; the annotated vehicle direction is the vehicle direction manually labeled by an annotator.
The step 504 may comprise the sub-steps of:
and a substep S11, for the output item representing the position of the middle edge, calculating a second loss value using a preset second loss function and the corresponding output value if the noted vehicle direction belongs to a preset vehicle direction visible corresponding to the middle edge.
The output item representing the position of the middle edge of the vehicle may be represented using a floating-point value, and if the annotated vehicle direction belongs to a preset vehicle direction for which the middle edge is visible, an L2 loss function may be used to calculate the loss value for the middle edge position.
A substep S12, for the output item representing the position of the intermediate edge, determining a second loss value to be 0 if the marked vehicle direction belongs to a preset vehicle direction corresponding to the intermediate edge not being visible.
If the annotated vehicle direction belongs to a preset vehicle direction corresponding to the middle edge being invisible, calculating a loss value for the middle edge position is meaningless, so the second loss value can be determined to be 0.
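Sub-steps S11 and S12 together amount to a masked L2 loss, which can be sketched as follows (function and argument names are illustrative):

```python
def middle_edge_loss(pred_x, label_x, middle_edge_visible):
    """Second loss (step 504): L2 loss on the middle-edge position, applied
    only when the annotated direction implies the middle edge is visible."""
    if not middle_edge_visible:
        # Sub-step S12: no meaningful position target, so the loss is 0.
        return 0.0
    # Sub-step S11: squared (L2) error against the annotated position.
    return (pred_x - label_x) ** 2
```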
Step 505, training the prediction model according to the first loss value and the second loss value.
A total loss value may be calculated from the first loss value and the second loss value, and the total loss value may be back-propagated through the prediction model to update the parameters of the prediction model. In one example, the first loss value and the second loss value may each be multiplied by a corresponding weight, and the sum of the two weighted values may be used as the total loss value.
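The weighted combination described here can be sketched as (the weight values are illustrative hyperparameters, not fixed by the patent):

```python
def combined_loss(first_loss, second_loss, w_dir=1.0, w_edge=1.0):
    """Step 505's total loss: each loss scaled by its own weight, then summed.
    w_dir and w_edge are assumed hyperparameters balancing direction
    classification against middle-edge regression."""
    return w_dir * first_loss + w_edge * second_loss
```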
It should be noted that, for simplicity of description, the method embodiments are described as a series of acts, but those skilled in the art will recognize that the present invention is not limited by the described order of acts, as some steps may be performed in other orders or concurrently in accordance with the embodiments of the present invention. Further, those skilled in the art will appreciate that the embodiments described in the specification are preferred embodiments, and that the acts involved are not necessarily required to implement the invention.
Referring to fig. 6, a block diagram of a vehicle side edge determining apparatus according to an embodiment of the present invention is shown, and may specifically include the following modules:
a prediction content obtaining module 601, configured to obtain a vehicle image and a pre-trained prediction model; the predictive model includes a plurality of output items; the plurality of output items include a plurality of output items respectively representing a plurality of preset vehicle directions, and an output item representing a position of a middle edge of the vehicle; the plurality of preset vehicle directions are divided into a preset vehicle direction corresponding to the middle edge being visible and a preset vehicle direction corresponding to the middle edge being invisible;
a first output value obtaining module 602, configured to obtain output values of a plurality of output items output by the prediction model for the vehicle image;
a middle edge judgment module 603, configured to judge whether the middle edge is visible according to the output value of the output item representing the position of the middle edge of the vehicle;
a first vehicle direction determining module 604, configured to determine a target vehicle direction according to an output value of an output item corresponding to a preset vehicle direction in which a middle edge is visible if it is determined that the middle edge is visible;
a second vehicle direction determining module 605, configured to determine, if the middle edge is determined to be invisible, a target vehicle direction according to an output value of an output item corresponding to a preset vehicle direction in which the middle edge is invisible;
and a side edge position determining module 606, configured to determine a position of a vehicle side edge according to the target vehicle direction.
In an embodiment of the present invention, the middle edge determining module 603 may include:
an invisibility determination submodule, configured to determine that the middle edge is invisible if the output value of the output item representing the position of the middle edge of the vehicle does not have a numerical value;
a visibility determination submodule, configured to determine that the middle edge is visible if the output value of the output item representing the position of the middle edge of the vehicle has a numerical value.
In an embodiment of the present invention, the side edge position determining module 606 may include:
the surrounding frame information acquisition submodule is used for acquiring vehicle surrounding frame information of the vehicle image; the vehicle surrounding frame information includes an abscissa value of a surrounding frame;
a coordinate value determination submodule for determining a maximum abscissa value and a minimum abscissa value of the abscissa values of the bounding box;
and the side edge position determining submodule is used for determining the position of the side edge of the vehicle in the vehicle image according to the direction of the target vehicle and the maximum abscissa value and the minimum abscissa value.
In the embodiment of the present invention, the vehicle image is a vehicle image without a surrounding frame; the enclosure information obtaining sub-module may include:
and an enclosure information acquisition unit configured to acquire enclosure information of the vehicle image output by the prediction model.
In the embodiment of the invention, the vehicle image is a clipped vehicle image in the surrounding frame; the vehicle surrounding frame information is boundary position information of the vehicle image.
In an embodiment of the present invention, the plurality of preset vehicle directions include a forward direction, a backward direction, a left direction, a right direction, a left front direction, a right front direction, a left rear direction, and a right rear direction; the left front, right front, left rear, and right rear directions belong to the preset vehicle directions for which the middle edge is visible; the forward, backward, left, and right directions belong to the preset vehicle directions for which the middle edge is invisible.
In an embodiment of the present invention, the prediction model is trained by:
a training content acquisition module for acquiring vehicle image samples and a prediction model, the prediction model comprising a plurality of output items; the plurality of output items include a plurality of output items respectively representing a plurality of preset vehicle directions, and an output item representing a position of a middle edge of the vehicle; the plurality of preset vehicle directions are divided into a preset vehicle direction corresponding to the middle edge being visible and a preset vehicle direction corresponding to the middle edge being invisible;
a second output value obtaining module, configured to obtain output values of a plurality of output items output by the prediction model for the vehicle image sample;
the first loss value calculation module is used for calculating a first loss value according to corresponding output values for the output items respectively representing the preset vehicle directions;
the second loss value calculation module is used for calculating a second loss value according to the corresponding output value for the output item which represents the position of the middle edge;
and the training module is used for training the prediction model according to the first loss value and the second loss value.
In an embodiment of the present invention, the first loss value calculating module includes:
a first loss value calculation operator module for calculating a first loss value using a preset first loss function and corresponding output values for a plurality of output items respectively representing a plurality of preset vehicle directions.
In an embodiment of the invention, the vehicle image sample has an annotated vehicle direction; the second loss value calculation module includes:
a second loss value calculation operator module for calculating, for the output item representing the position of the middle edge, a second loss value using a preset second loss function and the corresponding output value if the marked vehicle direction belongs to a preset vehicle direction visible corresponding to the middle edge;
a second loss value determination submodule for determining, for the output item representing the position of the intermediate edge, a second loss value to be 0 if the marked vehicle direction belongs to a preset vehicle direction corresponding to the intermediate edge which is not visible.
In the embodiment of the invention, a vehicle image and a pre-trained prediction model can be obtained; the prediction model includes a plurality of output items; the plurality of output items include a plurality of output items respectively representing a plurality of preset vehicle directions, and an output item representing the position of a middle edge of the vehicle; the plurality of preset vehicle directions are divided into preset vehicle directions corresponding to the middle edge being visible and preset vehicle directions corresponding to the middle edge being invisible. Output values of the plurality of output items output by the prediction model for the vehicle image can be acquired, and whether the middle edge of the vehicle is visible can be judged according to the output value of the output item representing the position of the middle edge. Then, according to the plurality of output items respectively representing the plurality of preset vehicle directions, the target vehicle direction is predicted for the case where the middle edge is visible or invisible, and the position of the vehicle side edge is determined according to the target vehicle direction. Because the prediction model does not use output items indicating whether each of the four side edges exists, the unreasonable situation in conventional prediction schemes, in which all four side edges of the vehicle are predicted to be visible at the same time, cannot occur.
For the device embodiment, since it is basically similar to the method embodiment, the description is simple, and for the relevant points, refer to the partial description of the method embodiment.
An embodiment of the present invention further provides an electronic device, including:
a processor, a memory, and a computer program stored in the memory and executable on the processor, wherein the computer program, when executed by the processor, implements each process of the above vehicle side edge determining method embodiments and can achieve the same technical effects; to avoid repetition, details are not described here again.
The embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program implements each process of the embodiment of the vehicle side edge determining method, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
The embodiments in the present specification are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, apparatus, or computer program product. Accordingly, embodiments of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
Embodiments of the present invention are described with reference to flowchart illustrations and/or block diagrams of methods, terminal devices (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing terminal to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing terminal, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing terminal to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing terminal to cause a series of operational steps to be performed on the computer or other programmable terminal to produce a computer implemented process such that the instructions which execute on the computer or other programmable terminal provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications of these embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the embodiments of the invention.
Finally, it should also be noted that, herein, relational terms such as first and second may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or terminal that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or terminal. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or terminal that comprises the element.
The present invention provides a vehicle side edge determining method and a vehicle side edge determining device, which are described in detail above, and the principle and the implementation of the present invention are explained herein by applying specific examples, and the description of the above examples is only used to help understand the method and the core idea of the present invention; meanwhile, for a person skilled in the art, according to the idea of the present invention, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present invention.

Claims (12)

1. A vehicle side edge determining method, characterized by comprising:
obtaining a vehicle image and a pre-trained prediction model; the predictive model includes a plurality of output items; the plurality of output items include a plurality of output items respectively representing a plurality of preset vehicle directions, and an output item representing a position of a middle edge of the vehicle; the plurality of preset vehicle directions are divided into a preset vehicle direction corresponding to the middle edge being visible and a preset vehicle direction corresponding to the middle edge being invisible; the intermediate edge is a side edge at an intermediate position among side edges of the vehicle observed in the vehicle image;
acquiring output values of a plurality of output items output by the prediction model to the vehicle image;
judging whether the middle edge is visible or not according to the output value of the output item representing the position of the middle edge of the vehicle;
if the middle edge is determined to be visible, determining the direction of the target vehicle according to the output value of the output item corresponding to the preset vehicle direction in which the middle edge is visible;
if the intermediate edge is determined to be invisible, determining the direction of the target vehicle according to the output value of the output item corresponding to the preset vehicle direction in which the intermediate edge is invisible;
determining the position of a vehicle side edge according to the target vehicle direction; the position of the side edge of the vehicle in the vehicle image is determined from the vehicle surrounding frame information of the vehicle image.
2. The method of claim 1, wherein determining whether a middle edge of the vehicle is visible based on the output value of the output item indicating the position of the middle edge comprises:
determining that the intermediate edge is not visible if the output value of the output item representing the position of the intermediate edge of the vehicle does not have a numerical value;
if the output value of the output item representing the position of the intermediate edge of the vehicle has a numerical value, the intermediate edge is determined to be visible.
3. The method of claim 1, wherein determining the location of a vehicle side edge based on the target vehicle direction comprises:
acquiring vehicle surrounding frame information of a vehicle image; the vehicle surrounding frame information includes an abscissa value of a surrounding frame;
determining a maximum abscissa value and a minimum abscissa value of the abscissa values of the bounding box;
and determining the position of the side edge of the vehicle in the vehicle image according to the direction of the target vehicle and the maximum and minimum abscissa values.
4. The method of claim 3, wherein the vehicle image is a vehicle image without a bounding box; the vehicle surrounding frame information for acquiring the vehicle image comprises:
and acquiring surrounding frame information of the vehicle image output by the prediction model.
5. The method according to claim 3, wherein the vehicle bounding box information is boundary position information of the vehicle image.
6. The method of claim 1, wherein the plurality of preset vehicle directions comprise forward, backward, left, right, left front, right front, left rear, and right rear directions; the left front, right front, left rear, and right rear directions belong to the preset vehicle directions for which the middle edge is visible; the forward, backward, left, and right directions belong to the preset vehicle directions for which the middle edge is invisible.
7. The method of claim 1, further comprising:
training the predictive model by:
obtaining a vehicle image sample and a predictive model, the predictive model comprising a plurality of output items; the plurality of output items include a plurality of output items respectively representing a plurality of preset vehicle directions, and an output item representing a position of a middle edge of the vehicle; the plurality of preset vehicle directions are divided into a preset vehicle direction corresponding to the middle edge being visible and a preset vehicle direction corresponding to the middle edge being invisible;
obtaining output values of a plurality of output items output by the prediction model to the vehicle image sample;
calculating a first loss value according to corresponding output values for the plurality of output items respectively representing a plurality of preset vehicle directions;
for the output item representing the position of the middle edge of the vehicle, calculating a second loss value according to the corresponding output value;
and training the prediction model according to the first loss value and the second loss value.
8. The method of claim 7, wherein calculating a first loss value from corresponding output values for a plurality of output items respectively representing a plurality of preset vehicle directions comprises:
for a plurality of output items respectively representing a plurality of preset vehicle directions, a first loss value is calculated using a preset first loss function and the corresponding output value.
9. The method of claim 7, wherein the vehicle image sample has an annotated vehicle direction; the calculating, for the output item representing the position of the intermediate edge of the vehicle, a second loss value from the corresponding output value includes:
for the output item representing the position of the middle edge of the vehicle, if the marked vehicle direction belongs to a preset vehicle direction visible corresponding to the middle edge, calculating a second loss value by using a preset second loss function and the corresponding output value;
for the output item representing the position of the intermediate edge of the vehicle, a second loss value of 0 is determined if the marked vehicle direction belongs to a preset vehicle direction corresponding to the intermediate edge being invisible.
10. A vehicle side edge determining apparatus, comprising:
a prediction content acquisition module, configured to acquire a vehicle image and a pre-trained prediction model, wherein the prediction model comprises a plurality of output items; the plurality of output items comprise a plurality of output items respectively representing a plurality of preset vehicle directions, and an output item representing a position of a middle edge of the vehicle; the plurality of preset vehicle directions are divided into preset vehicle directions in which the middle edge is visible and preset vehicle directions in which the middle edge is invisible; and the middle edge is the side edge at an intermediate position among the side edges of the vehicle observed in the vehicle image;
a first output value acquisition module, configured to acquire output values of the plurality of output items output by the prediction model for the vehicle image;
a middle edge judging module, configured to judge whether the middle edge is visible according to the output value of the output item representing the position of the middle edge of the vehicle;
a first vehicle direction determining module, configured to, if the middle edge is determined to be visible, determine a target vehicle direction according to the output values of the output items corresponding to the preset vehicle directions in which the middle edge is visible;
a second vehicle direction determining module, configured to, if the middle edge is determined to be invisible, determine the target vehicle direction according to the output values of the output items corresponding to the preset vehicle directions in which the middle edge is invisible; and
a side edge position determining module, configured to determine a position of a vehicle side edge according to the target vehicle direction, wherein the position of the side edge of the vehicle in the vehicle image is determined from vehicle bounding box information of the vehicle image.
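The inference flow of the apparatus in claim 10 can be sketched as below. The patent leaves the visibility test and the edge geometry to preset rules, so treating an edge position strictly inside the bounding box as "visible", and splitting the box at that position, are assumptions for illustration; all dictionary keys and parameter names are likewise hypothetical:

```python
def determine_side_edge(output_values, bbox, visible_dirs, invisible_dirs):
    """Mirror of the claim-10 modules: judge middle-edge visibility, pick the
    target direction from the matching subset of direction outputs, then
    derive side-edge positions from the direction and the bounding box."""
    left, top, right, bottom = bbox
    scores = output_values["direction_scores"]   # one score per preset direction
    edge_x = output_values["middle_edge_x"]      # predicted middle-edge position

    # Middle edge judging module: illustrative visibility rule.
    middle_visible = left < edge_x < right

    # First / second vehicle direction determining modules: restrict the
    # argmax to the directions consistent with the visibility decision.
    candidates = visible_dirs if middle_visible else invisible_dirs
    direction = max(candidates, key=lambda i: scores[i])

    # Side edge position determining module: with a visible middle edge the
    # box splits into two faces at edge_x; otherwise only the box sides remain.
    side_edges = [left, edge_x, right] if middle_visible else [left, right]
    return direction, side_edges
```

Gating the direction argmax by the visibility decision is what keeps the two heads consistent: a direction implying a visible middle edge can never be selected when the edge output says none is observable.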
11. An electronic device, comprising: a processor, a memory, and a computer program stored on the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the steps of the vehicle side edge determining method according to any one of claims 1 to 9.
12. A computer-readable storage medium, on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the vehicle side edge determining method according to any one of claims 1 to 9.
CN202110005040.5A 2021-01-04 2021-01-04 Vehicle side edge determining method and device Active CN112926378B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110005040.5A CN112926378B (en) 2021-01-04 2021-01-04 Vehicle side edge determining method and device

Publications (2)

Publication Number Publication Date
CN112926378A CN112926378A (en) 2021-06-08
CN112926378B true CN112926378B (en) 2022-06-28

Family

ID=76163285

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110005040.5A Active CN112926378B (en) 2021-01-04 2021-01-04 Vehicle side edge determining method and device

Country Status (1)

Country Link
CN (1) CN112926378B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108960015A (en) * 2017-05-24 2018-12-07 优信拍(北京)信息科技有限公司 Deep-learning-based automatic vehicle series identification method and device
CN111968071A (en) * 2020-06-29 2020-11-20 北京百度网讯科技有限公司 Method, device, equipment and storage medium for generating spatial position of vehicle

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10576974B2 (en) * 2015-06-29 2020-03-03 The Regents Of The University Of California Multiple-parts based vehicle detection integrated with lane detection for improved computational efficiency and robustness
CN109145928B (en) * 2017-06-16 2020-10-27 杭州海康威视数字技术股份有限公司 Method and device for identifying vehicle head orientation based on image
CN110929774B (en) * 2019-11-18 2023-11-14 腾讯科技(深圳)有限公司 Classification method, model training method and device for target objects in image
CN111081033B (en) * 2019-11-21 2021-06-01 北京百度网讯科技有限公司 Method and device for determining orientation angle of vehicle
CN112016532B (en) * 2020-10-22 2021-02-05 腾讯科技(深圳)有限公司 Vehicle detection method and device
CN112036389B (en) * 2020-11-09 2021-02-02 天津天瞳威势电子科技有限公司 Vehicle three-dimensional information detection method, device and equipment and readable storage medium

Similar Documents

Publication Publication Date Title
CN108961327B (en) Monocular depth estimation method and device, equipment and storage medium thereof
CN111161349B (en) Object posture estimation method, device and equipment
US8687001B2 (en) Apparatus and method extracting light and texture, and rendering apparatus using light and texture
CN110889464B (en) Neural network training method for detecting target object, and target object detection method and device
CN111627001B (en) Image detection method and device
CN109118532B (en) Visual field depth estimation method, device, equipment and storage medium
CN108124489B (en) Information processing method, apparatus, cloud processing device and computer program product
CN113011364B (en) Neural network training, target object detection and driving control method and device
CN115019273A (en) Target detection method and device, automobile and storage medium
CN115620022A (en) Object detection method, device, equipment and storage medium
CN115909268A (en) Dynamic obstacle detection method and device
CN112926378B (en) Vehicle side edge determining method and device
CN112784705A (en) Vehicle side edge determining method and device
CN114842287B (en) Monocular three-dimensional target detection model training method and device of depth-guided deformer
CN116229448A (en) Three-dimensional target detection method, device, equipment and readable storage medium
CN112693450B (en) Analysis method, analysis device, electronic equipment and analysis medium for optimizing automatic parking
CN112270695B (en) Method, device, equipment and storage medium for determining camera motion state
CN112364693B (en) Binocular vision-based obstacle recognition method, device, equipment and storage medium
CN116883770A (en) Training method and device of depth estimation model, electronic equipment and storage medium
CN114359859A (en) Method and device for processing target object with shielding and storage medium
EP2657907A1 (en) Image processing apparatus, image display apparatus, and image processing method
JP7372488B2 (en) Apparatus and method for modifying ground truth to examine the accuracy of machine learning models
CN115063594B (en) Feature extraction method and device based on automatic driving
CN115965944B (en) Target information detection method, device, driving device and medium
CN117173692B (en) 3D target detection method, electronic device, medium and driving device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20240305

Address after: 510000 No.8 Songgang street, Cencun, Tianhe District, Guangzhou City, Guangdong Province

Patentee after: GUANGZHOU XIAOPENG MOTORS TECHNOLOGY Co.,Ltd.

Country or region after: China

Address before: Room 46, room 406, No. 1, Yichuang street, Zhongxin knowledge city, Huangpu District, Guangzhou, Guangdong 510725

Patentee before: Guangzhou Xiaopeng Automatic Driving Technology Co.,Ltd.

Country or region before: China
