CN110246183B - Wheel grounding point detection method, device and storage medium


Info

Publication number
CN110246183B
Authority
CN
China
Prior art keywords
wheel
feature map
obstacle vehicle
point
image
Prior art date
Legal status
Active
Application number
CN201910552199.1A
Other languages
Chinese (zh)
Other versions
CN110246183A (en)
Inventor
邓逸安
Current Assignee
Baidu Online Network Technology Beijing Co Ltd
Original Assignee
Baidu Online Network Technology Beijing Co Ltd
Priority date
Filing date
Publication date
Application filed by Baidu Online Network Technology Beijing Co Ltd
Priority to CN201910552199.1A
Publication of CN110246183A
Application granted
Publication of CN110246183B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/13 Edge detection
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G06T2207/20084 Artificial neural networks [ANN]
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30248 Vehicle exterior or interior
    • G06T2207/30252 Vehicle exterior; Vicinity of vehicle
    • G06T2207/30261 Obstacle

Landscapes

  • Engineering & Computer Science
  • Computer Vision & Pattern Recognition
  • Physics & Mathematics
  • General Physics & Mathematics
  • Theoretical Computer Science
  • Traffic Control Systems
  • Image Analysis

Abstract

Embodiments of the present invention provide a wheel grounding point detection method, a wheel grounding point detection device, and a computer-readable storage medium. The wheel grounding point detection method includes the following steps: identifying the outer surrounding contour of at least one obstacle vehicle from an image to be detected; inputting the image within the outer surrounding contour of the obstacle vehicle into a neural network model, and determining the position of the wheel grounding point of the obstacle vehicle in the image to be detected using the output information of the neural network model. The position of the wheel grounding point determined by the embodiments of the present invention reflects the spatial position of the target vehicle more accurately and truly, and the requirement on the image of the target vehicle is low: the image need only contain at least one wheel. In addition, the embodiments of the present invention are applicable to all vehicles and most scene conditions, including truncated vehicles and overlong vehicles, and have good robustness.

Description

Wheel grounding point detection method, device and storage medium
Technical Field
The present invention relates to the field of information technology, and in particular, to a wheel grounding point detection method, device and computer-readable storage medium.
Background
In an autonomous driving scenario, when the host vehicle travels in its current lane, it needs to judge the positional relationship between itself and other vehicles and make appropriate control decisions accordingly to prevent collision accidents. For example, when a vehicle is detected at a short distance ahead in the lane, or vehicles at short distances on both sides are detected cutting into the lane, the host vehicle needs to decelerate in time. To obtain accurate distance and position information of other vehicles relative to the host vehicle, the accurate positions of the other vehicles must first be detected in the captured image to be detected, and the inter-vehicle distances are then estimated on that basis.
For the problem of vehicle position detection, the following schemes are currently adopted:
(1) Visual vehicle detection. An artificial neural network model detects a rectangular frame containing the obstacle vehicle in the image, and the coordinates of the midpoint of the bottom edge of the rectangular frame are taken as the position coordinates of the obstacle vehicle. Alternatively, the coordinates of the lower corner points of the obstacle vehicle's tail frame are taken as its position coordinates, where the tail frame is a rectangular frame containing the tail of the obstacle vehicle. After the position of the obstacle vehicle is detected, the distance between the host vehicle and the obstacle vehicle can be further estimated from that position. For example, in a visual vehicle detection scheme, the horizontal distance between the host vehicle and the obstacle vehicle may be estimated from the abscissa of the midpoint of the bottom edge of the rectangular frame.
(2) Radar distance detection. Radar equipment transmits radar signals to the surroundings and receives multiple radar reflection signals from obstacle vehicles; these signals carry the longitudinal and horizontal distance information of the object. Through processing steps such as fusing and selecting the reflection signals, the distance between the host vehicle and the target vehicle is calculated and the position of the target vehicle is determined.
The above schemes have the following defects:
(1) The defects of visual vehicle detection mainly include: (a) The position of the bottom-edge midpoint depends directly on the position estimates of the two bottom-edge endpoints; when those estimates are inaccurate, the detection error is large. (b) For nearby vehicles in the left and right adjacent lanes, when affected by factors such as image truncation and vehicle inclination, the horizontal position calculated from the detected rectangular frame deviates considerably from the real vehicle position. As shown in Fig. 1, the white point on the bottom edge of the rectangular frame is the vehicle position point calculated by the conventional method, and the detection error is clearly large. (c) When the vehicle tail is not inside the image, the vehicle position cannot be calculated from the tail frame.
(2) The defects of radar distance detection mainly include: (a) The distance information carried by the returned radar points is affected by the performance of the radar equipment, the material of the obstacle, and the reflection angle, so the error is large. (b) Radar points falling on different parts of the same vehicle, such as the side and the rear, return different distance information and cannot reflect a uniform distance. (c) With some probability a radar point passes through the target vehicle and falls on another object; its distance information then does not truly reflect the distance and position of the target vehicle. (d) Two radar points falling on different vehicles may be misjudged as belonging to the same vehicle, which also corrupts the calculated vehicle distance and position.
Disclosure of Invention
Embodiments of the present invention provide a wheel grounding point detection method, a wheel grounding point detection apparatus, and a computer-readable storage medium, so as to solve one or more technical problems in the prior art.
In a first aspect, an embodiment of the present invention provides a wheel grounding point detection method, including:
identifying the outer surrounding contour of at least one obstacle vehicle from an image to be detected;
inputting the image within the outer surrounding contour of the obstacle vehicle into a neural network model, and determining the position of the wheel grounding point of the obstacle vehicle in the image to be detected using the output information of the neural network model.
In one embodiment, the output information of the neural network model includes: the outer surrounding contour of a wheel of the obstacle vehicle and the position coordinates of at least one wheel grounding point of the obstacle vehicle.
In one embodiment, the output information of the neural network model includes a first output feature map; preset n1 first channels in the first output feature map are used to predict the position coordinates of the grounding point of one wheel of the obstacle vehicle, each first channel includes m1 first feature map grid points, each first feature map grid point corresponds to one rectangular image area in the image to be detected, and the output information corresponding to each first feature map grid point includes the predicted position coordinates of k1 wheel grounding points in the rectangular image area corresponding to that first feature map grid point;
wherein n1, m1, k1, n2, m2 and k2 are integers of 1 or more.
In one embodiment, the output information of the neural network model includes a second output feature map; preset n2 second channels in the second output feature map are used to predict the outer surrounding contour of one wheel of the obstacle vehicle, each second channel includes m2 second feature map grid points, each second feature map grid point corresponds to one rectangular image area in the image to be detected, and the output information corresponding to each second feature map grid point includes the predicted outer surrounding contours of k2 wheels in the rectangular image area corresponding to that second feature map grid point;
wherein n1, m1, k1, n2, m2 and k2 are integers of 1 or more.
In one embodiment, the output information corresponding to the first feature map grid points further includes a probability value corresponding to each first feature map grid point, where the probability value is the probability that the rectangular image area corresponding to the first feature map grid point contains a wheel grounding point;
determining the position of the wheel grounding point of the obstacle vehicle in the image to be detected using the output information of the neural network model includes the following steps:
performing data fusion on the output information corresponding to the first feature map grid points whose probability values are greater than a preset probability threshold;
determining the data fusion result as the position coordinates of the wheel grounding point of the obstacle vehicle in the image to be detected.
In one embodiment, after determining the position of the wheel grounding point of the obstacle vehicle in the image to be detected by using the output information of the neural network model, the method further includes:
and estimating the distance between the obstacle vehicle and other vehicles according to the position of the wheel grounding point of the obstacle vehicle in the image to be detected.
In a second aspect, an embodiment of the present invention provides a wheel grounding point detecting apparatus, including:
the identification unit is used for identifying the outer surrounding contour of at least one obstacle vehicle from the image to be detected;
a determination unit configured to: and inputting the image in the outer surrounding contour of the obstacle vehicle into a neural network model, and determining the position of the wheel grounding point of the obstacle vehicle in the image to be detected by utilizing the output information of the neural network model.
In one embodiment, the output information of the neural network model includes: the outer surrounding contour of a wheel of the obstacle vehicle and the position coordinates of at least one wheel grounding point of the obstacle vehicle.
In one embodiment, the output information of the neural network model includes a first output feature map; predicting position coordinates of wheel grounding points of one wheel of the obstacle vehicle by using preset n1 first channels in the first output feature map, wherein each first channel comprises m1 first feature map grid points, each first feature map grid point corresponds to one rectangular image area in the image to be detected, and the output information corresponding to each first feature map grid point comprises the predicted position coordinates of k1 wheel grounding points in the rectangular image area corresponding to the first feature map grid point;
wherein n1, m1, k1, n2, m2 and k2 are integers of 1 or more.
In one embodiment, the output information of the neural network model includes a second output feature map; preset n2 second channels in the second output feature map are used to predict the outer surrounding contour of one wheel of the obstacle vehicle, each second channel includes m2 second feature map grid points, each second feature map grid point corresponds to one rectangular image area in the image to be detected, and the output information corresponding to each second feature map grid point includes the predicted outer surrounding contours of k2 wheels in the rectangular image area corresponding to that second feature map grid point;
wherein n1, m1, k1, n2, m2 and k2 are integers of 1 or more.
In one embodiment, the output information corresponding to the first feature map grid points further includes a probability value corresponding to each first feature map grid point, where the probability value is the probability that the rectangular image area corresponding to the first feature map grid point contains a wheel grounding point;
the determination unit is configured to:
performing data fusion on the output information corresponding to the first feature map grid points whose probability values are greater than a preset probability threshold;
determining the data fusion result as the position coordinates of the wheel grounding point of the obstacle vehicle in the image to be detected.
In one embodiment, the apparatus further comprises an estimation unit configured to:
and estimating the distance between the obstacle vehicle and other vehicles according to the position of the wheel grounding point of the obstacle vehicle in the image to be detected.
In a third aspect, an embodiment of the present invention provides a wheel grounding point detecting device, where functions of the device may be implemented by hardware, or may be implemented by hardware executing corresponding software. The hardware or software includes one or more modules corresponding to the above-described functions.
In one possible design, the apparatus includes a processor and a memory, the memory storing a program that supports the apparatus in executing the wheel grounding point detection method, and the processor being configured to execute the program stored in the memory. The apparatus may also include a communication interface for communicating with other devices or a communication network.
In a fourth aspect, embodiments of the present invention provide a computer-readable storage medium for storing computer software instructions for a wheel grounding point detection apparatus, which includes a program for executing the wheel grounding point detection method.
The above technical solution has the following advantages or beneficial effects: the position of the wheel grounding point of the obstacle vehicle in the image to be detected, determined according to this solution, reflects the spatial position of the target vehicle more accurately and truly, and the requirement on the image of the target vehicle is low: the image need only contain at least one wheel. In addition, the embodiments of the present invention are applicable to all vehicles and most scene conditions, including truncated vehicles and overlong vehicles, and have good robustness.
The foregoing summary is provided for the purpose of description only and is not intended to be limiting in any way. In addition to the illustrative aspects, embodiments, and features described above, further aspects, embodiments, and features of the present invention will be readily apparent by reference to the drawings and following detailed description.
Drawings
In the drawings, like reference characters designate like or similar parts or elements throughout the several views unless otherwise specified. The figures are not necessarily to scale. It is appreciated that these drawings depict only some embodiments in accordance with the disclosure and are therefore not to be considered limiting of its scope.
Fig. 1 shows a schematic diagram of a prior-art visual vehicle detection scheme.
Fig. 2 shows a flowchart of a wheel grounding point detection method according to an embodiment of the present invention.
Fig. 3 is a schematic view of the image information of an obstacle vehicle captured by a camera, according to a wheel grounding point detection method of an embodiment of the present invention.
Fig. 4 is a schematic view of the labeled outer surrounding contour and wheel grounding point of a wheel of an obstacle vehicle, according to a wheel grounding point detection method of an embodiment of the present invention.
Fig. 5 is a schematic view of the labeled outer surrounding contours and wheel grounding points of the wheels of a multi-wheel obstacle vehicle, according to a wheel grounding point detection method of an embodiment of the present invention.
Fig. 6 is a schematic diagram of the network structure of the neural network model of a wheel grounding point detection method according to an embodiment of the present invention.
Fig. 7 is a schematic view of the distribution of target objects in the model output feature map, according to a wheel grounding point detection method of an embodiment of the present invention.
Fig. 8 is a schematic view of the structure of the wheel grounding point information output by the model, according to a wheel grounding point detection method of an embodiment of the present invention.
Fig. 9 shows a flowchart of a wheel grounding point detection method according to an embodiment of the present invention.
Fig. 10 shows a schematic diagram of the redundancy detection of a wheel grounding point detection method according to an embodiment of the present invention.
Fig. 11 shows a flowchart of a wheel grounding point detection method according to an embodiment of the present invention.
Fig. 12 is a structural block diagram of a wheel grounding point detection apparatus according to an embodiment of the present invention.
Fig. 13 is a structural block diagram of a wheel grounding point detection apparatus according to an embodiment of the present invention.
Fig. 14 is a structural block diagram of a wheel grounding point detection device according to an embodiment of the present invention.
Detailed Description
In the following, only certain exemplary embodiments are briefly described. As those skilled in the art will recognize, the described embodiments may be modified in various different ways, all without departing from the spirit or scope of the present invention. Accordingly, the drawings and description are to be regarded as illustrative in nature, and not as restrictive.
Fig. 2 shows a flowchart of a wheel grounding point detection method according to an embodiment of the present invention. As shown in Fig. 2, the wheel grounding point detection method includes:
step S110, identifying the outer surrounding contour of at least one obstacle vehicle from an image to be detected;
step S120, inputting the image within the outer surrounding contour of the obstacle vehicle into a neural network model, and determining the position of the wheel grounding point of the obstacle vehicle in the image to be detected using the output information of the neural network model.
In an autonomous driving scenario, the position of an obstacle vehicle needs to be detected accurately. On that basis, accurate distance information of the obstacle vehicle relative to the host vehicle can be obtained, and a driving strategy can then be formulated from the obtained distance information. For example, the horizontal distance of the obstacle vehicle relative to the host vehicle can be estimated from its accurately detected position, and collision detection, lane-change strategy formulation, and the like can be performed on that basis. Taking the lane-change strategy as an example, the detected position of the obstacle vehicle can be compared with the detected lane line information to determine semantic information about the obstacle vehicle, such as whether it is in an adjacent lane, whether it is in the own lane, and whether it is pressing on a lane line of the own lane. The host vehicle can then select a strategy based on this semantic information.
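As an illustration of such a semantic judgment, the short Python sketch below classifies an obstacle vehicle from the lateral ground-plane coordinate of its wheel grounding point relative to the lane line positions. The function name, the half-track-width threshold, and the decision rule are all illustrative assumptions; the patent names these semantics but does not prescribe the logic.

    def lane_semantics(y, y_left, y_right, half_width=0.95):
        # y: lateral ground-plane coordinate (meters) of the wheel grounding
        # point; y_left < y_right are the host-lane line positions.
        # half_width (assumed) approximates half the vehicle track width.
        if abs(y - y_left) < half_width or abs(y - y_right) < half_width:
            return "pressing a lane line of the own lane"
        if y_left < y < y_right:
            return "in the own lane"
        return "in an adjacent lane"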
The embodiment of the present invention takes the wheel grounding point as the feature for calculating the vehicle position and estimating the distance, which has the following advantages:
(1) Wheels are a structural component common to vehicles in general, and wheel images can be captured by a visual camera.
(2) The wheel grounding point is a point on the ground; after the wheel grounding point in the image is converted from image coordinates to space coordinates, its coordinate values can truly reflect the position of the target object in three-dimensional space. For example, in a scenario where the horizontal distance between vehicles is estimated, the coordinate values of the wheel grounding points truly reflect the horizontal position of the target object in three-dimensional space.
In step S110, for an input image, in order to detect the wheel grounding points of obstacle vehicles, the outer surrounding contour containing each obstacle vehicle must first be identified using a target detection algorithm. A specific embodiment of the target detection algorithm may identify a rectangular box containing the obstacle vehicle using a neural network model. In general, a single input image may contain several obstacle vehicles; through step S110, the outer surrounding contour of each of them can be identified. The outer surrounding contour of an obstacle vehicle may be a local region of the whole image.
On this basis, step S120 is executed: the wheel grounding points of the obstacle vehicles are detected from the local regions corresponding to the obstacle vehicles using a specific vision detection algorithm. An embodiment of the vision detection algorithm may detect the wheel grounding points of the obstacle vehicle using a neural network model. The neural network model implementing the vision detection algorithm in step S120 and the neural network model implementing the target detection algorithm in step S110 may have different network structures. The "neural network model" described hereinafter refers to the one implementing the vision detection algorithm in step S120.
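The two-stage flow of steps S110 and S120 can be summarized in the Python sketch below. The helper names vehicle_detector and groundpoint_net are hypothetical stand-ins for the two neural network models; the patent does not prescribe a concrete API.

    def detect_wheel_grounding_points(image, vehicle_detector, groundpoint_net):
        results = []
        # Step S110: outer surrounding contours (rectangular boxes) of vehicles.
        for (x, y, w, h) in vehicle_detector(image):
            crop = image[y:y + h, x:x + w]           # local region of one vehicle
            # Step S120: predict grounding points inside the cropped region.
            for (px, py) in groundpoint_net(crop):   # crop-relative coordinates
                results.append((x + px, y + py))     # map back to the full image
        return results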
Compared with searching for the grounding points of all wheels in the whole image, detecting the wheel grounding points only in the local region corresponding to an obstacle vehicle simplifies the task and places a low demand on the complexity of the neural network model, thereby ensuring the timeliness of the system and meeting the requirement of an on-board system on algorithmic time complexity.
Further, after a wheel grounding point is successfully detected, the position of the target vehicle in three-dimensional space can be determined by converting its image coordinates into three-dimensional space coordinates.
In step S120, the image within the outer surrounding contour of the obstacle vehicle is first input into a neural network model. A schematic of the image information of the obstacle vehicle in the camera is shown in Fig. 3. In a typical autonomous driving scenario, the camera of the host vehicle can normally capture a clear image of the wheels on one side of an obstacle vehicle.
In one example, only the outer surrounding contour and wheel grounding point information of the wheels on the visible side need to be labeled during annotation. A schematic of the labeled outer surrounding contour and grounding point of a wheel of an obstacle vehicle is shown in Fig. 4. Fig. 4 is the image information corresponding to Fig. 3 after labeling; the labeled information in Fig. 4 includes the outer surrounding contour of the wheel and the wheel grounding point. The rectangular box containing the wheel in Fig. 4 is the labeled outer surrounding contour of the wheel, and the white dot on the bottom edge of the rectangular box is the labeled wheel grounding point. In the example of Fig. 4, only the outer surrounding contour and grounding point information of the wheels on the visible side are labeled.
In one example, the outer surrounding contour of a wheel may be represented by a rectangular box, which may be expressed as a quadruple (x1, y1, w, h), where x1 and y1 are the image position coordinates of the center point of the rectangular box, and w (side length in the horizontal direction) and h (side length in the vertical direction) are its width and height, respectively. The image position coordinates of the wheel grounding point can be represented by a pair (x2, y2).
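For concreteness, the labeling scheme above might be stored as the following record; the field names and sample values are illustrative, not taken from the patent.

    from dataclasses import dataclass

    @dataclass
    class WheelLabel:
        x1: float  # center x of the wheel's outer surrounding rectangular box
        y1: float  # center y of the box
        w: float   # side length in the horizontal direction
        h: float   # side length in the vertical direction
        x2: float  # wheel grounding point x (image coordinates)
        y2: float  # wheel grounding point y

    label = WheelLabel(x1=320.0, y1=410.0, w=64.0, h=70.0, x2=322.0, y2=445.0)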
In one example, for a target vehicle having a plurality of wheels, only the frontmost and rearmost visible wheels need to be labeled; the labeled outer surrounding contours and wheel grounding points of the wheels of a multi-wheel obstacle vehicle are shown in Fig. 5.
In one embodiment, the output information of the neural network model includes: the outer surrounding contour of a wheel of the obstacle vehicle and the position coordinates of at least one wheel grounding point of the obstacle vehicle.
In the embodiment of the present invention, the input information of the neural network model is the image within the outer surrounding contour of the obstacle vehicle, and the output information is the coordinate values of the wheel grounding points of the obstacle vehicle together with the position and size information of the outer surrounding contours of its wheels, such as predicted values of the length and width of the rectangular boxes containing the wheels. In one example, a deep convolutional neural network model may be trained to obtain accurate output information.
In one embodiment, the output information of the neural network model includes a first output feature map; predicting position coordinates of wheel grounding points of one wheel of the obstacle vehicle by using preset n1 first channels in the first output feature map, wherein each first channel comprises m1 first feature map grid points, each first feature map grid point corresponds to one rectangular image area in the image to be detected, and the output information corresponding to each first feature map grid point comprises the predicted position coordinates of k1 wheel grounding points in the rectangular image area corresponding to the first feature map grid point;
wherein n1, m1, k1, n2, m2 and k2 are integers of 1 or more.
In one embodiment, the output information of the neural network model includes a second output feature map; predicting the outer surrounding contour of one wheel of the obstacle vehicle by using preset n2 second channels in the second output feature map, wherein each second channel comprises m2 second feature map grid points, each second feature map grid point corresponds to one rectangular image area in the image to be detected, and the output information corresponding to each second feature map grid point comprises the predicted outer surrounding contours of k2 wheels in the rectangular image area corresponding to the second feature map grid point;
wherein n1, m1, k1, n2, m2 and k2 are integers of 1 or more.
Fig. 6 is a schematic diagram of the network structure of the neural network model of a wheel grounding point detection method according to an embodiment of the present invention. As shown in Fig. 6, compared with a fully connected neural network, the neural network model in the embodiment of the present invention is simplified by adopting a fully convolutional form, so that it contains fewer parameters than a model with fully connected layers, and the output frame rate of the model can be improved.
For example, the number "224" in Fig. 6 indicates that the input image of the neural network model is 224 × 224 pixels. The numbers "3, 16, 32, 64, 128, 256, 512" in Fig. 6 are the channel counts of the layers of the neural network model. In general usage, "channel" may refer to a color channel of an image, while a channel in the neural network model is the output of a convolution filter. The two are essentially the same: both are data obtained by computing over the input and capturing the distribution of some feature in it. The number of output channels equals the number of feature maps in the output information. Each channel detects a particular feature, and the magnitude of a value in the channel reflects the strength of that feature at that location. By convolving over a set of feature maps, patterns formed by combinations of features can be extracted as new features, yielding the next feature map; convolving again combines the features further into more complex feature maps.
Furthermore, the output of the neural network model illustrated in Fig. 6 contains two parts: the wheel outer surrounding contour prediction information and the wheel grounding point prediction information. The wheel grounding point prediction part of the output information is represented by the first output feature map. In the example of Fig. 6, n1 = 3, m1 = 49, and k1 = 2; that is, 3 channels are used to predict the position coordinates of one wheel grounding point, each channel includes 7 × 7 = 49 first feature map grid points, and each first feature map grid point corresponds to the position coordinates of 2 wheel grounding points to be detected. The wheel outer surrounding contour prediction part of the output information is represented by the second output feature map. In the example of Fig. 6, n2 = 5, m2 = 49, and k2 = 2; that is, 5 channels are used to predict the outer surrounding contour of one wheel, each channel includes 7 × 7 = 49 second feature map grid points, and each second feature map grid point corresponds to the outer surrounding contours of 2 wheels to be detected. Because the outer surrounding contour of a wheel and its grounding point are visual features in one-to-one correspondence, jointly learning the two features helps optimize the model parameters, improves the model's perception of visual objects, and improves the detection performance for grounding points.
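The following PyTorch-style sketch reproduces the channel progression of Fig. 6 (3-16-32-64-128-256-512) with two 1 × 1 convolutional heads for the two output feature maps. Kernel sizes, strides, activations, and the per-group channel semantics (a probability plus coordinate regressors) are assumptions for illustration; the patent only fixes the channel counts, the 224 × 224 input, and the 7 × 7 output grid.

    import torch
    import torch.nn as nn

    class WheelGroundPointNet(nn.Module):
        def __init__(self):
            super().__init__()
            chans = [3, 16, 32, 64, 128, 256, 512]
            layers = []
            for i, (c_in, c_out) in enumerate(zip(chans[:-1], chans[1:])):
                stride = 1 if i == 0 else 2   # five stride-2 stages: 224 -> 7
                layers += [nn.Conv2d(c_in, c_out, 3, stride=stride, padding=1),
                           nn.ReLU(inplace=True)]
            self.backbone = nn.Sequential(*layers)
            # First output feature map: 2 wheel groups x 3 channels each.
            self.point_head = nn.Conv2d(512, 6, kernel_size=1)
            # Second output feature map: 2 wheel groups x 5 channels each.
            self.box_head = nn.Conv2d(512, 10, kernel_size=1)

        def forward(self, x):                 # x: (N, 3, 224, 224)
            f = self.backbone(x)              # f: (N, 512, 7, 7)
            return self.point_head(f), self.box_head(f)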
Fig. 7 is a schematic view of the distribution of target objects in the model output feature map, according to a wheel grounding point detection method of an embodiment of the present invention. Referring to Fig. 6 and Fig. 7, the output feature map of the neural network model includes 7 × 7 = 49 grid points; each intersection of the white horizontal lines and white vertical lines in Fig. 7 is a grid point. Each grid point corresponds to one rectangular image area in the image to be detected. In the output-layer feature map of a traditional target detection model, one grid point can detect only one target object. As shown in Fig. 7, the rectangular image area corresponding to the grid point at the lower left corner contains two wheel grounding points; since a grid point in the conventional target detection model can predict only one wheel grounding point, the other grounding point cannot be detected.
In view of this, the embodiment of the present invention adopts a dual-feature-map output. Fig. 8 is a schematic view of the structure of the wheel grounding point information output by the model, according to a wheel grounding point detection method of an embodiment of the present invention. Referring to Figs. 6 to 8, the wheel grounding point prediction part of the model output occupies 6 channels of the full feature map. These 6 channels are divided into two groups: the first 3 channels predict the grounding point of the front wheel, and the last 3 channels predict the grounding point of the rear wheel. Reference numeral 8-1 in Fig. 8 denotes the front wheel group, and reference numeral 8-2 denotes the rear wheel group. The term "ground truth" denotes the correct label in machine learning. In this embodiment, the same grid point position can output two prediction results, so even for a vehicle image in which the distance between the front and rear wheels is very small, the model can still output complete predictions for both wheel grounding points.
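A decoding sketch for the dual-group grounding point output is given below. The assumed per-group channel layout is (probability, x offset, y offset), with offsets taken relative to the grid cell; both conventions are illustrative, since the patent specifies only the 3-channels-per-wheel grouping.

    import torch

    def decode_ground_points(point_map, cell=224 // 7):
        # point_map: tensor of shape (6, 7, 7); channels 0-2 form the front
        # wheel group and channels 3-5 the rear wheel group.
        candidates = []
        for group_id, group in enumerate(point_map.split(3, dim=0)):
            prob, dx, dy = group              # three (7, 7) maps per group
            for gy in range(prob.shape[0]):
                for gx in range(prob.shape[1]):
                    x = (gx + dx[gy, gx].item()) * cell   # pixel coordinates
                    y = (gy + dy[gy, gx].item()) * cell
                    candidates.append((group_id, prob[gy, gx].item(), x, y))
        return candidates   # (group, probability, x, y) per grid point

    candidates = decode_ground_points(torch.rand(6, 7, 7))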
Referring to Fig. 6, the wheel outer surrounding contour prediction part of the model output occupies 10 channels of the full feature map. These 10 channels are divided into two groups: the first 5 channels predict the outer surrounding contour of the front wheel, and the last 5 channels predict the outer surrounding contour of the rear wheel.
Fig. 9 shows a flowchart of a wheel grounding point detection method according to an embodiment of the present invention. As shown in Fig. 9, in one embodiment, the output information corresponding to the first feature map grid points further includes a probability value for each first feature map grid point, namely the probability that the rectangular image area corresponding to that grid point contains a wheel grounding point;
in step S120 in fig. 2, determining a position of a wheel grounding point of the obstacle vehicle in the image to be detected by using the output information of the neural network model may specifically include:
step S210, carrying out data fusion on the output information corresponding to the first feature map lattice point with the probability value larger than a preset probability threshold;
step S220, determining the data fusion result as the position coordinate of the wheel grounding point of the obstacle vehicle in the image to be detected.
Since the input to the neural network model is an image of a single vehicle, the feature map responsible for predicting a given wheel grounding point has only one prediction target. In this case, in order to improve the detection accuracy of the wheel grounding point or of the outer surrounding contour of the wheel, the embodiment of the present invention adds a redundancy detection strategy.
Fig. 10 shows a schematic diagram of the redundancy detection of a wheel grounding point detection method according to an embodiment of the present invention. As described above, the wheel grounding point prediction part of the model output occupies 6 channels of the full feature map, divided into two groups: the first 3 channels predict the grounding point of the front wheel, and the last 3 channels predict the grounding point of the rear wheel. Fig. 10 shows the 3 channels used to predict the grounding point of one wheel.
In the example shown in Fig. 10, there are 4 first feature map grid points whose probability values exceed the preset probability threshold, represented by the rectangular image areas containing the black solid points in Fig. 10. As described above, the first output feature map represents the wheel grounding point prediction part of the model output. In Fig. 10, these 4 rectangular image areas have high probabilities of containing a wheel grounding point, so the output information corresponding to the 4 first feature map grid points is fused. The fusion formula is as follows:
O = (Σ_i P_i · O_i) / (Σ_i P_i)
where O denotes the position coordinates of the wheel grounding point finally obtained after redundancy detection, P_i denotes the probability value output at first feature map grid point i, and O_i denotes the position coordinates of the wheel grounding point output at first feature map grid point i.
The white point among the 4 black solid points in Fig. 10 represents the position coordinates of the wheel grounding point finally obtained from the fusion formula. The redundancy detection strategy fuses the position coordinates of the wheel grounding points output at the first feature map grid points i, thereby improving the accuracy of wheel grounding point detection.
In one example, a redundancy detection strategy may also be incorporated when training the neural network model, in order to improve the detection accuracy of the wheel grounding points or the outer surrounding contours of the wheels. In the model training stage, according to the labeling information, a neighborhood can be determined around the position of the target object in the feature map, the target object is predicted at the feature map grid points within that neighborhood, and all prediction results are then fused by weighted averaging. The fusion formula is the same as above and is not repeated here.
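The redundancy fusion can be written directly from the formula above. In the sketch below, outputs is a list of (P_i, (x_i, y_i)) pairs taken from the first feature map grid points; the threshold value of 0.5 is an illustrative choice, since the patent leaves the preset probability threshold open.

    def fuse_redundant_detections(outputs, prob_threshold=0.5):
        # Keep grid points whose probability exceeds the preset threshold,
        # then take the probability-weighted average of their coordinates:
        # O = (sum_i P_i * O_i) / (sum_i P_i)
        kept = [(p, (x, y)) for p, (x, y) in outputs if p > prob_threshold]
        total = sum(p for p, _ in kept)
        fused_x = sum(p * x for p, (x, _) in kept) / total
        fused_y = sum(p * y for p, (_, y) in kept) / total
        return fused_x, fused_y

    # The four high-probability grid points of Fig. 10 (values illustrative).
    point = fuse_redundant_detections([(0.9, (100.0, 200.0)),
                                       (0.8, (102.0, 198.0)),
                                       (0.7, (99.0, 201.0)),
                                       (0.6, (101.0, 199.0))])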
Fig. 11 shows a flowchart of a wheel grounding point detection method according to an embodiment of the present invention. As shown in Fig. 11, in one embodiment, after the position of the wheel grounding point of the obstacle vehicle in the image to be detected is determined in step S120 of Fig. 2 using the output information of the neural network model, the method further includes:
and step S130, estimating the distance between the obstacle vehicle and other vehicles according to the position of the wheel grounding point of the obstacle vehicle in the image to be detected.
Once the position of the wheel grounding point of the obstacle vehicle in the image to be detected has been accurately determined, the inter-vehicle distance can be estimated, and a driving strategy can then be formulated from the obtained distance information, for example lane-change decisions and collision detection based on the accurate positioning of the obstacle vehicle. Accurate positioning of the obstacle vehicle is the premise of and guarantee for correct decisions.
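Because the wheel grounding point lies on the ground plane, its image coordinates can be mapped to road coordinates with a ground-plane homography obtained from camera calibration. This mapping is one common realization of the image-to-space conversion the description mentions, not something the patent mandates, and the matrix values below are purely illustrative.

    import numpy as np

    def ground_point_to_road(u, v, H):
        # (u, v): wheel grounding point in image coordinates;
        # H: 3x3 image-to-ground-plane homography from extrinsic calibration.
        X, Y, W = H @ np.array([u, v, 1.0])
        return X / W, Y / W        # ground-plane coordinates in meters

    H = np.array([[0.02, 0.0, -6.4],      # illustrative calibration values
                  [0.0, 0.05, -11.2],
                  [0.0, 0.002, 1.0]])
    lateral, longitudinal = ground_point_to_road(322.0, 445.0, H)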
The above technical solution has the following advantages or beneficial effects: the position of the wheel grounding point of the obstacle vehicle in the image to be detected, determined according to this solution, reflects the spatial position of the target vehicle more accurately and truly, and the requirement on the image of the target vehicle is low: the image need only contain at least one wheel. In addition, the embodiments of the present invention are applicable to all vehicles and most scene conditions, including truncated vehicles and overlong vehicles, and have good robustness. Compared with the radar distance detection scheme, the position of the wheel grounding point determined by this solution is not affected by factors such as vehicle material and orientation, giving good interference resistance.
Fig. 12 is a structural block diagram of a wheel grounding point detection apparatus according to an embodiment of the present invention. As shown in Fig. 12, the wheel grounding point detection apparatus of the embodiment of the present invention includes:
the identification unit 100, which is used for identifying the outer surrounding contour of at least one obstacle vehicle from the image to be detected;
a determining unit 200 for: and inputting the image in the outer surrounding contour of the obstacle vehicle into a neural network model, and determining the position of the wheel grounding point of the obstacle vehicle in the image to be detected by utilizing the output information of the neural network model.
In one embodiment, the output information of the neural network model includes: the outer surrounding contour of a wheel of the obstacle vehicle and the position coordinates of at least one wheel grounding point of the obstacle vehicle.
In one embodiment, the output information of the neural network model includes a first output feature map; predicting position coordinates of wheel grounding points of one wheel of the obstacle vehicle by using preset n1 first channels in the first output feature map, wherein each first channel comprises m1 first feature map grid points, each first feature map grid point corresponds to one rectangular image area in the image to be detected, and the output information corresponding to each first feature map grid point comprises the predicted position coordinates of k1 wheel grounding points in the rectangular image area corresponding to the first feature map grid point;
wherein n1, m1, k1, n2, m2 and k2 are integers of 1 or more.
In one embodiment, the output information of the neural network model includes a second output feature map; preset n2 second channels in the second output feature map are used to predict the outer surrounding contour of one wheel of the obstacle vehicle, each second channel includes m2 second feature map grid points, each second feature map grid point corresponds to one rectangular image area in the image to be detected, and the output information corresponding to each second feature map grid point includes the predicted outer surrounding contours of k2 wheels in the rectangular image area corresponding to that second feature map grid point;
wherein n1, m1, k1, n2, m2 and k2 are integers of 1 or more.
In one embodiment, the output information corresponding to the first feature map grid points further includes a probability value corresponding to each first feature map grid point, where the probability value is the probability that the rectangular image area corresponding to the first feature map grid point contains a wheel grounding point;
the determining unit 200 is configured to:
performing data fusion on the output information corresponding to the first feature map grid points whose probability values are greater than a preset probability threshold;
determining the data fusion result as the position coordinates of the wheel grounding point of the obstacle vehicle in the image to be detected.
Fig. 13 is a structural block diagram of a wheel grounding point detection apparatus according to an embodiment of the present invention. As shown in Fig. 13, in one embodiment, the apparatus further includes an estimation unit 300, the estimation unit 300 being configured to:
and estimating the distance between the obstacle vehicle and other vehicles according to the position of the wheel grounding point of the obstacle vehicle in the image to be detected.
For the functions of each unit of the wheel grounding point detection apparatus of the embodiment of the present invention, reference may be made to the corresponding description of the above method, which is not repeated here.
Fig. 14 is a structural block diagram of a wheel grounding point detection device according to an embodiment of the present invention. As shown in Fig. 14, the device includes: a memory 910 and a processor 920, the memory 910 storing a computer program operable on the processor 920. The processor 920 implements the wheel grounding point detection method of the above embodiments when executing the computer program. There may be one or more memories 910 and processors 920.
The device also includes:
a communication interface 930 for communicating with external devices for data exchange.
The memory 910 may include high-speed RAM and may also include non-volatile memory, such as at least one magnetic disk memory.
If the memory 910, the processor 920, and the communication interface 930 are implemented independently, they may be connected to each other and communicate with each other through a bus. The bus may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one thick line is shown in Fig. 14, but this does not mean that there is only one bus or one type of bus.
Optionally, in an implementation, if the memory 910, the processor 920, and the communication interface 930 are integrated on one chip, they may communicate with each other through an internal interface.
An embodiment of the present invention provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the method of any one of the above embodiments.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present invention, "a plurality" means two or more unless specifically defined otherwise.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps of the process, and alternate implementations are included within the scope of the preferred embodiment of the present invention in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present invention.
The logic and/or steps represented in the flowcharts or otherwise described herein, such as an ordered listing of executable instructions that can be considered to implement logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable read-only memory (CDROM). Additionally, the computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via for instance optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that portions of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
It will be understood by those skilled in the art that all or part of the steps carried by the method for implementing the above embodiments may be implemented by hardware related to instructions of a program, which may be stored in a computer readable storage medium, and when the program is executed, the program includes one or a combination of the steps of the method embodiments.
In addition, functional units in the embodiments of the present invention may be integrated into one processing module, or each unit may exist alone physically, or two or more units are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The integrated module, if implemented in the form of a software functional module and sold or used as a separate product, may also be stored in a computer readable storage medium. The storage medium may be a read-only memory, a magnetic or optical disk, or the like.
The above description is only for the specific embodiment of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive various changes or substitutions within the technical scope of the present invention, and these should be covered by the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the appended claims.

Claims (14)

1. A wheel grounding point detecting method, characterized by comprising:
identifying an outer surrounding contour of at least one obstacle vehicle from an image to be detected by using a neural network model of a target detection algorithm, wherein the outer surrounding contour is a rectangular frame containing the obstacle vehicle;
inputting an image in an outer surrounding contour of the obstacle vehicle into a neural network model based on a visual detection algorithm, and determining the position of a wheel grounding point of the obstacle vehicle in the image to be detected by utilizing output information of the neural network model based on the visual detection algorithm, wherein the output information of the neural network model based on the visual detection algorithm comprises position coordinates of at least one predicted wheel grounding point of the obstacle vehicle;
wherein the output information of the neural network model of the visual inspection algorithm comprises a first output feature map; and predicting the position coordinates of the wheel grounding point of one wheel of the obstacle vehicle by using preset n1 first channels in the first output characteristic diagram, wherein n1 is an integer greater than or equal to 1.
2. The method of claim 1, wherein the output information of the neural network model of the visual inspection algorithm further comprises: an outer surrounding contour of a wheel of the obstacle vehicle.
3. The method of claim 1,
each of the first channels includes m1 first feature map grid points, each of the first feature map grid points corresponds to a rectangular image area in the image to be detected, and the output information corresponding to each of the first feature map grid points includes position coordinates of k1 predicted wheel grounding points in the rectangular image area corresponding to the first feature map grid point;
wherein m1 and k1 are integers of 1 or more.
4. The method of claim 2,
the output information of the neural network model of the visual inspection algorithm comprises a second output feature map; predicting an outer surrounding contour of one wheel of the obstacle vehicle by using preset n2 second channels in the second output feature map, wherein each second channel comprises m2 second feature map grid points, each second feature map grid point corresponds to one rectangular image area in the image to be detected, and the output information corresponding to each second feature map grid point comprises the predicted outer surrounding contours of k2 wheels in the rectangular image area corresponding to the second feature map grid point;
wherein n2, m2 and k2 are integers of 1 or more.
5. The method according to claim 3, wherein the output information corresponding to the first feature map grid points further comprises a probability value corresponding to each of the first feature map grid points, the probability value being the probability of including a wheel grounding point in the rectangular image area corresponding to the first feature map grid point;
and wherein determining the position of the wheel grounding point of the obstacle vehicle in the image to be detected by using the output information of the neural network model based on the visual detection algorithm comprises:
performing data fusion on the output information corresponding to the first feature map grid points whose probability values are greater than a preset probability threshold; and
determining the result of the data fusion as the position coordinates of the wheel grounding point of the obstacle vehicle in the image to be detected.
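Claim 5 leaves the fusion method open; one minimal sketch, assuming a probability-weighted average of the grid-point predictions that survive the threshold, is:

    import numpy as np

    def fuse_grounding_points(points, probs, threshold=0.5):
        # points: (N, 2) predicted coordinates; probs: (N,) per-grid-point
        # probability values; threshold: the preset probability threshold.
        points = np.asarray(points, dtype=float)
        probs = np.asarray(probs, dtype=float)
        keep = probs > threshold
        if not keep.any():
            return None  # no grid point exceeds the threshold
        weights = probs[keep] / probs[keep].sum()
        return tuple(weights @ points[keep])  # weighted average position

    print(fuse_grounding_points([(100, 300), (104, 302), (400, 50)],
                                [0.9, 0.8, 0.1]))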
6. The method according to any one of claims 1 to 5, wherein, after determining the position of the wheel grounding point of the obstacle vehicle in the image to be detected by using the output information of the neural network model based on the visual detection algorithm, the method further comprises:
estimating the distance between the obstacle vehicle and another vehicle according to the position of the wheel grounding point of the obstacle vehicle in the image to be detected.
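Such an estimate is possible because a wheel grounding point lies on the road surface, so that, under a flat-road pinhole-camera assumption, the image row of the point fixes its range; the camera parameters in the sketch below are illustrative, not taken from the invention:

    import math

    def distance_to_grounding_point(v, fy, cy, cam_height, pitch=0.0):
        # v: image row of the grounding point (pixels); fy, cy: vertical
        # focal length and principal-point row (pixels); cam_height:
        # camera height above the road (metres); pitch: camera pitch (rad).
        angle = math.atan2(v - cy, fy) + pitch  # ray angle below the horizon
        if angle <= 0:
            raise ValueError("point lies at or above the horizon")
        return cam_height / math.tan(angle)

    # A point 140 rows below the principal point, camera 1.5 m above road:
    print(round(distance_to_grounding_point(500.0, 1000.0, 360.0, 1.5), 2))  # ~10.7 m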
7. A wheel grounding point detection device, characterized by comprising:
an identification unit configured to identify an outer surrounding contour of at least one obstacle vehicle from an image to be detected by using a neural network model of a target detection algorithm, wherein the outer surrounding contour is a rectangular frame containing the obstacle vehicle;
a determination unit configured to: input the image within the outer surrounding contour of the obstacle vehicle into a neural network model based on a visual detection algorithm, and determine the position of a wheel grounding point of the obstacle vehicle in the image to be detected by using output information of the neural network model based on the visual detection algorithm, wherein the output information of the neural network model based on the visual detection algorithm comprises position coordinates of at least one predicted wheel grounding point of the obstacle vehicle;
wherein the output information of the neural network model based on the visual detection algorithm comprises a first output feature map; and position coordinates of a wheel grounding point of one wheel of the obstacle vehicle are predicted by using n1 preset first channels in the first output feature map, wherein n1 is an integer greater than or equal to 1.
8. The device according to claim 7, wherein the output information of the neural network model based on the visual detection algorithm further comprises: an outer surrounding contour of a wheel of the obstacle vehicle.
9. The device according to claim 7, wherein
each of the first channels includes m1 first feature map grid points, each first feature map grid point corresponds to a rectangular image area in the image to be detected, and the output information corresponding to each first feature map grid point includes position coordinates of k1 predicted wheel grounding points in the rectangular image area corresponding to the first feature map grid point;
wherein m1 and k1 are integers greater than or equal to 1.
10. The device according to claim 8, wherein
the output information of the neural network model based on the visual detection algorithm comprises a second output feature map; an outer surrounding contour of one wheel of the obstacle vehicle is predicted by using n2 preset second channels in the second output feature map, each second channel includes m2 second feature map grid points, each second feature map grid point corresponds to a rectangular image area in the image to be detected, and the output information corresponding to each second feature map grid point includes the predicted outer surrounding contours of k2 wheels in the rectangular image area corresponding to the second feature map grid point;
wherein n2, m2 and k2 are integers greater than or equal to 1.
11. The device according to claim 9, wherein the output information corresponding to each first feature map grid point further comprises a probability value, the probability value being the probability that the rectangular image area corresponding to the first feature map grid point contains a wheel grounding point;
and wherein the determination unit is further configured to:
perform data fusion on the output information corresponding to the first feature map grid points whose probability values are greater than a preset probability threshold; and
determine the result of the data fusion as the position coordinates of the wheel grounding point of the obstacle vehicle in the image to be detected.
12. The device according to any one of claims 7 to 11, further comprising an estimation unit configured to:
estimate the distance between the obstacle vehicle and another vehicle according to the position of the wheel grounding point of the obstacle vehicle in the image to be detected.
13. A wheel grounding point detection device, characterized by comprising:
one or more processors;
storage means for storing one or more programs;
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method according to any one of claims 1 to 6.
14. A computer-readable storage medium, in which a computer program is stored which, when executed by a processor, implements the method according to any one of claims 1 to 6.
CN201910552199.1A 2019-06-24 2019-06-24 Wheel grounding point detection method, device and storage medium Active CN110246183B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910552199.1A CN110246183B (en) 2019-06-24 2019-06-24 Wheel grounding point detection method, device and storage medium


Publications (2)

Publication Number Publication Date
CN110246183A (en) 2019-09-17
CN110246183B (en) 2022-07-15

Family

ID=67889191

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910552199.1A Active CN110246183B (en) 2019-06-24 2019-06-24 Wheel grounding point detection method, device and storage medium

Country Status (1)

Country Link
CN (1) CN110246183B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110738164B * 2019-10-12 2022-08-12 Beijing Orion Star Technology Co., Ltd. Part abnormality detection method, model training method and device
CN110969655B * 2019-10-24 2023-08-18 Baidu Online Network Technology (Beijing) Co., Ltd. Method, device, equipment, storage medium and vehicle for detecting parking space
CN110991232B * 2019-10-28 2024-02-13 Zongmu Technology (Shanghai) Co., Ltd. Vehicle position correction method and system, storage medium and terminal
CN110853366B * 2019-11-20 2021-04-16 Zhejiang Dahua Technology Co., Ltd. Method and device for detecting parking position of vehicle
JP7380824B2 * 2020-02-20 2023-11-15 Nippon Telegraph and Telephone Corporation Vehicle state estimation method, vehicle state estimation device, and vehicle state estimation program
CN111402326B * 2020-03-13 2023-08-25 Beijing Baidu Netcom Science and Technology Co., Ltd. Obstacle detection method, obstacle detection device, unmanned vehicle and storage medium
CN112861683A * 2021-01-29 2021-05-28 Shanghai SenseTime Lingang Intelligent Technology Co., Ltd. Driving direction detection method and device, computer equipment and storage medium
CN112883909B * 2021-03-16 2024-06-14 Neusoft Reach Automotive Technology (Shenyang) Co., Ltd. Obstacle position detection method and device based on bounding box and electronic equipment
CN114863388A * 2022-04-02 2022-08-05 Hozon New Energy Automobile Co., Ltd. Method, device, system, equipment, medium and product for determining obstacle orientation

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107025642A (en) * 2016-01-27 2017-08-08 Baidu Online Network Technology (Beijing) Co., Ltd. Vehicle contour detection method and device based on point cloud data
CN109827516A (en) * 2019-03-19 2019-05-31 Motovis Intelligent Technology (Shanghai) Co., Ltd. Method for measuring distance by means of vehicle wheels
CN109886312A (en) * 2019-01-28 2019-06-14 Tongji University Bridge vehicle wheel detection method based on a multilayer feature fusion neural network model

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6833630B2 * 2017-06-22 2021-02-24 Toshiba Corporation Object detector, object detection method and program


Similar Documents

Publication Publication Date Title
CN110246183B (en) Wheel grounding point detection method, device and storage medium
US10832064B2 (en) Vacant parking space detection apparatus and vacant parking space detection method
CN109703569B (en) Information processing method, device and storage medium
CN113936198B (en) Low-beam laser radar and camera fusion method, storage medium and device
CN111369617B (en) 3D target detection method of monocular view based on convolutional neural network
CN110659548B (en) Vehicle and target detection method and device thereof
CN108830131B (en) Deep learning-based traffic target detection and ranging method
US12012102B2 (en) Method for determining a lane change indication of a vehicle
CN114118252A (en) Vehicle detection method and detection device based on sensor multivariate information fusion
CN111295666A (en) Lane line detection method, device, control equipment and storage medium
CN111316328A (en) Method for maintaining lane line map, electronic device and storage medium
CN114241448A (en) Method and device for obtaining heading angle of obstacle, electronic equipment and vehicle
CN114119749A (en) Monocular 3D vehicle detection method based on dense association
CN117496515A (en) Point cloud data labeling method, storage medium and electronic equipment
CN112396043A (en) Vehicle environment information perception method and device, electronic equipment and storage medium
CN116721396A (en) Lane line detection method, device and storage medium
CN113611008B (en) Vehicle driving scene acquisition method, device, equipment and medium
CN115376025B (en) Unmanned aerial vehicle target detection method, unmanned aerial vehicle target detection system, unmanned aerial vehicle target detection equipment and storage medium
CN115661577B (en) Method, apparatus and computer readable storage medium for object detection
CN116152761B (en) Lane line detection method and device
US20240078749A1 (en) Method and apparatus for modeling object, storage medium, and vehicle control method
CN117593719A (en) Parking space labeling method, model training method, parking method and related devices
CN115320590A (en) Method for correcting position of following vehicle and related equipment
CN118230593A (en) Parking space detection method, electronic equipment, storage medium and vehicle
CN116883973A (en) Point cloud target detection method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant