CN112800873A - Method, device and system for determining target direction angle and storage medium

Info

Publication number
CN112800873A
Authority
CN
China
Prior art keywords: width, length, target, target frame, determining
Prior art date
Legal status: Pending
Application number
CN202110047651.6A
Other languages
Chinese (zh)
Inventor
王泽荔
Current Assignee
Imotion Automotive Technology Suzhou Co Ltd
Original Assignee
Imotion Automotive Technology Suzhou Co Ltd
Priority date
Filing date
Publication date
Application filed by Imotion Automotive Technology Suzhou Co Ltd filed Critical Imotion Automotive Technology Suzhou Co Ltd
Priority to CN202110047651.6A
Publication of CN112800873A

Classifications

    • G06V 20/56: Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V 20/58: Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06F 18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06N 3/045: Combinations of networks
    • G06V 10/25: Determination of region of interest [ROI] or a volume of interest [VOI]

Abstract

The application relates to a method, a device, a system and a storage medium for determining a target direction angle, belonging to the technical field of image processing. The method comprises the following steps: acquiring point cloud data of a target obstacle in training data; training a deep neural network model with the length and width of the target frame and the length and width of the minimum circumscribed rectangle of the target frame as label data; acquiring target obstacle point cloud data and predicting, with the trained deep neural network model, the length and width of the target and the length and width of the minimum circumscribed rectangle; and calculating the target direction angle from the length and width of the target and the length and width of the minimum circumscribed rectangle, so as to assist the unmanned vehicle in path planning. Instead of regressing the direction angle of targets around the vehicle directly, as in existing unmanned driving path planning, the method predicts the target length and width and the length and width of the minimum circumscribed rectangle with a deep learning model and calculates the angle from them, which improves angle prediction accuracy and alleviates the angle jitter problem.

Description

Method, device and system for determining target direction angle and storage medium
Technical Field
The application relates to a method, a device, a system and a storage medium for determining a target direction angle, and belongs to the technical field of unmanned driving.
Background
An unmanned vehicle system is an intelligent control system that obtains environmental information and the vehicle's state and position from various sensors, and automatically controls the driving behavior of the vehicle based on its understanding of the environment. Local path planning is one of the key technologies in unmanned vehicle research. Local path planning means that, in an uncertain road environment, the control system plans the vehicle's current driving path in real time according to the information provided by the environment perception system and the vehicle state detection system, the goal provided by the global path planning, and so on.
In the process of path planning, the direction angles of obstacles in front of the unmanned vehicle need to be predicted. At present, the direction angle is calculated by learning an angle model with a deep learning network and then using that angle model to predict the angles of surrounding targets.
A deep neural network can predict angles directly, given a large amount of sample data and a network structure with strong generalization. However, a deep neural network for target detection generalizes worse to weakly constrained coordinate points and angle values than to target frames, so when such a network is used to predict the angles of surrounding targets directly, the predicted angles are very unstable and jitter severely, and the method cannot be used effectively.
Disclosure of Invention
The application provides a method, a device, a system and a storage medium for determining a direction angle, which can solve the problems in existing unmanned driving path planning that, when the target direction angle is determined by direct angle prediction with a deep learning method, the prediction result is unstable, varies over a large range, and jitters severely.
The application provides the following technical scheme:
in a first aspect of the embodiments of the present application, a method for determining a target direction angle is provided, where the method includes:
acquiring point cloud data of the detected target obstacle in real time;
predicting a target frame of the point cloud data acquired in real time by adopting a pre-trained deep learning model to obtain the length and width of the target frame and the length and width of a minimum circumscribed rectangle corresponding to the target frame;
and determining a target direction angle corresponding to the target obstacle according to the length and the width of the target frame and the length and the width of the minimum circumscribed rectangle corresponding to the target frame so as to assist the unmanned vehicle in path planning.
Further, according to the method for determining a target direction angle according to the first aspect of the embodiments of the present application, the step of training the deep learning model includes:
acquiring a training sample set, wherein the training sample set comprises point cloud data and labels, and the labels indicate the vertex coordinates of a target frame and the label types of all laser points in the point cloud data;
inputting the training sample set into a deep learning model for learning to obtain the length and width of the target frame and the length and width prediction result of the corresponding minimum circumscribed rectangle; in the learning process, determining the length and width of a target frame and the length and width of a minimum circumscribed rectangle corresponding to the target frame according to the vertex coordinates of the target frame, and taking the length and width of the target frame and the length and width of the minimum circumscribed rectangle as training labels;
determining a difference between the prediction result and the training label according to a loss function;
iteratively training the deep learning model based on the difference;
and when the difference reaches a preset range, finishing the training of the deep learning model.
Further, according to the method for determining a target direction angle in the first aspect of the embodiment of the present application, the method for determining a target direction angle corresponding to a target obstacle includes:
α = arctan((a×l - b×w) / (b×l - a×w))
where α is the target direction angle, a is the length of the minimum bounding rectangle, b is the width of the minimum bounding rectangle, w is the width of the target frame, and l is the length of the target frame.
In a second aspect of the embodiments of the present application, there is provided an apparatus for determining a target direction angle, the apparatus including:
a data acquisition module configured to acquire point cloud data of a target obstacle in real time;
the deep learning module is configured to perform target frame prediction on the point cloud data acquired in real time by adopting a pre-trained deep learning model to obtain the length and width of the target frame and the length and width of a minimum circumscribed rectangle corresponding to the target frame;
and the calculation module is configured to determine a target direction angle corresponding to the target obstacle according to the length and the width of the target frame and the length and the width of the minimum circumscribed rectangle corresponding to the target frame, so as to assist the unmanned vehicle in path planning.
Further, according to the apparatus of the second aspect of the embodiment of the present application, the deep learning module is further configured to train a deep learning model, including:
acquiring a training sample set, wherein the training sample set comprises point cloud data, a target frame, a minimum circumscribed rectangle corresponding to the target frame and a corresponding label, and the label indicates a vertex coordinate of the target frame and a label category of each laser point in the point cloud data;
inputting the training sample set into a deep learning model for learning to obtain the length and width of the target frame and the length and width prediction result of the corresponding minimum circumscribed rectangle; in the learning process, determining the length and width of a target frame and the length and width of a minimum circumscribed rectangle corresponding to the target frame according to the vertex coordinates of the target frame, and taking the length and width of the target frame and the length and width of the minimum circumscribed rectangle as training labels;
determining a difference between the prediction result and the training label according to a loss function;
iteratively training the deep learning model based on the difference;
and when the difference reaches a preset range, finishing the training of the deep learning model to obtain the deep learning model.
Further, according to the apparatus in the second aspect of the embodiment of the present application, the method for determining the target direction angle corresponding to the target obstacle by the calculation module is as follows:
α = arctan((a×l - b×w) / (b×l - a×w))
wherein α is a target direction angle, a is a length of the minimum bounding rectangle, b is a width of the minimum bounding rectangle, w is a width of the target frame, and l is a length of the target frame.
In a third aspect of the embodiments of the present application, a system for determining a target direction angle is provided, where the system includes a processor and a memory, where the memory stores a computer program, and the computer program is loaded and executed by the processor to implement the steps of the method for determining a target direction angle according to the first aspect of the embodiments of the present application.
In a fourth aspect of the embodiments of the present application, a computer-readable storage medium is provided, in which a computer program is stored, and the computer program, when being executed by a processor, is configured to implement the steps of the method for determining a target direction angle according to the first aspect of the embodiments of the present application.
Beneficial effects achieved by the application: a trained deep learning model is used to predict the target frame in the target obstacle point cloud data, the length and width of the target frame, and the length and width of the minimum circumscribed rectangle, from which the target direction angle is determined. Because the point cloud data in the training sample set truly reflects the shape of the target, the length and width of the minimum circumscribed rectangle are highly reliable, and using these length and width data as labels in the deep learning process avoids the unstable predictions, large variation range and severe jitter that arise in the prior art when the angle is regressed directly from neural network features.
The foregoing description is only an overview of the technical solutions of the present application. In order to make the technical solutions of the present application clearer and to enable their implementation according to the content of the description, a detailed description is given below with reference to the preferred embodiments of the present application and the accompanying drawings.
Drawings
FIG. 1 is a diagram of a path planning architecture provided in one embodiment of the present application;
FIG. 2 is a flow chart of a method for determining a heading angle provided by an embodiment of the present application;
FIG. 3 is a flow diagram for training a deep learning model provided by an embodiment of the present application;
FIG. 4 is a schematic diagram illustrating a relationship between a length and a width of a target frame and a length and a width of a minimum bounding rectangle according to an embodiment of the present application;
FIG. 5 is a block diagram of an apparatus for determining a direction angle provided in one embodiment of the present application;
fig. 6 is a block diagram of a system for determining a direction angle according to an embodiment of the present application.
Detailed Description
Embodiments of the present application are described in detail below in conjunction with the accompanying drawings and examples. The following examples are intended to illustrate the present application but not to limit its scope.
Fig. 1 is a schematic diagram of a path planning system architecture provided in an embodiment of the present application, which is capable of implementing the method for determining a direction angle and the device for determining a direction angle of the embodiments of the present application. As shown in fig. 1, the path planning system architecture includes a laser radar 101 and a path planning control device 102, between which a communication connection is established.
The laser radar 101 emits laser beams around the unmanned vehicle; when a beam meets an obstacle, it is reflected by the obstacle, and point cloud data of the obstacle is obtained.
The path planning control device 102 first obtains the point cloud data from the laser radar 101, and then inputs the point cloud data into a pre-trained deep learning model to predict a target frame, obtaining the length and width of the target frame and the length and width of the minimum circumscribed rectangle corresponding to the target frame. Finally, it determines the target direction angle corresponding to the target obstacle according to the length and width of the target frame and the length and width of the minimum circumscribed rectangle corresponding to the target frame, so as to assist the unmanned vehicle in path planning.
The path planning control device then determines the direction of the target obstacle according to the target direction angle and controls the traveling path of the unmanned vehicle.
It should be noted that the method for determining the direction angle provided in the embodiment of the present application is executed by the path planning control device 102, and accordingly, the device for determining the direction angle is disposed in the path planning control device 102.
Fig. 2 is a flowchart of a method for determining a target direction angle according to an embodiment of the present application. This embodiment takes as an example the method applied to the path planning system shown in fig. 1, with the path planning control device in that system as the execution subject of each step. The method at least comprises the following steps:
s201, acquiring point cloud data of the detected target obstacle in real time;
specifically, in this embodiment, the laser radar emits a plurality of laser beams to scan an obstacle in front of the vehicle, and if the obstacle is encountered, the laser beams are reflected by the obstacle, and a set of points reflected by the surface of the obstacle is a point cloud, where each point in the point cloud includes a three-dimensional coordinate (a position relative to the laser radar) and a laser reflection intensity of the point. Therefore, the position and distance of the obstacle can be detected according to the acquired point cloud data.
S202, adopting a pre-trained deep learning model to predict the point cloud data acquired in real time to obtain the length and width of the target frame and the length and width of the minimum circumscribed rectangle corresponding to the target frame.
The target frame indicates the target position of the target obstacle, and the length and the width of the target frame and the length and the width of the corresponding minimum circumscribed rectangle can be predicted by using a trained deep learning model according to the point cloud data acquired in real time.
The minimum circumscribed rectangle may also be referred to as the minimum bounding rectangle. It refers to the maximum extent of a two-dimensional shape (e.g., a set of points, lines or polygons) expressed in two-dimensional coordinates, that is, the rectangle whose boundary is defined by the maximum abscissa, minimum abscissa, maximum ordinate and minimum ordinate among the vertices of the given shape.
Fig. 3 is a flowchart of training the deep learning model. As shown in fig. 3, the step of training the deep learning model according to the embodiment of the present application includes:
s301, a training sample set is obtained.
The training sample set comprises point cloud data of a target obstacle and labels, wherein the labels indicate the vertex coordinates of a target frame and the label category of each laser point in the point cloud data;
the target frame of this embodiment may be a 3D target frame, the label may be 8 vertex coordinates corresponding to the target frame, the length and the width of the target frame may be determined according to the vertex coordinates of the target frame, and the minimum bounding rectangle may be a minimum bounding rectangle of a bottom surface or a top surface corresponding to the 3D target frame.
According to the coordinates of the 4 vertices of the top surface or the bottom surface, the length and the width of the corresponding minimum circumscribed rectangle can be obtained. The relationship between the minimum circumscribed rectangle and the target frame in the present application can be seen in fig. 4, and a sketch of this computation is given below.
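As an illustration only (the patent does not prescribe an implementation), the following sketch derives the target frame's length and width and the minimum circumscribed rectangle's length and width from the 4 vertex coordinates of one face. The function name and the axis assignment of a and b are assumptions, chosen to match the equations a = l×sinα + w×cosα and b = w×sinα + l×cosα used later, which reduce to a = w and b = l when the frame is axis-aligned.

```python
import numpy as np

def frame_and_mbr_dims(face: np.ndarray):
    """Compute (l, w, a, b) from the 4 (x, y) vertices of the target frame's
    top or bottom face, listed in order around the rectangle.

    l, w: length and width of the target frame (its two adjacent edge lengths);
    a, b: length and width of the axis-aligned minimum circumscribed rectangle.
    """
    # Target-frame dimensions: the two adjacent edge lengths of the face.
    e1 = np.linalg.norm(face[1] - face[0])
    e2 = np.linalg.norm(face[2] - face[1])
    l, w = max(e1, e2), min(e1, e2)

    # Minimum circumscribed rectangle: bounded by the extreme coordinates.
    # At alpha = 0 the frame's length lies along x, so b is the x-extent
    # and a the y-extent (a = w, b = l for an axis-aligned frame).
    b = float(face[:, 0].max() - face[:, 0].min())
    a = float(face[:, 1].max() - face[:, 1].min())
    return l, w, a, b
```

The four values obtained this way per labelled target are what the training labels described below consist of.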
S302, inputting the training sample set into the built deep learning model for learning to obtain a prediction result.
The prediction result of the embodiment of the application comprises the length and width of the target frame corresponding to the point cloud data and the length and width of the minimum circumscribed rectangle. Given the input training sample set, the deep learning model obtains, through target detection, the target frame corresponding to the target obstacle and the corresponding minimum circumscribed rectangle. In the learning process, according to the positional relationship between the minimum circumscribed rectangle and the target frame, the vertex coordinates of the target frame indicated by the label are converted by calculation into the length and width of the target frame and the length and width of the corresponding minimum circumscribed rectangle, and these values obtained from the label are used as the final training labels. The final output of the deep learning model is the length and width of the target frame and the length and width of the corresponding minimum circumscribed rectangle.
Regarding the deep learning model, an existing neural network model may be adopted, for example an FCOS network or a CNN; the embodiments of the present application are not limited in this respect. The principle and process of target detection are well known to those skilled in the art and are not described again here.
S303, determining the difference between the prediction result and the label according to a loss function.
S304, performing iterative training on the deep learning model based on the difference.
S305, finishing the training of the deep learning model when the difference reaches the preset range.
The specific training process of the deep learning model is well known in the art and is not described in detail here; an illustrative sketch of one possible implementation follows.
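Purely as an illustration of steps S302 to S305 (the patent leaves the backbone, the loss function and the stopping threshold open; PyTorch, mean-squared error and the names below are assumptions of this example):

```python
import torch
import torch.nn as nn

def train(model: nn.Module, loader, epochs: int = 50, tol: float = 1e-3):
    """Iterate until the difference between prediction and label is small."""
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()  # S303: difference between prediction and training label
    loss = torch.tensor(float("inf"))

    for _ in range(epochs):
        for points, labels in loader:    # labels: [l, w, a, b] per target
            optimizer.zero_grad()
            pred = model(points)         # S302: predicted [l, w, a, b]
            loss = loss_fn(pred, labels)
            loss.backward()              # S304: iterative training on the difference
            optimizer.step()
        if loss.item() < tol:            # S305: difference within the preset range
            break
    return model
```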
S203, determining the target direction angle according to the predicted length and width of the target frame and the corresponding length and width of the minimum circumscribed rectangle, so as to assist the unmanned vehicle in path planning.
Specifically, the direction of the target frame is also the direction of the target obstacle, and according to the positional relationship and the corresponding size relationship between the target frame and the minimum circumscribed rectangle, the target direction angle can be obtained, so as to assist the path planning of the unmanned vehicle.
Fig. 4 is a schematic diagram showing a relationship between a length and a width of a minimum circumscribed rectangle of the target obstacle and a length and a width of the target obstacle, where the target direction angle obtained in this embodiment refers to a deviation angle of the target obstacle with respect to a positive direction of an x-axis in a world coordinate system.
As shown in fig. 4, it can be seen that:
a=l×sinα+w×cosα
b=w×sinα+l×cosα
Solving these equations simultaneously gives sinα = (a×l - b×w)/(l² - w²) and cosα = (b×l - a×w)/(l² - w²), and hence:
α = arctan((a×l - b×w) / (b×l - a×w))
wherein α is a target direction angle, a is a length of the minimum bounding rectangle, b is a width of the minimum bounding rectangle, w is a width of the target frame, and l is a length of the target frame.
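A minimal sketch of this calculation (the function name is an assumption of this illustration; atan2 is used rather than a plain arctangent so that the signs of numerator and denominator fix the quadrant):

```python
import math

def direction_angle(a: float, b: float, w: float, l: float) -> float:
    """Target direction angle alpha (radians) relative to the x-axis.

    Solves a = l*sin(alpha) + w*cos(alpha) and b = w*sin(alpha) + l*cos(alpha):
    sin(alpha) and cos(alpha) are proportional to a*l - b*w and b*l - a*w.
    """
    return math.atan2(a * l - b * w, b * l - a * w)
```

For example, a target frame with l = 4.2 and w = 1.8 rotated by 0.3 rad has a ≈ 2.96 and b ≈ 4.54, and direction_angle(2.96, 4.54, 1.8, 4.2) returns approximately 0.30.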
In summary, the trained deep learning model is used to predict the target frame in the target obstacle point cloud data, the length and width of the target frame, and the length and width of the minimum circumscribed rectangle of the target frame, from which the target direction angle is determined. Because the point cloud data in the training sample set truly reflects the shape of the target, the obtained length and width of the target frame and of its minimum circumscribed rectangle are highly reliable, and using these length and width data as labels in the deep learning process avoids the unstable predictions, large variation range and severe jitter that arise in the prior art when angles are regressed directly from neural network features.
Fig. 5 is a block diagram of an apparatus for determining a target direction angle according to an embodiment of the present application, and this embodiment takes as an example a path planning control apparatus applied to the path planning system shown in fig. 1. The device at least comprises the following modules:
a data acquisition module configured to acquire point cloud data of a target obstacle in real time;
the deep learning module is configured to perform target frame prediction on the point cloud data acquired in real time by adopting a pre-trained deep learning model to obtain the length and width of the target frame and the length and width of a minimum circumscribed rectangle corresponding to the target frame;
and the calculation module is configured to determine a target direction angle corresponding to the target obstacle according to the length and the width of the target frame and the length and the width of the minimum circumscribed rectangle corresponding to the target frame, so as to assist the unmanned vehicle in path planning.
Further, the deep learning module is further configured to train a deep learning model, including:
acquiring a training sample set, wherein the training sample set comprises point cloud data and corresponding labels, and the labels indicate vertex coordinates of a target frame;
inputting the training sample set into a deep learning model for learning to obtain the length and width of the target frame and the length and width prediction result of the corresponding minimum circumscribed rectangle; in the learning process, determining the length and width of a target frame and the length and width of a minimum circumscribed rectangle corresponding to the target frame according to the vertex coordinates of the target frame indicated by a label, and taking the length and width of the target frame and the length and width of the minimum circumscribed rectangle as training labels;
determining a difference between the prediction result and the training label according to a loss function;
iteratively training the deep learning model based on the difference;
and when the difference reaches a preset range, finishing the training of the deep learning model to obtain the deep learning model.
Further, the method by which the calculation module determines the target direction angle corresponding to the target obstacle is as follows:
α = arctan((a×l - b×w) / (b×l - a×w))
where α is the target direction angle, a is the length of the minimum bounding rectangle, b is the width of the minimum bounding rectangle, w is the width of the target frame, and l is the length of the target frame.
The present embodiment refers to the above method embodiments for the relevant details of the apparatus for determining a target direction angle.
It should be noted that: in the above embodiment, when the apparatus for determining a target direction angle determines a target direction angle, only the division of the functional modules is illustrated, and in practical applications, the function distribution may be performed by different functional modules according to needs, that is, the internal structure of the apparatus for determining a target direction angle is divided into different functional modules, so as to perform all or part of the functions described above. In addition, the apparatus for determining a target direction angle and the method for determining a target direction angle provided by the above embodiments belong to the same concept, and specific implementation processes thereof are detailed in the method embodiments and are not described herein again.
Fig. 6 is a block diagram of a system for determining a target direction angle, which may be a tablet computer, a notebook computer, a desktop computer, or a server according to an embodiment of the present application. A system for determining a target bearing angle includes at least a processor and a memory.
The processor may include one or more processing cores, for example a 4-core or 6-core processor. The processor may be implemented in at least one hardware form among DSP (Digital Signal Processing), FPGA (Field-Programmable Gate Array) and PLA (Programmable Logic Array). The processor may also include a main processor and a coprocessor: the main processor, also called a CPU (Central Processing Unit), processes data in the awake state; the coprocessor is a low-power processor for processing data in the standby state. In some embodiments, the processor may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content that the display screen needs to display. In some embodiments, the processor may further include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
The memory may include one or more computer-readable storage media, which may be non-transitory. The memory may also include high speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in a memory is used to store at least one instruction for execution by a processor to implement a method of determining a target bearing angle provided by a method embodiment of the present application.
In some embodiments, the system for determining the target direction angle may further comprise: a peripheral interface and at least one peripheral. The processor, memory and peripheral interface may be connected by bus or signal lines. Each peripheral may be connected to the peripheral interface via a bus, signal line, or circuit board. Illustratively, peripheral devices include, but are not limited to: radio frequency circuit, touch display screen, audio circuit, power supply, etc.
Of course, the system for determining the target direction angle may also include fewer or more components, which is not limited by the embodiment.
Optionally, the present application further provides a computer-readable storage medium, in which a program is stored, the program being loaded and executed by a processor to implement the steps of the method for determining a target direction angle according to the above-mentioned method embodiment.
Optionally, the present application further provides a computer product, which includes a computer-readable storage medium, in which a program is stored, the program being loaded and executed by a processor to implement the steps of the method for determining a target direction angle of the above-mentioned method embodiments.
The technical features of the embodiments described above may be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the embodiments described above are not described, but should be considered as being within the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
The above-mentioned embodiments only express several embodiments of the present application, and the description thereof is more specific and detailed, but not construed as limiting the scope of the invention. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the concept of the present application, which falls within the scope of protection of the present application. Therefore, the protection scope of the present patent shall be subject to the appended claims.

Claims (8)

1. A method of determining a target bearing angle, the method comprising:
acquiring point cloud data of the detected target obstacle in real time;
predicting a target frame of the point cloud data acquired in real time by adopting a pre-trained deep learning model to obtain the length and width of the target frame and the length and width of a minimum circumscribed rectangle corresponding to the target frame;
and determining a target direction angle corresponding to the target obstacle according to the length and the width of the target frame and the length and the width of the minimum circumscribed rectangle corresponding to the target frame so as to assist the unmanned vehicle in path planning.
2. The method of claim 1, wherein the step of training the deep learning model comprises:
acquiring a training sample set, wherein the training sample set comprises point cloud data and corresponding labels, and the labels indicate vertex coordinates of a target frame corresponding to the point cloud data;
inputting the training sample set into a deep learning model for learning to obtain the length and width of the target frame and the length and width prediction result of the corresponding minimum circumscribed rectangle; in the learning process, determining the length and the width of a target frame and the length and the width of a minimum circumscribed rectangle corresponding to the target frame according to the vertex coordinates of the target frame, and taking the length and the width of the target frame and the length and the width of the minimum circumscribed rectangle corresponding to the target frame as training labels;
determining a difference between the prediction result and the training label according to a loss function;
iteratively training the deep learning model based on the difference;
and when the difference reaches a preset range, finishing the training of the deep learning model.
3. The method of claim 1, wherein the method for determining the target direction angle corresponding to the target obstacle comprises:
α = arctan((a×l - b×w) / (b×l - a×w))
wherein α is a target direction angle, a is a length of the minimum bounding rectangle, b is a width of the minimum bounding rectangle, w is a width of the target frame, and l is a length of the target frame.
4. An apparatus for determining a target bearing angle, the apparatus comprising:
a data acquisition module configured to acquire point cloud data of a target obstacle in real time;
the deep learning module is configured to perform target frame prediction on the point cloud data acquired in real time by adopting a pre-trained deep learning model to obtain the length and width of the target frame and the length and width of a minimum circumscribed rectangle corresponding to the target frame;
and the angle calculation module is configured to determine a target direction angle corresponding to the target obstacle according to the length and the width of the target frame and the length and the width of the minimum circumscribed rectangle corresponding to the target frame, so as to assist the unmanned vehicle in path planning.
5. The apparatus of claim 4, wherein the deep learning module is further configured to train a deep learning model, comprising:
acquiring a training sample set, wherein the training sample set comprises point cloud data and labels, and the labels indicate the vertex coordinates of a target frame and the label types of all laser points in the point cloud data;
inputting the training sample set into a deep learning model for learning to obtain the length and width of the target frame and the length and width prediction result of the corresponding minimum circumscribed rectangle; in the learning process, determining the length and width of a target frame and the length and width of a minimum circumscribed rectangle corresponding to the target frame according to the vertex coordinates of the target frame, and taking the length and width of the target frame and the length and width of the minimum circumscribed rectangle as training labels;
determining a difference between the prediction result and the training label according to a loss function;
iteratively training the deep learning model based on the difference;
and when the difference reaches a preset range, finishing the training of the deep learning model.
6. The apparatus of claim 4, wherein the calculation module determines the target direction angle corresponding to the target obstacle by:
α = arctan((a×l - b×w) / (b×l - a×w))
wherein α is a target direction angle, a is a length of the minimum bounding rectangle, b is a width of the minimum bounding rectangle, w is a width of the target frame, and l is a length of the target frame.
7. A system for determining a target direction angle, the system comprising a processor and a memory, the memory having stored therein a computer program, wherein the computer program is loaded and executed by the processor to implement the method of determining a target direction angle according to any one of claims 1 to 3.
8. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, is adapted to carry out the method of determining a target direction angle according to any one of claims 1 to 3.
CN202110047651.6A 2021-01-14 2021-01-14 Method, device and system for determining target direction angle and storage medium Pending CN112800873A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110047651.6A CN112800873A (en) 2021-01-14 2021-01-14 Method, device and system for determining target direction angle and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110047651.6A CN112800873A (en) 2021-01-14 2021-01-14 Method, device and system for determining target direction angle and storage medium

Publications (1)

Publication Number Publication Date
CN112800873A true CN112800873A (en) 2021-05-14

Family

ID=75810770

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110047651.6A Pending CN112800873A (en) 2021-01-14 2021-01-14 Method, device and system for determining target direction angle and storage medium

Country Status (1)

Country Link
CN (1) CN112800873A (en)

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109901567A (en) * 2017-12-08 2019-06-18 百度在线网络技术(北京)有限公司 Method and apparatus for exporting obstacle information
WO2020151166A1 (en) * 2019-01-23 2020-07-30 平安科技(深圳)有限公司 Multi-target tracking method and device, computer device and readable storage medium
CN111507126A (en) * 2019-01-30 2020-08-07 杭州海康威视数字技术股份有限公司 Alarming method and device of driving assistance system and electronic equipment
WO2020186444A1 (en) * 2019-03-19 2020-09-24 深圳市大疆创新科技有限公司 Object detection method, electronic device, and computer storage medium
CN111723608A (en) * 2019-03-20 2020-09-29 杭州海康威视数字技术股份有限公司 Alarming method and device of driving assistance system and electronic equipment
CN110472553A (en) * 2019-08-12 2019-11-19 北京易航远智科技有限公司 Target tracking method, computing device and the medium of image and laser point cloud fusion
CN110688902A (en) * 2019-08-30 2020-01-14 智慧互通科技有限公司 Method and device for detecting vehicle area in parking space
CN111291786A (en) * 2020-01-17 2020-06-16 清华大学 Vehicle-mounted vision real-time multi-target course angle estimation method and device
CN111967360A (en) * 2020-08-06 2020-11-20 苏州易航远智智能科技有限公司 Target vehicle attitude detection method based on wheels

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115857502A (en) * 2022-11-30 2023-03-28 上海木蚁机器人科技有限公司 Travel control method and electronic device
CN115857502B (en) * 2022-11-30 2023-12-12 上海木蚁机器人科技有限公司 Driving control method and electronic device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 215123 g2-1901 / 1902 / 2002, No. 88, Jinjihu Avenue, Suzhou Industrial Park, Suzhou City, Jiangsu Province

Applicant after: Zhixing Automotive Technology (Suzhou) Co.,Ltd.

Address before: 215123 g2-1901 / 1902 / 2002, No. 88, Jinjihu Avenue, Suzhou Industrial Park, Suzhou City, Jiangsu Province

Applicant before: IMOTION AUTOMOTIVE TECHNOLOGY (SUZHOU) Co.,Ltd.