WO2020147500A1 - Method and system for processing ultrasonic array obstacle detection results - Google Patents

Method and system for processing ultrasonic array obstacle detection results

Info

Publication number
WO2020147500A1
WO2020147500A1 · PCT/CN2019/126392 · CN2019126392W
Authority
WO
WIPO (PCT)
Prior art keywords
ultrasonic sensor
obstacle
neural network
network model
collected
Prior art date
Application number
PCT/CN2019/126392
Other languages
English (en)
French (fr)
Inventor
朱晓星 (ZHU Xiaoxing)
刘祥 (LIU Xiang)
杨凡 (YANG Fan)
Original Assignee
北京百度网讯科技有限公司 (Beijing Baidu Netcom Science and Technology Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京百度网讯科技有限公司 (Beijing Baidu Netcom Science and Technology Co., Ltd.)
Priority to EP19910456.3A priority Critical patent/EP3839564B1/en
Priority to US17/278,248 priority patent/US11933921B2/en
Priority to JP2021518846A priority patent/JP7185811B2/ja
Publication of WO2020147500A1 publication Critical patent/WO2020147500A1/zh

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/52 Details of systems according to group G01S15/00
    • G01S7/52004 Means for monitoring or calibrating
    • G01S15/00 Systems using the reflection or reradiation of acoustic waves, e.g. sonar systems
    • G01S15/02 Systems using the reflection of acoustic waves
    • G01S15/06 Systems determining the position data of a target
    • G01S15/42 Simultaneous measurement of distance and other co-ordinates
    • G01S15/86 Combinations of sonar systems with lidar systems; combinations of sonar systems with systems not using wave reflection
    • G01S15/88 Sonar systems specially adapted for specific applications
    • G01S15/93 Sonar systems specially adapted for anti-collision purposes
    • G01S15/931 Sonar systems specially adapted for anti-collision purposes of land vehicles
    • G01S2015/937 Sensor installation details
    • G01S2015/938 Sensor installation details in the bumper area
    • G01S17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/02 Systems using the reflection of electromagnetic waves other than radio waves
    • G01S17/06 Systems determining position data of a target
    • G01S17/42 Simultaneous measurement of distance and other co-ordinates
    • G01S17/88 Lidar systems specially adapted for specific applications
    • G01S17/93 Lidar systems specially adapted for anti-collision purposes
    • G01S17/931 Lidar systems specially adapted for anti-collision purposes of land vehicles
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/044 Recurrent networks, e.g. Hopfield networks
    • G06N3/045 Combinations of networks
    • G06N3/08 Learning methods

Definitions

  • This application relates to the field of automatic control, and in particular to a method and system for processing ultrasonic array obstacle detection results.
  • An unmanned vehicle is an intelligent car, also called a wheeled mobile robot, that relies mainly on an in-vehicle, computer-based intelligent driving controller to drive itself without a human operator.
  • Unmanned vehicles integrate many technologies, including automatic control, systems architecture, artificial intelligence, and visual computing. They are a product of highly developed computer science, pattern recognition, and intelligent control technology, serve as an important indicator of a country's research strength and industrial level, and have broad application prospects in national defense and the national economy.
  • Ultrasonic radar is often mounted on unmanned vehicles to implement obstacle avoidance. However, its detection accuracy is low, making false detections and missed detections likely, and existing methods that use rule-based voting to check and correct the detection results of ultrasonic arrays struggle to achieve high accuracy.
  • Various aspects of the present application provide a method and system for processing ultrasonic array obstacle detection results, which are used to improve the accuracy of ultrasonic array obstacle detection, avoid false detections and missed detections, and improve driving safety.
  • In one aspect, a method for processing obstacle detection results of an ultrasonic array is provided, in which the obstacle coordinates collected by each ultrasonic sensor in the ultrasonic sensor array are processed according to a true/false identification.
  • In an implementation, inputting the obstacle coordinates collected by each ultrasonic sensor in the ultrasonic sensor array into the pre-trained neural network model includes:
  • inputting the data sequence into the pre-trained neural network model.
  • In an implementation, the neural network model is trained by the following method:
  • the obstacle coordinates collected by each ultrasonic sensor are marked as true or false according to the obstacle coordinates collected by the unified lidar, and training samples are generated;
  • marking the obstacle coordinates collected by each ultrasonic sensor as true or false according to the obstacle coordinates collected by the unified lidar includes:
  • if the error between the obstacle coordinates collected by the lidar and those collected by the ultrasonic sensor is within a preset threshold range, marking them as true; if it exceeds the preset threshold range, marking them as false.
  • generating the training samples includes:
  • using the feature values of the obstacle coordinates collected by the ultrasonic sensor to be processed and its N adjacent ultrasonic sensors, together with the true/false label of the obstacle coordinates collected by the ultrasonic sensor to be processed, to generate training samples.
  • the neural network model is a convolutional neural network model.
  • the neural network model is a long short-term memory (LSTM) neural network model.
  • training the neural network model according to the training samples includes:
  • training the long short-term memory neural network model using multiple training samples obtained at consecutive times.
  • In another aspect, an ultrasonic array obstacle detection result processing system is provided, including:
  • an obtaining module, used to obtain the obstacle coordinates collected by each ultrasonic sensor in the ultrasonic sensor array;
  • a neural network identification module, used to input the obstacle coordinates collected by each ultrasonic sensor in the ultrasonic sensor array into a pre-trained neural network model and obtain the true/false identification output by the model for the obstacle coordinates collected by each ultrasonic sensor;
  • a processing module, used to process the obstacle coordinates collected by each ultrasonic sensor in the ultrasonic sensor array according to the true/false identification.
  • In an implementation, the neural network identification module is specifically used to:
  • input the data sequence into the pre-trained neural network model.
  • In an implementation, the system further includes a neural network training module, used to:
  • mark the obstacle coordinates collected by each ultrasonic sensor as true or false according to the obstacle coordinates collected by the unified lidar, and generate training samples;
  • marking the obstacle coordinates collected by each ultrasonic sensor as true or false according to the obstacle coordinates collected by the unified lidar includes:
  • if the error between the obstacle coordinates collected by the lidar and those collected by the ultrasonic sensor is within a preset threshold range, marking them as true; if it exceeds the preset threshold range, marking them as false.
  • generating the training samples includes:
  • using the feature values of the obstacle coordinates collected by the ultrasonic sensor to be processed and its N adjacent ultrasonic sensors, together with the true/false label of the obstacle coordinates collected by the ultrasonic sensor to be processed, to generate training samples.
  • the neural network model is a convolutional neural network model.
  • the neural network model is a long short-term memory (LSTM) neural network model.
  • training the neural network model according to the training samples includes:
  • training the long short-term memory neural network model using multiple training samples obtained at consecutive times.
  • Another aspect of the present application provides a computer device, including a memory, a processor, and a computer program stored in the memory and executable on the processor; when the processor executes the program, the method described above is implemented.
  • Another aspect of the present application provides a computer-readable storage medium on which a computer program is stored; when the program is executed by a processor, the method described above is implemented.
  • the embodiments of the present application can improve the accuracy of ultrasonic array obstacle detection, avoid false detections and missed detections, and improve driving safety.
  • FIG. 1 is a schematic flowchart of a method for processing ultrasonic array obstacle detection results according to an embodiment of the application;
  • FIG. 2 is a schematic flowchart of the neural network model training method in the method for processing ultrasonic array obstacle detection results provided by an embodiment of the application;
  • FIG. 3 is a schematic structural diagram of an ultrasonic array obstacle detection result correction system provided by an embodiment of the application;
  • FIG. 4 is a schematic structural diagram of the training module of the ultrasonic array obstacle detection result correction system provided by an embodiment of the application;
  • FIG. 5 shows a block diagram of an exemplary computer system/server 012 suitable for implementing embodiments of the application.
  • FIG. 1 is a schematic flowchart of a method for processing ultrasonic array obstacle detection results provided by an embodiment of the application. As shown in FIG. 1, the method includes:
  • Step S11: Obtain the obstacle coordinates collected by each ultrasonic sensor in the ultrasonic sensor array;
  • Step S12: Input the obstacle coordinates collected by each ultrasonic sensor in the ultrasonic sensor array into a pre-trained neural network model, and obtain the true/false identification output by the model for the obstacle coordinates collected by each ultrasonic sensor;
  • Step S13: Process the obstacle coordinates collected by each ultrasonic sensor in the ultrasonic sensor array according to the true/false identification.
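The three steps can be sketched as a minimal pipeline (all names here are hypothetical; the neural network is stubbed out as a callable that returns one true/false flag per sensor, since the patent does not specify an interface):

```python
from typing import Callable, List, Optional, Tuple

Coord = Tuple[float, float]  # obstacle (x, y) in the common reference frame

def process_detections(
    coords: List[Coord],
    model: Callable[[List[Coord]], List[bool]],
) -> List[Optional[Coord]]:
    """Steps S11-S13: take per-sensor obstacle coordinates, query the
    pre-trained model for a true/false identification per sensor, and
    keep only the coordinates identified as true."""
    flags = model(coords)                 # S12: true/false per sensor
    return [c if ok else None             # S13: discard false detections
            for c, ok in zip(coords, flags)]

# Usage with a stub model that rejects the second sensor's reading:
stub = lambda cs: [True, False, True]
kept = process_detections([(1.0, 0.2), (9.9, 9.9), (1.1, 0.1)], stub)
```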
  • In a preferred implementation of step S11,
  • the ultrasonic sensors in the ultrasonic sensor array are uniformly distributed on the front bumper of the vehicle; because their coordinate systems differ, the obstacle coordinates they collect need to be unified into a reference coordinate system. For example, the coordinates collected by each ultrasonic sensor can be converted into the vehicle coordinate system.
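Such a conversion is a standard 2D rigid transform from each sensor's local frame into the vehicle frame; a sketch follows, where the mounting pose values in the usage line are illustrative, not taken from the patent:

```python
import math

def to_vehicle_frame(x_s, y_s, mount_x, mount_y, mount_yaw):
    """Transform an obstacle point from one ultrasonic sensor's local
    frame into the common vehicle frame, given the sensor's mounting
    position (mount_x, mount_y) and orientation mount_yaw (radians)
    on the bumper."""
    c, s = math.cos(mount_yaw), math.sin(mount_yaw)
    return (mount_x + c * x_s - s * y_s,
            mount_y + s * x_s + c * y_s)

# A sensor mounted 1 m ahead of the vehicle origin, facing straight ahead,
# reporting an obstacle 2 m in front of it:
vx, vy = to_vehicle_frame(2.0, 0.0, 1.0, 0.0, 0.0)
```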
  • In a preferred implementation of step S12,
  • the obstacle coordinates collected by each ultrasonic sensor in the ultrasonic sensor array are input into a pre-trained neural network model, and the true and false identifications for the obstacle coordinates collected by each ultrasonic sensor are obtained by the pre-trained neural network model.
  • For example, the ultrasonic sensor array includes 10 ultrasonic sensors evenly distributed on the front bumper of the vehicle; in a given frame, some of them, for example the ultrasonic sensors on the right, may not have collected obstacle information.
  • the pre-trained neural network is a CNN convolutional neural network.
  • the pre-trained neural network is an LSTM long short-term memory neural network.
  • The data sequence at time t is fed into the LSTM long short-term memory neural network to calculate the LSTM result at time t and determine whether the obstacle coordinates collected by the selected ultrasonic sensor are true or false.
  • When the LSTM long short-term memory neural network receives the data sequence at time t, the LSTM hidden layer states from time t-1 to time t-n are already available; from these, the hidden layer states that satisfy a preset rule are selected and used to calculate the LSTM result at time t.
  • The preset rule may include, but is not limited to: selecting at least one most-different LSTM hidden layer state from the hidden layer states from time t-1 to time t-n as the selection result; and/or selecting at least one hidden layer state from time t-1 to time t-n using the sparsity of the L0-norm; and/or selecting at least one hidden layer state from time t-1 to time t-n according to manual experience. It can be understood that corresponding selection rules can also be designed according to actual needs (such as new tasks).
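One possible reading of the "most different" selection rule can be sketched as follows; the ranking criterion (Euclidean distance from the element-wise mean hidden state) is an assumption, since the text does not fix how "most different" is measured:

```python
def select_hidden_states(states, k=1):
    """Rank the cached hidden states h(t-1)..h(t-n) by their Euclidean
    distance from the element-wise mean state and keep the k most
    different ones. `states` is a list of equal-length float vectors."""
    n, dim = len(states), len(states[0])
    mean = [sum(s[j] for s in states) / n for j in range(dim)]
    dist = lambda s: sum((s[j] - mean[j]) ** 2 for j in range(dim)) ** 0.5
    return sorted(states, key=dist, reverse=True)[:k]

# The outlier state [5.0, 5.0] is the "most different" of the three:
picked = select_hidden_states([[0.1, 0.1], [0.1, 0.2], [5.0, 5.0]], k=1)
```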
  • In a preferred implementation of step S13,
  • the obstacle coordinates collected by each ultrasonic sensor in the ultrasonic sensor array are processed according to the true and false identification.
  • If the identification result output by the pre-trained neural network is true, the obstacle coordinates collected by the ultrasonic sensor to be processed are retained; if the identification result is false, the obstacle coordinates collected by that ultrasonic sensor are discarded.
  • obstacle avoidance is performed according to the processed obstacle coordinates.
  • Preferably, a CNN convolutional neural network and an LSTM long short-term memory neural network can each output an identification result, and a weighted sum of the two is taken to determine the final identification result.
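The weighted-sum fusion of the two models' outputs might look like the following; the equal weights and the 0.5 decision threshold are illustrative values, not taken from the patent:

```python
def fuse_identifications(p_cnn, p_lstm, w_cnn=0.5, w_lstm=0.5, threshold=0.5):
    """Combine the two models' per-sensor 'true' probabilities with a
    weighted sum; a coordinate is kept (identified true) when the
    combined score reaches the threshold."""
    return [(w_cnn * a + w_lstm * b) >= threshold
            for a, b in zip(p_cnn, p_lstm)]

# Three sensors: both models agree on the first, both reject the second,
# and they disagree on the third (combined score 0.45 -> rejected):
flags = fuse_identifications([0.9, 0.2, 0.6], [0.8, 0.1, 0.3])
```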
  • the neural network model is trained through the following steps:
  • Step S21: Construct an obstacle test scene;
  • preferably, the obstacle test scene is a simple, relatively clean scene in which only a single obstacle is set, so that during the test of the ultrasonic array every ultrasonic sensor returns information about the same obstacle;
  • Step S22: Collect the coordinates of the obstacle as measured by the ultrasonic sensor array and the coordinates of the same obstacle as measured by the lidar, and unify them into the reference coordinate system;
  • each ultrasonic sensor in the ultrasonic sensor array is evenly distributed on the front bumper of the vehicle;
  • the lidar is a single-line lidar installed at the center of the front bumper of the vehicle; it collects precise distance data of the obstacle, which can be used as the ground-truth value for marking the obstacle coordinates collected by the ultrasonic sensors.
  • Since the ultrasonic sensors are evenly distributed on the front bumper of the vehicle and the lidar is installed at its center, the coordinates in each ultrasonic sensor coordinate system and in the lidar coordinate system must be converted into the reference coordinate system. Preferably, both are uniformly converted into the vehicle coordinate system.
  • The initial spatial configuration of the ultrasonic sensors and the lidar is known in advance and can be obtained from measurements of their mounting positions on the body of the unmanned vehicle. Based on it, the coordinates of the obstacle in each ultrasonic sensor's coordinate system and in the lidar's coordinate system are converted into the vehicle coordinate system.
  • Step S23: Mark the obstacle coordinates collected by each ultrasonic sensor according to the obstacle coordinates collected by the unified lidar, and generate training samples;
  • the label marking each collected obstacle coordinate is either true or false.
  • Taking ultrasonic sensor 1 in the array as an example: if the error between the distance data returned by the lidar and the distance data returned by ultrasonic sensor 1 is within a preset threshold range, the reading is marked as true; if it exceeds the threshold range, it is marked as false, and the ultrasonic sensor is considered to have produced a false detection or a missed detection.
  • Preferably, the preset threshold is 20 cm, which covers about 99% of the output.
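The labeling rule above (error against the lidar ground truth within the threshold → true, otherwise false) is a one-line comparison; the helper name below is illustrative:

```python
def label_detection(lidar_dist, ultra_dist, threshold=0.20):
    """Mark an ultrasonic reading true when its error against the lidar
    ground-truth distance (metres) is within the preset threshold
    (20 cm in the example above), else false."""
    return abs(lidar_dist - ultra_dist) <= threshold

# Lidar says 2.00 m; ultrasonic readings 2.05, 2.50, and 1.85 m:
labels = [label_detection(2.00, u) for u in (2.05, 2.50, 1.85)]
```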
  • N is a positive integer smaller than the total number of ultrasonic sensors in the array. For example, the ultrasonic sensor array includes 10 ultrasonic sensors evenly distributed on the front bumper of the vehicle; the feature values of the N ultrasonic sensors adjacent to ultrasonic sensor 1, together with the label of ultrasonic sensor 1, are used to generate training samples.
  • A sample whose label is true is a positive sample; a sample whose label is false is a negative sample.
  • Preferably, the feature values of the N ultrasonic sensors adjacent to ultrasonic sensor 1 and the label of ultrasonic sensor 1 at consecutive times are used to generate training samples.
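The per-sensor sample construction described above (feature values of a sensor and its neighbours, paired with that sensor's label, over consecutive times) might be sketched as follows; clipping the neighbour window at the ends of the array is an assumed detail not spelled out in the text:

```python
def make_samples(readings, labels, i, n):
    """Build one training sample per time step for sensor i: the feature
    vector is the distances reported by sensor i and its n neighbours on
    each side (clipped at the array ends), paired with sensor i's
    true/false label at that time. `readings` is a list of per-time
    lists of per-sensor distances; `labels` is sensor i's label per time."""
    lo, hi = max(0, i - n), min(len(readings[0]), i + n + 1)
    return [(row[lo:hi], lab) for row, lab in zip(readings, labels)]

# Two consecutive time steps, 4 sensors, samples for sensor index 1:
samples = make_samples(
    readings=[[1.0, 1.1, 1.2, 9.9], [1.0, 1.1, 1.3, 9.9]],
    labels=[True, False], i=1, n=1)
```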
  • In this way, training samples for each ultrasonic sensor can be obtained automatically, without manual marking with laser rangefinders or other equipment, which greatly improves the speed and accuracy of sample acquisition.
  • Step S24: Train the neural network model according to the training samples.
  • the neural network model is a CNN convolutional neural network model.
  • an SGD (stochastic gradient descent) training algorithm is used on multiple sample pairs to obtain the final CNN model parameters.
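A minimal SGD training loop in this spirit might look like the following. A logistic-regression classifier stands in for the CNN so the sketch stays dependency-free; the per-sample gradient update is the part the patent names, and all names and hyperparameters here are illustrative:

```python
import math
import random

def sgd_train(samples, lr=0.1, epochs=500, seed=0):
    """Plain SGD over (feature_vector, true/false label) pairs with
    log-loss; returns the learned weights and bias."""
    random.seed(seed)
    dim = len(samples[0][0])
    w, b = [0.0] * dim, 0.0
    for _ in range(epochs):
        random.shuffle(samples)
        for x, y in samples:
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))      # sigmoid probability
            g = p - (1.0 if y else 0.0)         # dLoss/dz for log-loss
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g

    return w, b

def predict(w, b, x):
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z)) >= 0.5

# Toy data: small lidar/ultrasonic error -> true, large error -> false.
data = [([0.05], True), ([0.10], True), ([0.60], False), ([0.90], False)]
w, b = sgd_train(list(data))
```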
  • the feature values and labels of the N adjacent ultrasonic sensors are used to generate training samples for training the CNN convolutional neural network model.
  • The obstacle detection results of each ultrasonic sensor to be corrected are labeled by its corresponding trained CNN convolutional neural network model. If the label of a detection result is true, obstacle avoidance is carried out according to the collected obstacle coordinates; if the identification result output by the pre-trained neural network is false, the collection result is discarded and obstacle avoidance is based on the next collected result.
  • the neural network model is an LSTM long short-term memory neural network model.
  • the LSTM long short-term memory neural network model is trained using multiple labeled training samples obtained at consecutive times.
  • The training samples are used to train the LSTM long short-term memory neural network model; the trained values of the LSTM network parameters are determined from their initial values by optimizing (i.e., maximizing or minimizing) an objective function.
  • In this embodiment, the result returned by the lidar is used as the ground truth for marking the ultrasonic array obstacle detection results, and the marked detection results are used as training samples to train the neural network model. The ultrasonic obstacle detection results are then corrected according to the trained neural network model, which improves the accuracy of ultrasonic array obstacle detection, avoids false detections and missed detections, and improves driving safety.
  • FIG. 3 is a schematic structural diagram of an ultrasonic array obstacle detection result correction system provided by an embodiment of the application, as shown in FIG. 3, including:
  • the acquiring module 31 is used to acquire obstacle coordinates collected by each ultrasonic sensor in the ultrasonic sensor array;
  • the neural network identification module 32 is used to input the obstacle coordinates collected by each ultrasonic sensor in the ultrasonic sensor array into a pre-trained neural network model and obtain the true/false identification output by the model for the obstacle coordinates collected by each ultrasonic sensor;
  • the processing module 33 is used to process the obstacle coordinates collected by each ultrasonic sensor in the ultrasonic sensor array according to the true/false identification.
  • the acquiring module 31 acquires the obstacle coordinates collected by each ultrasonic sensor in the ultrasonic sensor array.
  • the ultrasonic sensors in the ultrasonic sensor array are evenly distributed on the front bumper of the vehicle, the coordinate systems of the ultrasonic sensors are different, and the obstacle coordinates collected by the ultrasonic sensors need to be unified into the reference coordinate system.
  • the coordinates of each ultrasonic sensor can be converted into the vehicle coordinate system.
  • The neural network identification module 32 inputs the obstacle coordinates collected by each ultrasonic sensor in the ultrasonic sensor array into the pre-trained neural network model and obtains the true/false identification output by the model for the obstacle coordinates collected by each ultrasonic sensor.
  • For example, the ultrasonic sensor array includes 10 ultrasonic sensors evenly distributed on the front bumper of the vehicle; in a given frame, some of them, for example the ultrasonic sensors on the right, may not have collected obstacle information.
  • the pre-trained neural network is a CNN convolutional neural network.
  • the pre-trained neural network is an LSTM long short-term memory neural network.
  • The data sequence at time t is fed into the LSTM long short-term memory neural network to calculate the LSTM result at time t and determine whether the obstacle coordinates collected by the selected ultrasonic sensor are true or false.
  • When the LSTM long short-term memory neural network receives the data sequence at time t, the LSTM hidden layer states from time t-1 to time t-n are already available; from these, the hidden layer states that satisfy a preset rule are selected and used to calculate the LSTM result at time t.
  • The preset rule may include, but is not limited to: selecting at least one most-different LSTM hidden layer state from the hidden layer states from time t-1 to time t-n as the selection result; and/or selecting at least one hidden layer state from time t-1 to time t-n using the sparsity of the L0-norm; and/or selecting at least one hidden layer state from time t-1 to time t-n according to manual experience. It can be understood that corresponding selection rules can also be designed according to actual needs (such as new tasks).
  • In a preferred implementation of the processing module 33,
  • the processing module 33 is configured to process the obstacle coordinates collected by each ultrasonic sensor in the ultrasonic sensor array according to the true and false identification.
  • If the identification result output by the pre-trained neural network is true, the obstacle coordinates collected by the ultrasonic sensor to be processed are retained; if the identification result is false, the obstacle coordinates collected by that ultrasonic sensor are discarded.
  • obstacle avoidance is performed according to the processed obstacle coordinates.
  • a CNN convolutional neural network and an LSTM long short-term memory neural network can be used to output the identification results respectively, and a weighted sum is performed to determine the final identification result.
  • The system also includes a training module (not shown in the figure) for training the neural network model; the training module includes the following sub-modules:
  • The construction sub-module 41 is used to construct an obstacle test scene;
  • preferably, the obstacle test scene is a simple, relatively clean scene in which only a single obstacle is set, so that during the test of the ultrasonic array every ultrasonic sensor returns information about the same obstacle;
  • the collection sub-module 42 is used to collect the coordinates of the obstacle as measured by the ultrasonic sensor array and the coordinates of the same obstacle as measured by the lidar, and to unify them into the reference coordinate system;
  • each ultrasonic sensor in the ultrasonic sensor array is evenly distributed on the front bumper of the vehicle;
  • the lidar is a single-line lidar installed at the center of the front bumper of the vehicle; it collects precise distance data of the obstacle, which can be used as the ground-truth value for marking the obstacle coordinates collected by the ultrasonic sensors.
  • Since the ultrasonic sensors are evenly distributed on the front bumper of the vehicle and the lidar is installed at its center, the coordinates in each ultrasonic sensor coordinate system and in the lidar coordinate system must be converted into the reference coordinate system. Preferably, both are uniformly converted into the vehicle coordinate system.
  • The initial spatial configuration of the ultrasonic sensors and the lidar is known in advance and can be obtained from measurements of their mounting positions on the body of the unmanned vehicle. Based on it, the coordinates of the obstacle in each ultrasonic sensor's coordinate system and in the lidar's coordinate system are converted into the vehicle coordinate system.
  • The generation sub-module 43 is used to mark the obstacle coordinates collected by each ultrasonic sensor according to the obstacle coordinates collected by the unified lidar and to generate training samples;
  • the label marking each collected obstacle coordinate is either true or false.
  • Taking ultrasonic sensor 1 in the array as an example: if the error between the distance data returned by the lidar and the distance data returned by ultrasonic sensor 1 is within a preset threshold range, the reading is marked as true; if it exceeds the threshold range, it is marked as false, and the ultrasonic sensor is considered to have produced a false detection or a missed detection.
  • Preferably, the preset threshold is 20 cm, which covers about 99% of the output.
  • N is a positive integer smaller than the total number of ultrasonic sensors in the array. For example, the ultrasonic sensor array includes 10 ultrasonic sensors evenly distributed on the front bumper of the vehicle; the feature values of the N ultrasonic sensors adjacent to ultrasonic sensor 1, together with the label of ultrasonic sensor 1, are used to generate training samples.
  • A sample whose label is true is a positive sample; a sample whose label is false is a negative sample.
  • Preferably, the feature values of the N ultrasonic sensors adjacent to ultrasonic sensor 1 and the label of ultrasonic sensor 1 at consecutive times are used to generate training samples.
  • In this way, training samples for each ultrasonic sensor can be obtained automatically, without manual marking with laser rangefinders or other equipment, which greatly improves the speed and accuracy of sample acquisition.
  • the training submodule 44 is used to train the neural network model according to the training samples.
  • the neural network model is a CNN convolutional neural network model.
  • a SGD (Stochastic Gradient Descent) training algorithm is used to train multiple sample pairs to finally obtain the CNN model parameters.
  • the feature values of the adjacent N ultrasonic sensors and their labels are used to generate training samples to train the CNN convolutional neural network model.
  • the obstacle detection results to be corrected for each ultrasonic sensor are labeled with that sensor's trained CNN convolutional neural network model, and the resulting label is the label of the obstacle detection result: if it is true, obstacle avoidance is carried out according to the collected obstacle coordinates; if the labeling result output by the pre-trained neural network is false, the collected result is discarded and obstacle avoidance is based on the new collection result.
  • the neural network model is an LSTM long short-term memory neural network model.
  • the LSTM long short-term memory neural network model is trained using multiple labeled training samples obtained at consecutive moments.
  • the training samples are used to train the LSTM long short-term memory neural network model, and the trained values of the parameters of the LSTM network are determined from the initial values of the parameters by optimizing (i.e., maximizing or minimizing) the objective function.
  • the result returned by the lidar is used as the ground truth of the ultrasonic array detection to label the ultrasonic array obstacle detection results, and the labeled obstacle detection results are used as training samples to train the neural network model. The ultrasonic obstacle detection results are corrected according to the trained neural network model, which improves the accuracy of ultrasonic array obstacle detection, avoids false detections and missed detections, and improves driving safety.
  • the disclosed method and device can be implemented in other ways.
  • the device embodiments described above are merely illustrative.
  • the division of the units is only a logical function division, and there may be other divisions in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, devices or units, and may be in electrical, mechanical, or other forms.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
  • the integrated unit may be implemented in the form of hardware, or may be implemented in the form of hardware plus software functional units.
  • FIG. 5 shows a block diagram of an exemplary computer system/server 012 suitable for implementing embodiments of the present invention.
  • the computer system/server 012 shown in FIG. 5 is only an example, and should not bring any limitation to the function and application scope of the embodiment of the present invention.
  • the computer system/server 012 is represented in the form of a general-purpose computing device.
  • the components of the computer system/server 012 may include, but are not limited to: one or more processors or processing units 016, a system memory 028, and a bus 018 connecting different system components (including the system memory 028 and the processing unit 016).
  • the bus 018 represents one or more of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, a processor, or a local bus using any of a variety of bus architectures.
  • these architectures include, but are not limited to, the Industry Standard Architecture (ISA) bus, the Micro Channel Architecture (MCA) bus, the Enhanced ISA bus, the Video Electronics Standards Association (VESA) local bus, and the Peripheral Component Interconnect (PCI) bus.
  • the computer system/server 012 typically includes a variety of computer system readable media. These media can be any available media that can be accessed by the computer system/server 012, including volatile and nonvolatile media, removable and non-removable media.
  • the system memory 028 may include computer system readable media in the form of volatile memory, such as random access memory (RAM) 030 and/or cache memory 032.
  • the computer system/server 012 may further include other removable/non-removable, volatile/nonvolatile computer system storage media.
  • the storage system 034 can be used to read and write non-removable, non-volatile magnetic media (not shown in FIG. 5, usually referred to as a "hard drive").
  • a disk drive for reading and writing a removable non-volatile magnetic disk (such as a "floppy disk"), and an optical disc drive for reading and writing a removable non-volatile optical disc (such as a CD-ROM, DVD-ROM, or other optical media), may also be provided.
  • in these cases, each drive can be connected to the bus 018 through one or more data media interfaces.
  • the memory 028 may include at least one program product having a set (for example, at least one) of program modules configured to perform the functions of the embodiments of the present invention.
  • a program/utility 040 having a set (at least one) of program modules 042 may be stored, for example, in the memory 028.
  • such program modules 042 include, but are not limited to, an operating system, one or more application programs, other program modules, and program data; each of these examples, or some combination thereof, may include an implementation of a network environment.
  • the program module 042 generally executes the functions and/or methods in the described embodiments of the present invention.
  • the computer system/server 012 can also communicate with one or more external devices 014 (such as a keyboard, pointing device, display 024, etc.).
  • the computer system/server 012 communicates with an external radar device, and can also communicate with one or more devices that enable a user to interact with the computer system/server 012, and/or with any device (such as a network card or modem) that enables the computer system/server 012 to communicate with one or more other computing devices. This communication can be performed through an input/output (I/O) interface 022.
  • the computer system/server 012 can also communicate with one or more networks (such as a local area network (LAN), a wide area network (WAN), and/or a public network such as the Internet) through the network adapter 020.
  • the network adapter 020 communicates with other modules of the computer system/server 012 through the bus 018.
  • other hardware and/or software modules can be used in conjunction with the computer system/server 012, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems.
  • the processing unit 016 executes the functions and/or methods in the described embodiments of the present invention by running a program stored in the system memory 028.
  • the above-mentioned computer program may be provided in a computer storage medium, that is, the computer storage medium is encoded with a computer program.
  • when the program is executed by one or more computers, the one or more computers execute the method flows and/or apparatus operations shown in the above embodiments of the present invention.
  • the computer-readable medium may be a computer-readable signal medium or a computer-readable storage medium.
  • the computer-readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, device, or device, or any combination of the above.
  • computer-readable storage media include: an electrical connection with one or more wires, a portable computer disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
  • the computer-readable storage medium may be any tangible medium containing or storing a program, which may be used by or in combination with an instruction execution system, apparatus, or device.
  • the computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, carrying computer-readable program code. Such a propagated data signal can take many forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the above.
  • the computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium; this medium may send, propagate, or transmit a program for use by or in conjunction with an instruction execution system, apparatus, or device.
  • the program code contained on the computer-readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wire, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
  • the computer program code for performing the operations of the present invention can be written in one or more programming languages or a combination thereof.
  • the programming languages include object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages.
  • the program code may execute entirely on the user's computer, partly on the user's computer, as an independent software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server.
  • the remote computer can be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or it can be connected to an external computer (for example, through the Internet using an Internet service provider).

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Theoretical Computer Science (AREA)
  • Acoustics & Sound (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • Biomedical Technology (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Electromagnetism (AREA)
  • Traffic Control Systems (AREA)
  • Measurement Of Velocity Or Position Using Acoustic Or Ultrasonic Waves (AREA)

Abstract

Provided are a method and system for processing obstacle detection results of an ultrasonic array. The method includes: acquiring obstacle coordinates collected by each ultrasonic sensor in an ultrasonic sensor array (S11); inputting the obstacle coordinates collected by each ultrasonic sensor in the ultrasonic sensor array into a pre-trained neural network model to obtain true/false labels, output by the pre-trained neural network model, for the obstacle coordinates collected by each ultrasonic sensor (S12); and processing the obstacle coordinates collected by each ultrasonic sensor in the ultrasonic sensor array according to the true/false labels (S13). The method improves the accuracy of ultrasonic-array obstacle detection, avoids false detections and missed detections, and improves driving safety.

Description

Method and system for processing obstacle detection results of an ultrasonic array
This application claims priority to Chinese patent application No. 201910034451.X, filed on January 15, 2019, entitled "Method and system for processing obstacle detection results of an ultrasonic array."
Technical field
This application relates to the field of automatic control, and in particular to a method and system for processing obstacle detection results of an ultrasonic array.
Background
A driverless vehicle is an intelligent vehicle, which can also be called a wheeled mobile robot, that realizes unmanned driving mainly by means of an on-board intelligent driving system centered on a computer system. Driverless vehicles integrate many technologies such as automatic control, architecture, artificial intelligence, and visual computing. They are a product of the advanced development of computer science, pattern recognition, and intelligent control technology, an important indicator of a country's research strength and industrial level, and have broad application prospects in national defense and the national economy.
Ultrasonic radar is often mounted on driverless vehicles to implement obstacle avoidance. However, owing to its working principle, ultrasonic radar has limited detection accuracy and is prone to false detections and missed detections. Existing methods that use rule-based voting to detect and correct false and missed detections in the results of an ultrasonic array cannot easily reach high accuracy.
Summary
Aspects of this application provide a method and system for processing obstacle detection results of an ultrasonic array, so as to improve the accuracy of ultrasonic-array obstacle detection, avoid false detections and missed detections, and improve driving safety.
In one aspect, this application provides a method for processing obstacle detection results of an ultrasonic array, including:
acquiring obstacle coordinates collected by each ultrasonic sensor in an ultrasonic sensor array;
inputting the obstacle coordinates collected by each ultrasonic sensor in the ultrasonic sensor array into a pre-trained neural network model to obtain true/false labels, output by the pre-trained neural network model, for the obstacle coordinates collected by each ultrasonic sensor;
processing the obstacle coordinates collected by each ultrasonic sensor in the ultrasonic sensor array according to the true/false labels.
As to the above aspect and any possible implementation, an implementation is further provided in which inputting the obstacle coordinates collected by each ultrasonic sensor in the ultrasonic sensor array into a pre-trained neural network model includes:
selecting feature values of the obstacle coordinates collected by the ultrasonic sensor to be processed and its N adjacent ultrasonic sensors to generate a data sequence, where N is a positive integer smaller than the total number of sensors in the ultrasonic sensor array;
inputting the data sequence into the pre-trained neural network model.
As to the above aspect and any possible implementation, an implementation is further provided in which the neural network model is trained by the following method:
constructing an obstacle test scenario;
acquiring the coordinates of an obstacle collected by each ultrasonic sensor in the ultrasonic sensor array and the coordinates collected by a lidar for the same obstacle, and unifying them into a reference coordinate system;
labeling the obstacle coordinates collected by each ultrasonic sensor as true or false according to the unified obstacle coordinates collected by the lidar, to generate training samples;
training the neural network model according to the training samples.
As to the above aspect and any possible implementation, an implementation is further provided in which labeling the obstacle coordinates collected by each ultrasonic sensor as true or false according to the unified obstacle coordinates collected by the lidar includes:
if the error between the obstacle coordinates collected by the lidar and the obstacle coordinates collected by an ultrasonic sensor is within a preset threshold range, labeling them as true; if it is greater than the preset threshold range, labeling them as false.
As to the above aspect and any possible implementation, an implementation is further provided in which generating training samples includes:
generating training samples from the feature values of the obstacle coordinates collected by the ultrasonic sensor to be processed and its N adjacent ultrasonic sensors, together with the true/false label of the obstacle coordinates collected by the ultrasonic sensor to be processed.
As to the above aspect and any possible implementation, an implementation is further provided in which the neural network model is a convolutional neural network model.
As to the above aspect and any possible implementation, an implementation is further provided in which the neural network model is a long short-term memory neural network model.
As to the above aspect and any possible implementation, an implementation is further provided in which training the neural network model according to the training samples includes:
training the long short-term memory neural network model with multiple training samples obtained at consecutive moments.
In another aspect, this application provides a system for processing obstacle detection results of an ultrasonic array, including:
an acquisition module, configured to acquire obstacle coordinates collected by each ultrasonic sensor in an ultrasonic sensor array;
a neural network labeling module, configured to input the obstacle coordinates collected by each ultrasonic sensor in the ultrasonic sensor array into a pre-trained neural network model to obtain true/false labels, output by the pre-trained neural network model, for the obstacle coordinates collected by each ultrasonic sensor;
a processing module, configured to process the obstacle coordinates collected by each ultrasonic sensor in the ultrasonic sensor array according to the true/false labels.
As to the above aspect and any possible implementation, an implementation is further provided in which the neural network labeling module is specifically configured to:
select feature values of the obstacle coordinates collected by the ultrasonic sensor to be processed and its N adjacent ultrasonic sensors to generate a data sequence, where N is a positive integer smaller than the total number of sensors in the ultrasonic sensor array;
input the data sequence into the pre-trained neural network model.
As to the above aspect and any possible implementation, an implementation is further provided in which the system further includes a neural network training module, configured to:
construct an obstacle test scenario;
acquire the coordinates of an obstacle collected by each ultrasonic sensor in the ultrasonic sensor array and the coordinates collected by a lidar for the same obstacle, and unify them into a reference coordinate system;
label the obstacle coordinates collected by each ultrasonic sensor as true or false according to the unified obstacle coordinates collected by the lidar, to generate training samples;
train the neural network model according to the training samples.
As to the above aspect and any possible implementation, an implementation is further provided in which labeling the obstacle coordinates collected by each ultrasonic sensor as true or false according to the unified obstacle coordinates collected by the lidar includes:
if the error between the obstacle coordinates collected by the lidar and the obstacle coordinates collected by an ultrasonic sensor is within a preset threshold range, labeling them as true; if it is greater than the preset threshold range, labeling them as false.
As to the above aspect and any possible implementation, an implementation is further provided in which generating training samples includes:
generating training samples from the feature values of the obstacle coordinates collected by the ultrasonic sensor to be processed and its N adjacent ultrasonic sensors, together with the true/false label of the obstacle coordinates collected by the ultrasonic sensor to be processed.
As to the above aspect and any possible implementation, an implementation is further provided in which the neural network model is a convolutional neural network model.
As to the above aspect and any possible implementation, an implementation is further provided in which the neural network model is a long short-term memory neural network model.
As to the above aspect and any possible implementation, an implementation is further provided in which training the neural network model according to the training samples includes:
training the long short-term memory neural network model with multiple training samples obtained at consecutive moments.
In another aspect, the present invention provides a computer device, including a memory, a processor, and a computer program stored on the memory and executable on the processor, where the processor implements the method described above when executing the program.
In another aspect, the present invention provides a computer-readable storage medium on which a computer program is stored, where the program implements the method described above when executed by a processor.
It can be seen from the technical solutions that the embodiments of this application can improve the accuracy of ultrasonic-array obstacle detection, avoid false detections and missed detections, and improve driving safety.
Brief description of the drawings
In order to explain the technical solutions in the embodiments of this application more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show some embodiments of this application; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
FIG. 1 is a schematic flowchart of a method for processing obstacle detection results of an ultrasonic array provided by an embodiment of this application;
FIG. 2 is a schematic flowchart of the neural network model training method in the method for processing obstacle detection results of an ultrasonic array provided by an embodiment of this application;
FIG. 3 is a schematic structural diagram of a system for correcting obstacle detection results of an ultrasonic array provided by an embodiment of this application;
FIG. 4 is a schematic structural diagram of the training module of the system for correcting obstacle detection results of an ultrasonic array provided by an embodiment of this application;
FIG. 5 shows a block diagram of an exemplary computer system/server 012 suitable for implementing embodiments of the present invention.
Detailed description
To make the objectives, technical solutions, and advantages of the embodiments of this application clearer, the technical solutions in the embodiments of this application are described clearly and completely below with reference to the drawings in the embodiments of this application. Obviously, the described embodiments are some, not all, of the embodiments of this application. All other embodiments obtained by those of ordinary skill in the art based on the embodiments in this application without creative effort fall within the protection scope of this application.
FIG. 1 is a schematic flowchart of a method for processing obstacle detection results of an ultrasonic array provided by an embodiment of this application. As shown in FIG. 1, the method includes:
Step S11: acquiring obstacle coordinates collected by each ultrasonic sensor in an ultrasonic sensor array;
Step S12: inputting the obstacle coordinates collected by each ultrasonic sensor in the ultrasonic sensor array into a pre-trained neural network model to obtain true/false labels, output by the pre-trained neural network model, for the obstacle coordinates collected by each ultrasonic sensor;
Step S13: processing the obstacle coordinates collected by each ultrasonic sensor in the ultrasonic sensor array according to the true/false labels.
Preferably, in a preferred implementation of step S11,
the obstacle coordinates collected by each ultrasonic sensor in the ultrasonic sensor array are acquired.
Preferably, in this embodiment, only the case where a single obstacle is present in the field of view of the ultrasonic sensor array is considered; the scene is relatively clean, so that what each ultrasonic sensor returns are obstacle coordinates collected for the same obstacle.
Preferably, since the ultrasonic sensors in the array are evenly distributed on the front bumper of the vehicle, the coordinate system of each ultrasonic sensor is different, and the obstacle coordinates collected by each sensor need to be unified into a reference coordinate system. In this embodiment, the coordinates of each ultrasonic sensor can be converted into the vehicle coordinate system.
In a preferred implementation of step S12,
the obstacle coordinates collected by each ultrasonic sensor in the ultrasonic sensor array are input into a pre-trained neural network model, and the true/false labels output by the pre-trained neural network model for the obstacle coordinates collected by each ultrasonic sensor are obtained.
Preferably, the ultrasonic sensor array includes 10 ultrasonic sensors evenly distributed on the front bumper of the vehicle; in practice, at most 4-6 adjacent sensors can usually collect the coordinates of the same obstacle. For example, when an obstacle appears at the front left of the vehicle, perhaps only the 4 sensors distributed on the left side of the front bumper collect its coordinates and return obstacle information, while the 6 sensors distributed at the center and on the right side of the front bumper collect no obstacle information.
In a preferred embodiment of this embodiment,
one ultrasonic sensor and the obstacle coordinates collected by its N adjacent ultrasonic sensors are selected from the obstacle coordinates collected by the sensors to generate a data sequence, where N is a positive integer smaller than the total number of sensors in the ultrasonic sensor array; the data sequence is input into a pre-trained neural network; the labeling result output by the pre-trained neural network is obtained, and whether the obstacle coordinates collected by the selected ultrasonic sensor are true or false is determined. The pre-trained neural network is a CNN convolutional neural network.
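The construction of the data sequence (one sensor to be processed plus its N adjacent sensors) can be sketched as follows. The array layout (sensors indexed left to right) and the neighbour-selection rule are illustrative assumptions, not details fixed by the disclosure:

```python
# Sketch: build the model input for one sensor from itself and its N
# adjacent sensors in a linear array (hypothetical indexing scheme).
def build_sequence(coords, idx, n=4):
    """coords: list of per-sensor obstacle coordinates (index = position
    on the bumper); idx: the sensor to be processed; n: adjacent sensors.
    Returns the coordinates of sensor idx and its n nearest neighbours,
    in array order."""
    # rank all sensors by distance in the array from idx, ties broken left-first
    order = sorted(range(len(coords)), key=lambda j: (abs(j - idx), j))
    picked = order[:n + 1]  # the sensor itself plus n adjacent sensors
    return [coords[j] for j in sorted(picked)]
```

For example, in a 10-sensor array with N = 4, the sequence for sensor 5 would draw on sensors 3 through 7.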
In another preferred embodiment of this embodiment,
one ultrasonic sensor and the obstacle coordinates collected by its N adjacent ultrasonic sensors are selected from the obstacle coordinates collected by the sensors to generate a data sequence, where N is a positive integer smaller than the total number of sensors in the ultrasonic sensor array; the data sequence at moment t is input into a pre-trained neural network; the labeling result at moment t output by the pre-trained neural network is obtained, and whether the obstacle coordinates collected by the selected ultrasonic sensor are true or false is determined. The pre-trained neural network is an LSTM long short-term memory neural network. Preferably, the data sequence at moment t is fed into the LSTM long short-term memory neural network to compute the LSTM result at moment t and determine whether the obstacle coordinates collected by the selected sensor are true or false. Preferably, when the LSTM long short-term memory neural network receives the data sequence at moment t, the LSTM hidden-layer states from moment t-1 to moment t-n are already stored, and hidden-layer states that satisfy a preset rule can be selected from the states from moment t-1 to moment t-n as the selection result, to be used in computing the LSTM result at moment t.
In an embodiment of the present invention, the preset rule may include, but is not limited to: selecting at least one maximally different LSTM hidden-layer state from the states from moment t-1 to moment t-n as the selection result; and/or selecting at least one LSTM hidden-layer state from the states from moment t-1 to moment t-n using the sparsity of the L0-norm; and/or selecting at least one LSTM hidden-layer state from the states from moment t-1 to moment t-n according to human experience. It can be understood that corresponding selection rules can also be designed for the preset rule according to actual needs (such as new tasks).
In a preferred implementation of step S13,
the obstacle coordinates collected by each ultrasonic sensor in the ultrasonic sensor array are processed according to the true/false labels.
Preferably, if the labeling result output by the pre-trained neural network is true, the obstacle coordinates collected by the ultrasonic sensor to be processed are retained; if the labeling result output by the pre-trained neural network is false, the obstacle coordinates collected by the ultrasonic sensor to be processed are discarded.
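A minimal sketch of this keep-or-discard rule, assuming (purely for illustration) that the detection results and the network's true/false outputs are keyed by sensor id:

```python
# Sketch of the post-processing step: keep a sensor's obstacle coordinates
# when the network labels them true, discard them when labeled false.
def process(readings, flags):
    """readings: {sensor_id: obstacle coordinates};
    flags: {sensor_id: bool} from the pre-trained network."""
    return {sid: c for sid, c in readings.items() if flags.get(sid, False)}
```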
Preferably, obstacle avoidance is performed according to the processed obstacle coordinates.
Preferably, a CNN convolutional neural network and an LSTM long short-term memory neural network can be used simultaneously to output labeling results separately, and a weighted sum is computed to determine the final labeling result.
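The optional fusion of the two models' outputs by weighted sum might look like the following; the equal weights and the 0.5 decision threshold are illustrative assumptions, since the disclosure does not fix them:

```python
# Sketch: weighted-sum fusion of the CNN and LSTM labeling scores
# (scores assumed to be in [0, 1]; weights and threshold are hypothetical).
def fuse(p_cnn, p_lstm, w_cnn=0.5, w_lstm=0.5):
    return (w_cnn * p_cnn + w_lstm * p_lstm) >= 0.5
```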
As shown in FIG. 2, the neural network model is trained through the following steps:
Step S21: constructing an obstacle test scenario;
Preferably, the obstacle test scenario is a simple, relatively clean scene in which only a single obstacle is placed in the field of view of the ultrasonic array, so that during testing of the ultrasonic array each ultrasonic sensor returns information about the same obstacle;
Step S22: collecting the coordinates of the obstacle collected by the ultrasonic sensor array and the coordinates collected by a lidar for the same obstacle, and unifying them into a reference coordinate system;
Preferably, the ultrasonic sensors in the array are evenly distributed on the front bumper of the vehicle; the lidar is a single-line lidar mounted at the center of the front bumper, and is used to collect accurate distance data of the obstacle, which serves as the ground truth for labeling the obstacle coordinates collected by the ultrasonic sensors.
Preferably, since the ultrasonic sensors in the array are evenly distributed on the front bumper and the lidar is mounted at the center of the front bumper, the coordinates in each ultrasonic sensor's coordinate system and the coordinates in the lidar's coordinate system need to be converted into the reference coordinate system. In this embodiment, the coordinates in the lidar coordinate system and in the ultrasonic radar coordinate systems can all be converted into the vehicle coordinate system.
The initial spatial configuration of each ultrasonic sensor and the lidar is known in advance and can be obtained from measurements of the sensors and the lidar on the body of the driverless vehicle. The coordinates of the obstacle in each ultrasonic sensor's coordinate system and in the lidar's coordinate system are converted into the vehicle coordinate system.
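Assuming each sensor's mounting pose on the bumper is known as a 2D position and yaw measured on the vehicle body (the disclosure does not give the formula; this is a standard planar rigid transform), the conversion into the vehicle frame can be sketched as:

```python
import math

# Sketch: transform an obstacle point from a sensor's local frame into the
# vehicle frame, given the sensor's mounting pose (x, y, yaw) on the body.
def sensor_to_vehicle(pt, mount_x, mount_y, mount_yaw):
    px, py = pt
    c, s = math.cos(mount_yaw), math.sin(mount_yaw)
    # rotate by the mounting yaw, then translate by the mounting position
    return (mount_x + c * px - s * py,
            mount_y + s * px + c * py)
```

The same transform, with the lidar's mounting pose, unifies the lidar measurements into the vehicle frame.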
Step S23: labeling the obstacle coordinates collected by each ultrasonic sensor according to the unified obstacle coordinates collected by the lidar, to generate training samples;
Preferably, the obstacle coordinates collected by each ultrasonic sensor are labeled according to the unified obstacle coordinates collected by the lidar, the label of the collected obstacle coordinates being true or false. For example,
for ultrasonic sensor 1 in the ultrasonic array, if the error between the distance data returned by the lidar and the distance data returned by sensor 1 is within a preset threshold range, it is labeled as true; if it is greater than the preset threshold range, it is labeled as false, and sensor 1 is considered to have produced a false detection or a missed detection. Preferably, the preset threshold range is 20 cm, which covers about 99% of the outputs.
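This labeling rule can be written down directly; the function name and the use of metres are assumptions for illustration, with the 20 cm threshold taken from the embodiment:

```python
# Sketch of the true/false labeling rule: a sensor reading is labeled True
# when its distance differs from the lidar ground truth by no more than a
# preset threshold (20 cm in the embodiment, covering ~99% of outputs).
THRESHOLD_M = 0.20

def label_reading(lidar_dist_m, sensor_dist_m, threshold_m=THRESHOLD_M):
    """True = correct detection; False = false or missed detection."""
    return abs(lidar_dist_m - sensor_dist_m) <= threshold_m
```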
The feature values of the obstacle coordinates collected by the N ultrasonic sensors adjacent to sensor 1 are determined. Preferably, N is a positive integer smaller than the total number of ultrasonic sensors in the ultrasonic array. Preferably, the ultrasonic sensor array includes 10 ultrasonic sensors evenly distributed on the front bumper of the vehicle; in practice, at most 4-6 adjacent sensors can usually collect the coordinates of the same obstacle. For example, when an obstacle appears at the front left of the vehicle, perhaps only the 4 sensors distributed on the left side of the front bumper collect its coordinates and return obstacle information, while the 6 sensors distributed at the center and on the right side collect no obstacle information. N can therefore be chosen as 4; although N could take a larger value, the coordinates collected by more distant sensors correlate only weakly with those collected by sensor 1 and would merely increase the amount of computation, so choosing 4 is generally sufficient.
The obstacle coordinates collected by the N sensors adjacent to sensor 1 are selected, and their feature values are obtained by applying feature transformations to the coordinates, such as squaring or taking the square root.
Preferably, the feature values of the N sensors adjacent to sensor 1 and the label of sensor 1 are used to generate a training sample.
A sample whose label is true is a positive sample; a sample whose label is false is a negative sample.
Preferably, training samples are generated from the feature values of the N sensors adjacent to sensor 1 and the label of sensor 1 at consecutive moments.
By testing in the obstacle test scenario, a large number of samples for each ultrasonic sensor can be obtained automatically, without manual labeling with laser rangefinders or other equipment, which greatly improves the speed and accuracy of sample acquisition.
Step S24: training the neural network model according to the training samples.
In a preferred implementation of this embodiment, the neural network model is a CNN convolutional neural network model, trained with the labeled training samples, including positive and negative samples.
Preferably, the SGD (Stochastic Gradient Descent) training algorithm is used to train on the multiple sample pairs, finally obtaining the CNN model parameters.
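As an illustration of SGD training on labeled sample pairs, here is a minimal pure-Python sketch, with logistic regression standing in for the CNN (the actual model architecture and hyperparameters are not specified by the disclosure):

```python
import math
import random

# Sketch: stochastic gradient descent on the log loss. Each sample pairs a
# feature vector (from the adjacent sensors) with its 0/1 true/false label.
def train_sgd(samples, lr=0.5, epochs=200, seed=0):
    rng = random.Random(seed)
    dim = len(samples[0][0])
    w, b = [0.0] * dim, 0.0
    for _ in range(epochs):
        rng.shuffle(samples)           # stochastic: visit samples in random order
        for x, y in samples:
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))
            g = p - y                  # gradient of the log loss w.r.t. z
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

def predict(w, b, x):
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z)) >= 0.5
```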
Preferably, for each of the different ultrasonic sensors in the array, training samples are generated from the feature values of its N adjacent sensors and its label, and a CNN convolutional neural network model is trained accordingly. In the method for processing obstacle detection results of an ultrasonic array, the obstacle detection results to be corrected for each sensor are labeled with that sensor's corresponding trained CNN convolutional neural network model; the corresponding label is the label of its obstacle detection result: if it is true, obstacle avoidance is performed according to the collected obstacle coordinates; if the labeling result output by the pre-trained neural network is false, the collected result is discarded and obstacle avoidance is based on a new collection result.
In another preferred implementation of this embodiment, the neural network model is an LSTM long short-term memory neural network model, trained with multiple labeled training samples obtained at consecutive moments.
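Assembling samples from consecutive moments amounts to a sliding window over per-moment feature vectors; a sketch, with the window length as an assumed parameter (the disclosure does not fix it):

```python
# Sketch: build consecutive-moment sequences for the LSTM from a stream of
# per-moment feature vectors (oldest first). Window length t is hypothetical.
def windows(frames, t=3):
    return [frames[i:i + t] for i in range(len(frames) - t + 1)]
```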
The LSTM long short-term memory neural network model is trained with the training samples; the trained values of the parameters of the LSTM network are determined from the initial values of the parameters by optimizing (i.e., maximizing or minimizing) an objective function.
With the technical solutions provided by the above embodiments, the results returned by the lidar are used as the ground truth of the ultrasonic array detection to label the obstacle detection results of the ultrasonic array; the labeled obstacle detection results are used as training samples to train the neural network model; and the ultrasonic obstacle detection results are corrected according to the trained neural network model. This improves the accuracy of ultrasonic-array obstacle detection, avoids false detections and missed detections, and improves driving safety.
It should be noted that, for simplicity of description, the foregoing method embodiments are described as a series of action combinations; however, those skilled in the art should know that this application is not limited by the described order of actions, because according to this application some steps may be performed in other orders or simultaneously. Furthermore, those skilled in the art should also know that the embodiments described in the specification are all preferred embodiments, and the actions and modules involved are not necessarily required by this application.
The above is an introduction to the method embodiments; the solution of the present invention is further explained below through apparatus embodiments.
FIG. 3 is a schematic structural diagram of a system for correcting obstacle detection results of an ultrasonic array provided by an embodiment of this application. As shown in FIG. 3, the system includes:
an acquisition module 31, configured to acquire obstacle coordinates collected by each ultrasonic sensor in an ultrasonic sensor array;
a neural network labeling module 32, configured to input the obstacle coordinates collected by each ultrasonic sensor in the ultrasonic sensor array into a pre-trained neural network model to obtain true/false labels, output by the pre-trained neural network model, for the obstacle coordinates collected by each ultrasonic sensor;
a processing module 33, configured to process the obstacle coordinates collected by each ultrasonic sensor in the ultrasonic sensor array according to the true/false labels.
Preferably, in a preferred implementation of the acquisition module 31,
the acquisition module 31 acquires the obstacle coordinates collected by each ultrasonic sensor in the ultrasonic sensor array.
Preferably, in this embodiment, only the case where a single obstacle is present in the field of view of the ultrasonic sensor array is considered; the scene is relatively clean, so that what each ultrasonic sensor returns are obstacle coordinates collected for the same obstacle.
Preferably, since the ultrasonic sensors in the array are evenly distributed on the front bumper of the vehicle, the coordinate system of each ultrasonic sensor is different, and the obstacle coordinates collected by each sensor need to be unified into a reference coordinate system. In this embodiment, the coordinates of each ultrasonic sensor can be converted into the vehicle coordinate system.
In a preferred implementation of the neural network labeling module 32,
the neural network labeling module 32 inputs the obstacle coordinates collected by each ultrasonic sensor in the ultrasonic sensor array into a pre-trained neural network model, and obtains the true/false labels output by the pre-trained neural network model for the obstacle coordinates collected by each ultrasonic sensor.
Preferably, the ultrasonic sensor array includes 10 ultrasonic sensors evenly distributed on the front bumper of the vehicle; in practice, at most 4-6 adjacent sensors can usually collect the coordinates of the same obstacle. For example, when an obstacle appears at the front left of the vehicle, perhaps only the 4 sensors distributed on the left side of the front bumper collect its coordinates and return obstacle information, while the 6 sensors distributed at the center and on the right side collect no obstacle information.
In a preferred embodiment of this embodiment,
one ultrasonic sensor and the obstacle coordinates collected by its N adjacent ultrasonic sensors are selected from the obstacle coordinates collected by the sensors to generate a data sequence, where N is a positive integer smaller than the total number of sensors in the ultrasonic sensor array; the data sequence is input into a pre-trained neural network; the labeling result output by the pre-trained neural network is obtained, and whether the obstacle coordinates collected by the selected ultrasonic sensor are true or false is determined. The pre-trained neural network is a CNN convolutional neural network.
In another preferred embodiment of this embodiment,
one ultrasonic sensor and the obstacle coordinates collected by its N adjacent ultrasonic sensors are selected from the obstacle coordinates collected by the sensors to generate a data sequence, where N is a positive integer smaller than the total number of sensors in the ultrasonic sensor array; the data sequence at moment t is input into a pre-trained neural network; the labeling result at moment t output by the pre-trained neural network is obtained, and whether the obstacle coordinates collected by the selected ultrasonic sensor are true or false is determined. The pre-trained neural network is an LSTM long short-term memory neural network. Preferably, the data sequence at moment t is fed into the LSTM long short-term memory neural network to compute the LSTM result at moment t and determine whether the obstacle coordinates collected by the selected sensor are true or false. Preferably, when the LSTM long short-term memory neural network receives the data sequence at moment t, the LSTM hidden-layer states from moment t-1 to moment t-n are already stored, and hidden-layer states that satisfy a preset rule can be selected from the states from moment t-1 to moment t-n as the selection result, to be used in computing the LSTM result at moment t.
In an embodiment of the present invention, the preset rule may include, but is not limited to: selecting at least one maximally different LSTM hidden-layer state from the states from moment t-1 to moment t-n as the selection result; and/or selecting at least one LSTM hidden-layer state from the states from moment t-1 to moment t-n using the sparsity of the L0-norm; and/or selecting at least one LSTM hidden-layer state from the states from moment t-1 to moment t-n according to human experience. It can be understood that corresponding selection rules can also be designed for the preset rule according to actual needs (such as new tasks).
In a preferred implementation of the processing module 33,
the processing module 33 is configured to process the obstacle coordinates collected by each ultrasonic sensor in the ultrasonic sensor array according to the true/false labels.
Preferably, if the labeling result output by the pre-trained neural network is true, the obstacle coordinates collected by the ultrasonic sensor to be processed are retained; if the labeling result output by the pre-trained neural network is false, the obstacle coordinates collected by the ultrasonic sensor to be processed are discarded.
Preferably, obstacle avoidance is performed according to the processed obstacle coordinates.
Preferably, a CNN convolutional neural network and an LSTM long short-term memory neural network can be used simultaneously to output labeling results separately, and a weighted sum is computed to determine the final labeling result.
The system further includes a training module (not shown in the figure) for training the neural network model. The training module includes the following submodules:
a construction submodule 41, configured to construct an obstacle test scenario;
Preferably, the obstacle test scenario is a simple, relatively clean scene in which only a single obstacle is placed in the field of view of the ultrasonic array, so that during testing of the ultrasonic array each ultrasonic sensor returns information about the same obstacle;
a collection submodule 42, configured to collect the coordinates of the obstacle collected by the ultrasonic sensor array and the coordinates collected by the lidar for the same obstacle, and to unify them into a reference coordinate system;
Preferably, the ultrasonic sensors in the array are evenly distributed on the front bumper of the vehicle; the lidar is a single-line lidar mounted at the center of the front bumper, and is used to collect accurate distance data of the obstacle, which serves as the ground truth for labeling the obstacle coordinates collected by the ultrasonic sensors.
Preferably, since the ultrasonic sensors in the array are evenly distributed on the front bumper and the lidar is mounted at the center of the front bumper, the coordinates in each ultrasonic sensor's coordinate system and the coordinates in the lidar's coordinate system need to be converted into the reference coordinate system. In this embodiment, the coordinates in the lidar coordinate system and in the ultrasonic radar coordinate systems can all be converted into the vehicle coordinate system.
The initial spatial configuration of each ultrasonic sensor and the lidar is known in advance and can be obtained from measurements of the sensors and the lidar on the body of the driverless vehicle. The coordinates of the obstacle in each ultrasonic sensor's coordinate system and in the lidar's coordinate system are converted into the vehicle coordinate system.
a generation submodule 43, configured to label the obstacle coordinates collected by each ultrasonic sensor according to the unified obstacle coordinates collected by the lidar, to generate training samples;
Preferably, the obstacle coordinates collected by each ultrasonic sensor are labeled according to the unified obstacle coordinates collected by the lidar, the label of the collected obstacle coordinates being true or false. For example,
for ultrasonic sensor 1 in the ultrasonic array, if the error between the distance data returned by the lidar and the distance data returned by sensor 1 is within a preset threshold range, it is labeled as true; if it is greater than the preset threshold range, it is labeled as false, and sensor 1 is considered to have produced a false detection or a missed detection. Preferably, the preset threshold range is 20 cm, which covers about 99% of the outputs.
The feature values of the obstacle coordinates collected by the N ultrasonic sensors adjacent to sensor 1 are determined. Preferably, N is a positive integer smaller than the total number of ultrasonic sensors in the ultrasonic array. Preferably, the ultrasonic sensor array includes 10 ultrasonic sensors evenly distributed on the front bumper of the vehicle; in practice, at most 4-6 adjacent sensors can usually collect the coordinates of the same obstacle. For example, when an obstacle appears at the front left of the vehicle, perhaps only the 4 sensors distributed on the left side of the front bumper collect its coordinates and return obstacle information, while the 6 sensors distributed at the center and on the right side collect no obstacle information. N can therefore be chosen as 4; although N could take a larger value, the coordinates collected by more distant sensors correlate only weakly with those collected by sensor 1 and would merely increase the amount of computation, so choosing 4 is generally sufficient.
The obstacle coordinates collected by the N sensors adjacent to sensor 1 are selected, and their feature values are obtained by applying feature transformations to the coordinates, such as squaring or taking the square root.
Preferably, the feature values of the N sensors adjacent to sensor 1 and the label of sensor 1 are used to generate a training sample.
A sample whose label is true is a positive sample; a sample whose label is false is a negative sample.
Preferably, training samples are generated from the feature values of the N sensors adjacent to sensor 1 and the label of sensor 1 at consecutive moments.
By testing in the obstacle test scenario, a large number of samples for each ultrasonic sensor can be obtained automatically, without manual labeling with laser rangefinders or other equipment, which greatly improves the speed and accuracy of sample acquisition.
a training submodule 44, configured to train the neural network model according to the training samples.
In a preferred implementation of this embodiment, the neural network model is a CNN convolutional neural network model, trained with the labeled training samples, including positive and negative samples.
Preferably, the SGD (Stochastic Gradient Descent) training algorithm is used to train on the multiple sample pairs, finally obtaining the CNN model parameters.
Preferably, for each of the different ultrasonic sensors in the array, training samples are generated from the feature values of its N adjacent sensors and its label, and a CNN convolutional neural network model is trained accordingly. In the method for processing obstacle detection results of an ultrasonic array, the obstacle detection results to be corrected for each sensor are labeled with that sensor's corresponding trained CNN convolutional neural network model; the corresponding label is the label of its obstacle detection result: if it is true, obstacle avoidance is performed according to the collected obstacle coordinates; if the labeling result output by the pre-trained neural network is false, the collected result is discarded and obstacle avoidance is based on a new collection result.
In another preferred implementation of this embodiment, the neural network model is an LSTM long short-term memory neural network model, trained with multiple labeled training samples obtained at consecutive moments.
The LSTM long short-term memory neural network model is trained with the training samples; the trained values of the parameters of the LSTM network are determined from the initial values of the parameters by optimizing (i.e., maximizing or minimizing) an objective function.
With the technical solutions provided by the above embodiments, the results returned by the lidar are used as the ground truth of the ultrasonic array detection to label the obstacle detection results of the ultrasonic array; the labeled obstacle detection results are used as training samples to train the neural network model; and the ultrasonic obstacle detection results are corrected according to the trained neural network model. This improves the accuracy of ultrasonic-array obstacle detection, avoids false detections and missed detections, and improves driving safety.
In the embodiments, the description of each embodiment has its own emphasis; for parts not detailed in one embodiment, reference may be made to the relevant descriptions of other embodiments.
In the several embodiments provided in this application, it should be understood that the disclosed method and apparatus can be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative; for example, the division of the units is only a logical function division, and there may be other divisions in actual implementation, for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented. In addition, the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, apparatuses, or units, and may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of this application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware, or in the form of hardware plus software functional units.
FIG. 5 shows a block diagram of an exemplary computer system/server 012 suitable for implementing embodiments of the present invention. The computer system/server 012 shown in FIG. 5 is only an example and should not impose any limitation on the functions and scope of use of the embodiments of the present invention.
As shown in FIG. 5, the computer system/server 012 is represented in the form of a general-purpose computing device. The components of the computer system/server 012 may include, but are not limited to: one or more processors or processing units 016, a system memory 028, and a bus 018 connecting different system components (including the system memory 028 and the processing unit 016).
The bus 018 represents one or more of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, a processor, or a local bus using any of a variety of bus architectures. By way of example, these architectures include, but are not limited to, the Industry Standard Architecture (ISA) bus, the Micro Channel Architecture (MCA) bus, the Enhanced ISA bus, the Video Electronics Standards Association (VESA) local bus, and the Peripheral Component Interconnect (PCI) bus.
The computer system/server 012 typically includes a variety of computer-system-readable media. These media can be any available media that can be accessed by the computer system/server 012, including volatile and non-volatile media and removable and non-removable media.
The system memory 028 may include computer-system-readable media in the form of volatile memory, such as random access memory (RAM) 030 and/or cache memory 032. The computer system/server 012 may further include other removable/non-removable, volatile/non-volatile computer system storage media. By way of example only, the storage system 034 can be used to read and write non-removable, non-volatile magnetic media (not shown in FIG. 5, usually referred to as a "hard drive"). Although not shown in FIG. 5, a disk drive for reading and writing a removable non-volatile magnetic disk (such as a "floppy disk"), and an optical disc drive for reading and writing a removable non-volatile optical disc (such as a CD-ROM, DVD-ROM, or other optical media), may be provided. In these cases, each drive can be connected to the bus 018 through one or more data media interfaces. The memory 028 may include at least one program product having a set (for example, at least one) of program modules configured to perform the functions of the embodiments of the present invention.
A program/utility 040 having a set (at least one) of program modules 042 may be stored, for example, in the memory 028. Such program modules 042 include, but are not limited to, an operating system, one or more application programs, other program modules, and program data; each of these examples, or some combination thereof, may include an implementation of a network environment. The program modules 042 generally perform the functions and/or methods in the embodiments described in the present invention.
The computer system/server 012 can also communicate with one or more external devices 014 (such as a keyboard, a pointing device, a display 024, etc.). In the present invention, the computer system/server 012 communicates with an external radar device, and can also communicate with one or more devices that enable a user to interact with the computer system/server 012, and/or with any device (such as a network card or modem) that enables the computer system/server 012 to communicate with one or more other computing devices. This communication can be performed through an input/output (I/O) interface 022. Moreover, the computer system/server 012 can also communicate with one or more networks (such as a local area network (LAN), a wide area network (WAN), and/or a public network such as the Internet) through the network adapter 020. As shown in FIG. 5, the network adapter 020 communicates with the other modules of the computer system/server 012 through the bus 018. It should be understood that although not shown in FIG. 5, other hardware and/or software modules can be used in conjunction with the computer system/server 012, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems.
The processing unit 016 executes the functions and/or methods in the embodiments described in the present invention by running programs stored in the system memory 028.
The above-mentioned computer program may be provided in a computer storage medium, that is, the computer storage medium is encoded with a computer program; when the program is executed by one or more computers, the one or more computers execute the method flows and/or apparatus operations shown in the above embodiments of the present invention.
With the development of time and technology, the meaning of "medium" has become increasingly broad: the propagation path of a computer program is no longer limited to tangible media, and it can also be downloaded directly from a network, among other ways. Any combination of one or more computer-readable media may be used. The computer-readable medium may be a computer-readable signal medium or a computer-readable storage medium. The computer-readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples (a non-exhaustive list) of computer-readable storage media include: an electrical connection with one or more wires, a portable computer disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In this document, a computer-readable storage medium may be any tangible medium containing or storing a program that can be used by or in combination with an instruction execution system, apparatus, or device.
A computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, carrying computer-readable program code. Such a propagated data signal can take many forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the above. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium; this medium may send, propagate, or transmit a program for use by or in conjunction with an instruction execution system, apparatus, or device.
The program code contained on the computer-readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wire, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
The computer program code for performing the operations of the present invention can be written in one or more programming languages or a combination thereof. The programming languages include object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as an independent software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case involving a remote computer, the remote computer can be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or it can be connected to an external computer (for example, through the Internet using an Internet service provider).
The above are only preferred embodiments of the present invention and are not intended to limit the present invention. Any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present invention shall be included within the protection scope of the present invention.

Claims (18)

  1. A method for processing obstacle detection results of an ultrasonic array, characterized by comprising:
    acquiring obstacle coordinates collected by each ultrasonic sensor in an ultrasonic sensor array;
    inputting the obstacle coordinates collected by each ultrasonic sensor in the ultrasonic sensor array into a pre-trained neural network model to obtain true/false labels, output by the pre-trained neural network model, for the obstacle coordinates collected by each ultrasonic sensor;
    processing the obstacle coordinates collected by each ultrasonic sensor in the ultrasonic sensor array according to the true/false labels.
  2. The method according to claim 1, characterized in that inputting the obstacle coordinates collected by each ultrasonic sensor in the ultrasonic sensor array into a pre-trained neural network model comprises:
    selecting feature values of the obstacle coordinates collected by the ultrasonic sensor to be processed and its N adjacent ultrasonic sensors to generate a data sequence, wherein N is a positive integer smaller than the total number of sensors in the ultrasonic sensor array;
    inputting the data sequence into the pre-trained neural network model.
  3. The method according to claim 1, characterized in that the neural network model is trained by the following method:
    constructing an obstacle test scenario;
    acquiring the coordinates of an obstacle collected by each ultrasonic sensor in the ultrasonic sensor array and the coordinates collected by a lidar for the same obstacle, and unifying them into a reference coordinate system;
    labeling the obstacle coordinates collected by each ultrasonic sensor as true or false according to the unified obstacle coordinates collected by the lidar, to generate training samples;
    training the neural network model according to the training samples.
  4. The method according to claim 3, characterized in that labeling the obstacle coordinates collected by each ultrasonic sensor as true or false according to the unified obstacle coordinates collected by the lidar comprises:
    if the error between the obstacle coordinates collected by the lidar and the obstacle coordinates collected by an ultrasonic sensor is within a preset threshold range, labeling them as true; if it is greater than the preset threshold range, labeling them as false.
  5. The method according to claim 3, characterized in that generating training samples comprises:
    generating training samples from the feature values of the obstacle coordinates collected by the ultrasonic sensor to be processed and its N adjacent ultrasonic sensors, together with the true/false label of the obstacle coordinates collected by the ultrasonic sensor to be processed, wherein N is a positive integer smaller than the total number of sensors in the ultrasonic sensor array.
  6. The method according to any one of claims 1 to 4, characterized in that the neural network model is a convolutional neural network model.
  7. The method according to any one of claims 1 to 4, characterized in that the neural network model is a long short-term memory neural network model.
  8. The method according to claim 2, characterized in that processing the obstacle coordinates collected by each ultrasonic sensor in the ultrasonic sensor array according to the true/false labels comprises:
    if the labeling result output by the neural network model is true, retaining the obstacle coordinates collected by the ultrasonic sensor to be processed;
    if the labeling result output by the neural network model is false, discarding the obstacle coordinates collected by the ultrasonic sensor to be processed.
  9. A system for processing obstacle detection results of an ultrasonic array, characterized by comprising:
    an acquisition module, configured to acquire obstacle coordinates collected by each ultrasonic sensor in an ultrasonic sensor array;
    a neural network labeling module, configured to input the obstacle coordinates collected by each ultrasonic sensor in the ultrasonic sensor array into a pre-trained neural network model to obtain true/false labels, output by the pre-trained neural network model, for the obstacle coordinates collected by each ultrasonic sensor;
    a processing module, configured to process the obstacle coordinates collected by each ultrasonic sensor in the ultrasonic sensor array according to the true/false labels.
  10. The system according to claim 9, characterized in that the neural network labeling module is specifically configured to:
    select feature values of the obstacle coordinates collected by the ultrasonic sensor to be processed and its N adjacent ultrasonic sensors to generate a data sequence, wherein N is a positive integer smaller than the total number of sensors in the ultrasonic sensor array;
    input the data sequence into the pre-trained neural network model.
  11. The system according to claim 9, characterized in that the system further comprises a neural network training module, configured to:
    construct an obstacle test scenario;
    acquire the coordinates of an obstacle collected by each ultrasonic sensor in the ultrasonic sensor array and the coordinates collected by a lidar for the same obstacle, and unify them into a reference coordinate system;
    label the obstacle coordinates collected by each ultrasonic sensor as true or false according to the unified obstacle coordinates collected by the lidar, to generate training samples;
    train the neural network model according to the training samples.
  12. The system according to claim 11, characterized in that, when labeling the obstacle coordinates collected by each ultrasonic sensor as true or false according to the unified obstacle coordinates collected by the lidar, the neural network training module specifically:
    labels them as true if the error between the obstacle coordinates collected by the lidar and the obstacle coordinates collected by an ultrasonic sensor is within a preset threshold range, and labels them as false if it is greater than the preset threshold range.
  13. The system according to claim 11, characterized in that, when generating training samples, the neural network training module specifically:
    generates training samples from the feature values of the obstacle coordinates collected by the ultrasonic sensor to be processed and its N adjacent ultrasonic sensors, together with the true/false label of the obstacle coordinates collected by the ultrasonic sensor to be processed, wherein N is a positive integer smaller than the total number of sensors in the ultrasonic sensor array.
  14. The system according to any one of claims 9 to 12, characterized in that the neural network model is a convolutional neural network model.
  15. The system according to any one of claims 9 to 12, characterized in that the neural network model is a long short-term memory neural network model.
  16. The system according to claim 10, characterized in that the processing module is specifically configured to:
    retain the obstacle coordinates collected by the ultrasonic sensor to be processed if the labeling result output by the neural network model is true;
    discard the obstacle coordinates collected by the ultrasonic sensor to be processed if the labeling result output by the neural network model is false.
  17. A computer device, comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the method according to any one of claims 1 to 8 when executing the program.
  18. A computer-readable storage medium on which a computer program is stored, characterized in that the program implements the method according to any one of claims 1 to 8 when executed by a processor.
PCT/CN2019/126392 2019-01-15 2019-12-18 超声波阵列障碍物检测结果处理方法及系统 WO2020147500A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
EP19910456.3A EP3839564B1 (en) 2019-01-15 2019-12-18 Ultrasonic array-based obstacle detection result processing method and system
US17/278,248 US11933921B2 (en) 2019-01-15 2019-12-18 Method and system for processing obstacle detection result of ultrasonic sensor array
JP2021518846A JP7185811B2 (ja) 2019-01-15 2019-12-18 Method for processing obstacle detection results of an ultrasonic array, computer device, storage medium, program, and system

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910034451.XA CN109870698B (zh) 2019-01-15 2019-01-15 Method and system for processing obstacle detection results of an ultrasonic array
CN201910034451.X 2019-01-15

Publications (1)

Publication Number Publication Date
WO2020147500A1 true WO2020147500A1 (zh) 2020-07-23

Family

ID=66917621

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/126392 WO2020147500A1 (zh) 2019-01-15 2019-12-18 超声波阵列障碍物检测结果处理方法及系统

Country Status (5)

Country Link
US (1) US11933921B2 (zh)
EP (1) EP3839564B1 (zh)
JP (1) JP7185811B2 (zh)
CN (1) CN109870698B (zh)
WO (1) WO2020147500A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113419627A (zh) * 2021-06-18 2021-09-21 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Device control method and apparatus, and storage medium

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109870698B (zh) 2019-01-15 2021-12-24 Apollo Intelligent Technology (Beijing) Co., Ltd. Method and system for processing obstacle detection results of an ultrasonic array
CN110333517B (zh) * 2019-07-11 2022-11-25 Tencent Technology (Shenzhen) Co., Ltd. Obstacle perception method, device, and storage medium
CN110674853A (zh) * 2019-09-09 2020-01-10 Guangzhou Xiaopeng Motors Technology Co., Ltd. Ultrasonic data processing method, device, and vehicle
CN111273268B (zh) * 2020-01-19 2022-07-19 Beijing Baidu Netcom Science and Technology Co., Ltd. Method, device, and electronic equipment for recognizing obstacle types in automatic driving
CN112180357A (zh) * 2020-09-15 2021-01-05 Gree Electric Appliances, Inc. of Zhuhai Safety protection method and system
CN112462353B (zh) * 2020-11-26 2023-08-15 China FAW Co., Ltd. Test method, device, system, and equipment for ultrasonic radar
CN115356400B (zh) * 2022-08-11 2024-07-26 Soochow University BiLSTM-based method and system for non-destructive ultrasonic testing of wood
JP2024035280A (ja) 2022-09-02 2024-03-14 Faurecia Clarion Electronics Co., Ltd. Object detection device

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105922990A (zh) * 2016-05-26 2016-09-07 广州大学 一种基于云端机器学习的车辆环境感知和控制方法
WO2018127498A1 (en) * 2017-01-05 2018-07-12 Koninklijke Philips N.V. Ultrasound imaging system with a neural network for image formation and tissue characterization
CN108845324A (zh) * 2018-06-26 2018-11-20 北京小米移动软件有限公司 障碍物识别方法、装置、设备及存储介质
CN109116374A (zh) * 2017-06-23 2019-01-01 百度在线网络技术(北京)有限公司 确定障碍物距离的方法、装置、设备及存储介质
CN109870698A (zh) * 2019-01-15 2019-06-11 北京百度网讯科技有限公司 一种超声波阵列障碍物检测结果处理方法及系统

Family Cites Families (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2878414B2 (ja) * 1989-09-04 1999-04-05 Ricoh Co., Ltd. Three-dimensional object recognition method
JP3151472B2 (ja) * 1992-12-08 2001-04-03 Nippon Telegraph and Telephone Corp. Method for generating a three-dimensional object image
US7630806B2 (en) * 1994-05-23 2009-12-08 Automotive Technologies International, Inc. System and method for detecting and protecting pedestrians
US7209221B2 (en) 1994-05-23 2007-04-24 Automotive Technologies International, Inc. Method for obtaining and displaying information about objects in a vehicular blind spot
US7426437B2 (en) * 1997-10-22 2008-09-16 Intelligent Technologies International, Inc. Accident avoidance systems and methods
JP3188391B2 (ja) 1996-01-16 2001-07-16 Yazaki Corp. Terminal cap
JP2000098031A (ja) 1998-09-22 2000-04-07 Hitachi Ltd Impulse sonar
DE102005009702A1 (de) * 2005-03-03 2006-09-07 Robert Bosch Gmbh Distance measuring device and method for functional testing of a distance measurement
DE102011013681A1 (de) * 2011-03-11 2012-09-13 Valeo Schalter Und Sensoren Gmbh Method for detecting a parking space, parking assistance system, and motor vehicle with a parking assistance system
US9091613B2 (en) * 2012-06-27 2015-07-28 General Monitors, Inc. Multi-spectral ultrasonic gas leak detector
JP6576624B2 (ja) 2014-09-24 2019-09-18 Penta-Ocean Construction Co., Ltd. Underwater positioning system and underwater positioning method
CN105303179A (zh) * 2015-10-28 2016-02-03 Xiaomi Inc. Fingerprint recognition method and apparatus
CN105738908B (zh) * 2016-01-29 2018-08-24 Yulong Computer Telecommunication Scientific (Shenzhen) Co., Ltd. Anti-collision warning method, apparatus and earphone
US10198822B2 (en) 2016-10-27 2019-02-05 International Business Machines Corporation Systems and user interfaces for determination of electro magnetically identified lesions as included in medical images of differing perspectives
US10198655B2 (en) * 2017-01-24 2019-02-05 Ford Global Technologies, Llc Object detection using recurrent neural network and concatenated feature map
DE102017108348B3 (de) * 2017-04-20 2018-06-21 Valeo Schalter Und Sensoren Gmbh Configuration of a sensor system with a neural network for a motor vehicle
US10884409B2 (en) * 2017-05-01 2021-01-05 Mentor Graphics (Deutschland) Gmbh Training of machine learning sensor data classification system
CN107526085B (zh) * 2017-07-25 2020-07-14 Fujian Netdragon Computer Network Information Technology Co., Ltd. Ultrasonic array ranging modeling method and system
US10551838B2 (en) * 2017-08-08 2020-02-04 Nio Usa, Inc. Method and system for multiple sensor correlation diagnostic and sensor fusion/DNN monitor for autonomous driving application
US11644834B2 (en) * 2017-11-10 2023-05-09 Nvidia Corporation Systems and methods for safe and reliable autonomous vehicles
CN107942335A (zh) * 2017-11-28 2018-04-20 CloudMinds (Beijing) Technologies Co., Ltd. Object recognition method and device
CN108909624B (zh) * 2018-05-13 2021-05-18 Northwestern Polytechnical University Real-time obstacle detection and localization method based on monocular vision

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3839564A4

Also Published As

Publication number Publication date
EP3839564A4 (en) 2021-12-22
US20220035015A1 (en) 2022-02-03
JP7185811B2 (ja) 2022-12-08
JP2021527833A (ja) 2021-10-14
CN109870698A (zh) 2019-06-11
EP3839564A1 (en) 2021-06-23
CN109870698B (zh) 2021-12-24
US11933921B2 (en) 2024-03-19
EP3839564B1 (en) 2023-02-15

Similar Documents

Publication Publication Date Title
WO2020147500A1 Ultrasonic array obstacle detection result processing method and system
JP6893233B2 Image-based data processing method and apparatus, electronic device, computer-readable storage medium and computer program
US11783590B2 (en) Method, apparatus, device and medium for classifying driving scenario data
CN109343061B Sensor calibration method and apparatus, computer device, medium and vehicle
CN109345596B Multi-sensor calibration method and apparatus, computer device, medium and vehicle
US10698106B2 (en) Obstacle detecting method and apparatus, device and storage medium
US10902300B2 (en) Method and apparatus for training fine-grained image recognition model, fine-grained image recognition method and apparatus, and storage mediums
CN109145680B Method, apparatus, device and computer storage medium for obtaining obstacle information
CN106845412B Obstacle recognition method and apparatus, computer device and readable medium
JP2023055697A Autonomous driving test method and apparatus, electronic device and storage medium
CN113642431B Training method and apparatus for object detection model, electronic device and storage medium
CN109558854B Obstacle perception method and apparatus, electronic device and storage medium
CN109635868B Method and apparatus for determining obstacle category, electronic device and storage medium
CN115797736B Object detection model training and object detection method, apparatus, device and medium
CN112834249B Steering parameter detection method, apparatus, device and storage medium
CN104881673A Pattern recognition method and system based on information integration
CN110097121A Driving trajectory classification method and apparatus, electronic device and storage medium
JP2023038164A Obstacle detection method and apparatus, autonomous vehicle, device, and storage medium
CN114186007A High-precision map generation method and apparatus, electronic device and storage medium
CN114091515A Obstacle detection method and apparatus, electronic device and storage medium
CN110850982A AR-based human-machine interaction learning method, system, device and storage medium
CN112418316B Robot repositioning method and apparatus, laser robot and readable storage medium
CN117953581A Action recognition method and apparatus, electronic device and readable storage medium
CN114429631B Three-dimensional object detection method, apparatus, device and storage medium
CN115527083A Image annotation method and apparatus, and electronic device

Legal Events

Date Code Title Description
121 EP: The EPO has been informed by WIPO that EP was designated in this application

Ref document number: 19910456

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2021518846

Country of ref document: JP

Kind code of ref document: A

ENP Entry into the national phase

Ref document number: 2019910456

Country of ref document: EP

Effective date: 20210318

NENP Non-entry into the national phase

Ref country code: DE