CN109116374A - Method, apparatus, device and storage medium for determining obstacle distance - Google Patents
Method, apparatus, device and storage medium for determining obstacle distance
- Publication number
- CN109116374A CN109116374A CN201710488088.XA CN201710488088A CN109116374A CN 109116374 A CN109116374 A CN 109116374A CN 201710488088 A CN201710488088 A CN 201710488088A CN 109116374 A CN109116374 A CN 109116374A
- Authority
- CN
- China
- Prior art keywords
- obstacle
- training
- image
- semantic information
- detection
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S17/00—Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
- G01S17/88—Lidar systems specially adapted for specific applications
- G01S17/93—Lidar systems specially adapted for specific applications for anti-collision purposes
- G01S17/931—Lidar systems specially adapted for specific applications for anti-collision purposes of land vehicles
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S17/00—Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
- G01S17/86—Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- Computer Networks & Wireless Communication (AREA)
- Electromagnetism (AREA)
- General Physics & Mathematics (AREA)
- Traffic Control Systems (AREA)
Abstract
The invention discloses a method, apparatus, device and storage medium for determining an obstacle distance. The method includes: obtaining an obstacle detection model through deep learning; obtaining an image collected by a visual sensor, inputting the image into the obstacle detection model, and obtaining the three-dimensional semantic information of an obstacle in the image output by the model; and determining the distance between the obstacle and an autonomous vehicle according to the three-dimensional semantic information of the obstacle. The solution of the present invention improves the reliability and accuracy of the result.
Description
[Technical Field]
The present invention relates to computer application technologies, and in particular to a method, apparatus, device and storage medium for determining an obstacle distance.
[Background Art]
An autonomous vehicle refers to a vehicle that perceives its surroundings through various sensors and, according to the perceived road, vehicle position, obstacle information and the like, controls its own steering and speed so that it can travel on the road safely and reliably.
During driving, an autonomous vehicle needs to continuously perform obstacle detection, ranging and the like so that corresponding measures such as obstacle avoidance can be taken.
Ranging refers to determining the distance between an obstacle and the autonomous vehicle. In the prior art, the obstacle distance is usually determined in the following two ways.
1) Mode one
The obstacle distance is determined through multi-sensor fusion. The sensors may include range sensors and visual sensors, where the range sensors may include millimeter-wave radar, lidar, ultrasonic radar and the like, and the visual sensors may include cameras and the like.
However, this mode involves calibration between the sensors, is complex to implement, and has low reliability.
2) Mode two
The obstacle distance is determined based on a two-dimensional (2D) image object detection algorithm.
However, this mode lacks three-dimensional spatial information, its precision is low, and the farther the obstacle, the larger the error.
[Summary of the Invention]
In view of this, the present invention provides a method, apparatus, device and storage medium for determining an obstacle distance, which can improve the reliability and accuracy of the result.
The specific technical solution is as follows:
A method for determining an obstacle distance, comprising:
obtaining an obstacle detection model through deep learning;
obtaining an image collected by a visual sensor, inputting the image into the obstacle detection model, and obtaining the three-dimensional semantic information of an obstacle in the image output by the model;
determining the distance between the obstacle and an autonomous vehicle according to the three-dimensional semantic information of the obstacle.
According to a preferred embodiment of the present invention, obtaining the obstacle detection model through deep learning includes:
obtaining training samples, each training sample including a training image and the three-dimensional semantic information of an obstacle in the training image;
training the obstacle detection model according to the training samples.
According to a preferred embodiment of the present invention, the three-dimensional semantic information of the obstacle includes:
N key points, N being a positive integer greater than one;
the physical size of the obstacle;
the heading angle of the obstacle.
According to a preferred embodiment of the present invention, the value of N is 8;
the N key points are the 8 vertices of the detection box that encloses the obstacle.
According to a preferred embodiment of the present invention, obtaining the training samples includes:
obtaining groups of training data collected by a collection vehicle equipped with a visual sensor and a lidar, each group of training data including an image containing an obstacle collected by the visual sensor and the corresponding point cloud data collected by the lidar;
for each group of training data, performing the following processing respectively:
taking the image in the training data as a training image;
determining the physical size and heading angle of the obstacle according to the point cloud data in the training data;
obtaining the N key points of the obstacle manually annotated based on the point cloud data, and projecting them onto the training image;
taking the training image, the N key points projected onto the training image, the physical size of the obstacle and the heading angle of the obstacle as one training sample.
According to a preferred embodiment of the present invention, determining the distance between the obstacle and the autonomous vehicle according to the three-dimensional semantic information of the obstacle includes:
transforming the obstacle from two-dimensional space into three-dimensional space according to the three-dimensional semantic information of the obstacle, and determining the distance between the obstacle and the autonomous vehicle according to the transformation result.
An apparatus for determining an obstacle distance, comprising: a pre-processing unit and an estimation unit;
the pre-processing unit is configured to obtain an obstacle detection model through deep learning;
the estimation unit is configured to obtain an image collected by a visual sensor, input the image into the obstacle detection model, obtain the three-dimensional semantic information of an obstacle in the image output by the model, and determine the distance between the obstacle and an autonomous vehicle according to the three-dimensional semantic information of the obstacle.
According to a preferred embodiment of the present invention, the pre-processing unit includes: a sample acquisition subunit and a model training subunit;
the sample acquisition subunit is configured to obtain training samples, each training sample including a training image and the three-dimensional semantic information of an obstacle in the training image;
the model training subunit is configured to train the obstacle detection model according to the training samples.
According to a preferred embodiment of the present invention, the three-dimensional semantic information of the obstacle includes:
N key points, N being a positive integer greater than one;
the physical size of the obstacle;
the heading angle of the obstacle.
According to a preferred embodiment of the present invention, the value of N is 8;
the N key points are the 8 vertices of the detection box that encloses the obstacle.
According to a preferred embodiment of the present invention, the sample acquisition subunit obtains groups of training data collected by a collection vehicle equipped with a visual sensor and a lidar, each group of training data including an image containing an obstacle collected by the visual sensor and the corresponding point cloud data collected by the lidar;
for each group of training data, the following processing is performed respectively:
taking the image in the training data as a training image;
determining the physical size and heading angle of the obstacle according to the point cloud data in the training data;
obtaining the N key points of the obstacle manually annotated based on the point cloud data, and projecting them onto the training image;
taking the training image, the N key points projected onto the training image, the physical size of the obstacle and the heading angle of the obstacle as one training sample.
According to a preferred embodiment of the present invention, the estimation unit transforms the obstacle from two-dimensional space into three-dimensional space according to the three-dimensional semantic information of the obstacle, and determines the distance between the obstacle and the autonomous vehicle according to the transformation result.
A computer device, comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the method described above when executing the program.
A computer-readable storage medium on which a computer program is stored, wherein the program, when executed by a processor, implements the method described above.
It can be seen from the above description that with the solution of the present invention, an obstacle detection model can be obtained in advance through deep learning. In this way, after an image collected by the visual sensor is obtained, it can be input into the obstacle detection model to obtain the three-dimensional semantic information of the obstacle in the image output by the model, and the distance between the obstacle and the autonomous vehicle can then be determined according to the three-dimensional semantic information of the obstacle. Compared with existing mode one, multi-sensor fusion is not involved, so the reliability is greatly improved and the implementation cost is reduced. In addition, compared with existing mode two, the three-dimensional semantic information of the obstacle is provided, which improves the accuracy of the result.
[Brief Description of the Drawings]
Fig. 1 is a flowchart of a first embodiment of the method for determining an obstacle distance according to the present invention.
Fig. 2 is a flowchart of a second embodiment of the method for determining an obstacle distance according to the present invention.
Fig. 3 is a schematic structural diagram of an embodiment of the apparatus for determining an obstacle distance according to the present invention.
Fig. 4 is a block diagram of an exemplary computer system/server 12 suitable for implementing embodiments of the present invention.
[Detailed Description of the Embodiments]
In order to make the technical solution of the present invention clearer, the solution of the present invention is further described below with reference to the drawings and embodiments.
Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those skilled in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
Fig. 1 is a flowchart of the first embodiment of the method for determining an obstacle distance according to the present invention. As shown in Fig. 1, the method includes the following implementation.
In 101, an obstacle detection model is obtained through deep learning.
To obtain the obstacle detection model, training samples need to be obtained first. Each training sample may include a training image and the three-dimensional (3D) semantic information of an obstacle in the training image. The obstacle detection model can then be trained according to the training samples.
The three-dimensional semantic information of the obstacle may include: N key points, the physical size of the obstacle, the heading angle of the obstacle, and the like.
Here, N is a positive integer greater than one, and its specific value can be determined according to actual needs. Preferably, N may be 8; accordingly, the N key points are the 8 vertices of the detection box that encloses the obstacle. That is, the detection box enclosing the obstacle has 8 vertices, and these 8 vertices are the key points. The detection box refers to a box that covers/encloses the obstacle.
The physical size of the obstacle may refer to, for example, the length of the obstacle.
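For illustration, the three-dimensional semantic information described above can be held in a simple record. The following is a minimal sketch in Python; the class and field names are assumptions for illustration, not terminology from the patent.

```python
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class Obstacle3DSemantics:
    """3D semantic information of one obstacle (illustrative representation)."""
    keypoints_2d: List[Tuple[float, float]]  # N image-plane key points; N = 8 detection-box vertices
    size: Tuple[float, ...]                  # physical size of the obstacle, e.g. its length, in meters
    heading_angle: float                     # heading (orientation) angle of the obstacle, in radians
```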
In practical applications, a collection vehicle can be used to collect training data, and training samples can then be generated from the training data.
For example, a visual sensor and a lidar may be mounted on the collection vehicle, the two collecting data synchronously, so as to collect multiple groups of training data; the specific number can be determined according to actual needs.
Each group of training data may include an image containing an obstacle collected by the visual sensor and the corresponding point cloud data collected by the lidar.
For each group of training data, the following processing can be performed respectively:
a. taking the image in the group of training data as a training image;
b. determining the physical size and heading angle of the obstacle according to the point cloud data in the group of training data, which can be implemented using prior art techniques;
c. obtaining the N key points of the obstacle manually annotated based on the point cloud data and projecting them onto the training image; that is, the N key points of the obstacle can be manually annotated according to the point cloud data and, after the annotation is completed, projected onto the two-dimensional training image, as sketched below;
d. taking the training image, the N key points projected onto the training image, the physical size of the obstacle and the heading angle of the obstacle as one training sample.
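The projection in step c is left to standard techniques; the sketch below assumes a pinhole camera model with a 4x4 lidar-to-camera extrinsic matrix T_lidar_to_cam and a 3x3 intrinsic matrix K obtained from calibration (these names and the calibration setup are assumptions for illustration, not details given in the patent).

```python
import numpy as np


def project_keypoints_to_image(keypoints_lidar: np.ndarray,
                               T_lidar_to_cam: np.ndarray,
                               K: np.ndarray) -> np.ndarray:
    """Project the N manually annotated 3D key points (N x 3, lidar frame) onto the training image."""
    n = keypoints_lidar.shape[0]
    # Homogeneous lidar coordinates (N x 4) -> camera-frame coordinates (N x 3)
    pts_h = np.hstack([keypoints_lidar, np.ones((n, 1))])
    pts_cam = (T_lidar_to_cam @ pts_h.T).T[:, :3]
    # Pinhole projection to pixel coordinates (N x 2)
    uv = (K @ pts_cam.T).T
    return uv[:, :2] / uv[:, 2:3]
```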
In the above manner, multiple training samples can be obtained. After a sufficient number of training samples have been obtained, the obstacle detection model can be trained according to these training samples.
The obstacle detection model may be a neural network model or the like.
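The patent does not fix a particular network or training procedure; the following is a minimal training sketch assuming a PyTorch regression network whose target packs the projected key points, physical size and heading angle into one tensor (the loss function and hyper-parameters are likewise illustrative assumptions).

```python
import torch
from torch.utils.data import DataLoader, Dataset


def train_obstacle_detection_model(model: torch.nn.Module, dataset: Dataset,
                                   epochs: int = 10, lr: float = 1e-4) -> torch.nn.Module:
    """Train the obstacle detection model on (training image, 3D semantic target) samples."""
    loader = DataLoader(dataset, batch_size=16, shuffle=True)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = torch.nn.L1Loss()  # plain regression loss on key points, size and heading
    model.train()
    for _ in range(epochs):
        for images, targets in loader:
            optimizer.zero_grad()
            predictions = model(images)
            loss = criterion(predictions, targets)
            loss.backward()
            optimizer.step()
    return model
```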
In 102, an image collected by the visual sensor is obtained and input into the obstacle detection model, and the three-dimensional semantic information of the obstacle in the image output by the model is obtained.
After the obstacle detection model has been trained, the obstacle distance can be determined based on it.
For example, during the driving of the autonomous vehicle, each image collected by the visual sensor can be input into the obstacle detection model respectively, so as to obtain the three-dimensional semantic information of the obstacle in the image output by the obstacle detection model.
As mentioned above, the output three-dimensional semantic information of the obstacle may include the N key points, the physical size of the obstacle, the heading angle of the obstacle, and the like.
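A minimal sketch of this inference step, assuming the trained model is a PyTorch module that regresses the key points, size and heading as one flattened output vector (the output layout below is an illustrative assumption, not the patent's specification):

```python
import torch


def detect_obstacles(model: torch.nn.Module, image: torch.Tensor) -> dict:
    """Run one preprocessed camera frame (3, H, W) through the trained obstacle detection model."""
    model.eval()
    with torch.no_grad():
        out = model(image.unsqueeze(0))[0]       # add, then drop, the batch dimension
    return {
        "keypoints_2d": out[:16].reshape(8, 2),  # 8 image-plane key points (detection-box vertices)
        "size": out[16:19],                      # physical size of the obstacle
        "heading_angle": out[19],                # heading (orientation) angle
    }
```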
In 103, the distance between the obstacle and the autonomous vehicle is determined according to the three-dimensional semantic information of the obstacle.
After the three-dimensional semantic information of the obstacle is obtained, the obstacle can be transformed from two-dimensional space into three-dimensional space according to the three-dimensional semantic information, which can be implemented using prior art techniques. The distance between the obstacle and the autonomous vehicle can then be determined according to the transformation result.
For example, with the center point of the autonomous vehicle taken as the coordinate origin of the three-dimensional space, once the obstacle has been transformed into the three-dimensional space, the distance between the obstacle and the autonomous vehicle can easily be determined.
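The 2D-to-3D transformation itself is left to prior art; the sketch below only illustrates the final distance step described here, assuming the transformation has already produced the 8 detection-box vertices in the vehicle coordinate system with the vehicle center as the origin (using the nearest vertex as the reference point is a design choice not specified by the patent).

```python
import numpy as np


def obstacle_distance(box_vertices_vehicle: np.ndarray) -> float:
    """Distance between the obstacle and the autonomous vehicle.

    box_vertices_vehicle: (8, 3) detection-box vertices in the vehicle frame,
    whose origin is the vehicle center point, as in the example above.
    """
    distances = np.linalg.norm(box_vertices_vehicle, axis=1)
    return float(distances.min())  # nearest vertex; the box center would be an equally valid convention
```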
Based on the above description, Fig. 2 is a flowchart of the second embodiment of the method for determining an obstacle distance according to the present invention. As shown in Fig. 2, the method includes the following implementation.
In 201, training data collected by the collection vehicle is obtained, and training samples are generated according to the training data.
Each training sample may include a training image and the three-dimensional semantic information of an obstacle in the training image.
The three-dimensional semantic information of the obstacle may include: N key points, the physical size of the obstacle, the heading angle of the obstacle, and the like.
In 202, the obstacle detection model is trained according to the training samples.
After a sufficient number of training samples have been obtained, the obstacle detection model, such as a neural network model, can be trained according to these training samples.
In 203, an image collected by the visual sensor is obtained and input into the obstacle detection model, and the three-dimensional semantic information of the obstacle in the image output by the model is obtained.
For example, during the driving of the autonomous vehicle, each image collected by the visual sensor can be input into the obstacle detection model respectively, so as to obtain the three-dimensional semantic information of the obstacle in the image output by the obstacle detection model.
In 204, the obstacle is transformed from two-dimensional space into three-dimensional space according to the three-dimensional semantic information of the obstacle, and the distance between the obstacle and the autonomous vehicle is determined according to the transformation result.
It should be noted that, for simplicity of description, the foregoing method embodiments are described as a series of action combinations. However, those skilled in the art should understand that the present application is not limited by the described order of actions, because according to the present application some steps may be performed in other orders or simultaneously. Furthermore, those skilled in the art should also understand that the embodiments described in the specification are all preferred embodiments, and the actions and modules involved are not necessarily required by the present application.
In the above embodiments, each embodiment has its own emphasis; for parts not described in detail in one embodiment, reference may be made to the related descriptions of other embodiments.
In short, with the solutions described in the above embodiments, an obstacle detection model can be obtained in advance through deep learning. In this way, after an image collected by the visual sensor is obtained, it can be input into the obstacle detection model to obtain the three-dimensional semantic information of the obstacle in the image output by the model, and the distance between the obstacle and the autonomous vehicle can then be determined according to the three-dimensional semantic information of the obstacle. Compared with existing mode one, multi-sensor fusion is not involved, so the reliability is greatly improved and the implementation cost is reduced. In addition, compared with existing mode two, the three-dimensional semantic information of the obstacle is provided, which improves the accuracy of the result.
The above describes the method embodiments. The solution of the present invention is further described below through an apparatus embodiment.
Fig. 3 is a schematic structural diagram of an embodiment of the apparatus for determining an obstacle distance according to the present invention. As shown in Fig. 3, the apparatus includes a pre-processing unit 301 and an estimation unit 302.
The pre-processing unit 301 is configured to obtain an obstacle detection model through deep learning.
The estimation unit 302 is configured to obtain an image collected by a visual sensor, input the image into the obstacle detection model, obtain the three-dimensional semantic information of an obstacle in the image output by the model, and determine the distance between the obstacle and the autonomous vehicle according to the three-dimensional semantic information of the obstacle.
To obtain the obstacle detection model, training samples need to be obtained first. Each training sample may include a training image and the three-dimensional semantic information of an obstacle in the training image. The obstacle detection model can then be trained according to the training samples.
Accordingly, as shown in Fig. 3, the pre-processing unit 301 may specifically include a sample acquisition subunit 3011 and a model training subunit 3012.
The sample acquisition subunit 3011 is configured to obtain training samples, each training sample including a training image and the three-dimensional semantic information of an obstacle in the training image.
The model training subunit 3012 is configured to train the obstacle detection model according to the training samples.
The three-dimensional semantic information of the obstacle may include: N key points, the physical size of the obstacle, the heading angle of the obstacle, and the like.
Here, N is a positive integer greater than one, and its specific value can be determined according to actual needs. Preferably, N may be 8; accordingly, the N key points are the 8 vertices of the detection box that encloses the obstacle.
The physical size of the obstacle may refer to, for example, the length of the obstacle.
In practical applications, a collection vehicle can be used to collect training data, and training samples can then be generated from the training data.
In this way, the sample acquisition subunit 3011 can obtain groups of training data collected by a collection vehicle equipped with a visual sensor and a lidar, each group of training data including an image containing an obstacle collected by the visual sensor and the corresponding point cloud data collected by the lidar.
Then, for each group of training data, the sample acquisition subunit 3011 can perform the following processing respectively:
taking the image in the training data as a training image;
determining the physical size and heading angle of the obstacle according to the point cloud data in the training data;
obtaining the N key points of the obstacle manually annotated based on the point cloud data, and projecting them onto the training image;
taking the training image, the N key points projected onto the training image, the physical size of the obstacle and the heading angle of the obstacle as one training sample.
In the above manner, multiple training samples can be obtained. After a sufficient number of training samples have been obtained, the model training subunit 3012 can train the obstacle detection model, such as a neural network model, according to these training samples.
After the obstacle detection model has been trained, the estimation unit 302 can input each image collected by the visual sensor into the obstacle detection model respectively, so as to obtain the three-dimensional semantic information of the obstacle in the image output by the obstacle detection model.
Further, the estimation unit 302 can transform the obstacle from two-dimensional space into three-dimensional space according to the three-dimensional semantic information of the obstacle, and then determine the distance between the obstacle and the autonomous vehicle according to the transformation result.
For the specific workflow of the apparatus embodiment shown in Fig. 3, reference may be made to the corresponding descriptions in the foregoing method embodiments, which will not be repeated here.
It can be seen that with the solution described in the above embodiment, an obstacle detection model can be obtained in advance through deep learning. In this way, after an image collected by the visual sensor is obtained, it can be input into the obstacle detection model to obtain the three-dimensional semantic information of the obstacle in the image output by the model, and the distance between the obstacle and the autonomous vehicle can then be determined according to the three-dimensional semantic information of the obstacle. Compared with existing mode one, multi-sensor fusion is not involved, so the reliability is greatly improved and the implementation cost is reduced. In addition, compared with existing mode two, the three-dimensional semantic information of the obstacle is provided, which improves the accuracy of the result.
Fig. 4 shows a block diagram of an exemplary computer system/server 12 suitable for implementing embodiments of the present invention. The computer system/server 12 shown in Fig. 4 is only an example and should not impose any limitation on the function and scope of use of the embodiments of the present invention.
As shown in Fig. 4, the computer system/server 12 is embodied in the form of a general-purpose computing device. The components of the computer system/server 12 may include, but are not limited to: one or more processors (processing units) 16, a memory 28, and a bus 18 connecting the different system components (including the memory 28 and the processor 16).
The bus 18 represents one or more of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, a processor, or a local bus using any of a variety of bus architectures. By way of example, these architectures include, but are not limited to, the Industry Standard Architecture (ISA) bus, the Micro Channel Architecture (MCA) bus, the Enhanced ISA bus, the Video Electronics Standards Association (VESA) local bus and the Peripheral Component Interconnect (PCI) bus.
The computer system/server 12 typically includes a variety of computer-system-readable media. These media may be any available media that can be accessed by the computer system/server 12, including volatile and non-volatile media, and removable and non-removable media.
The memory 28 may include computer-system-readable media in the form of volatile memory, such as a random access memory (RAM) 30 and/or a cache memory 32. The computer system/server 12 may further include other removable/non-removable, volatile/non-volatile computer system storage media. By way of example only, a storage system 34 may be used for reading from and writing to a non-removable, non-volatile magnetic medium (not shown in Fig. 4, commonly referred to as a "hard disk drive"). Although not shown in Fig. 4, a magnetic disk drive for reading from and writing to a removable non-volatile magnetic disk (such as a "floppy disk") and an optical disk drive for reading from and writing to a removable non-volatile optical disk (such as a CD-ROM, a DVD-ROM or other optical media) may be provided. In these cases, each drive may be connected to the bus 18 through one or more data media interfaces. The memory 28 may include at least one program product having a set (for example, at least one) of program modules, and these program modules are configured to perform the functions of the embodiments of the present invention.
A program/utility 40 having a set (at least one) of program modules 42 may be stored, for example, in the memory 28. Such program modules 42 include, but are not limited to, an operating system, one or more application programs, other program modules and program data, and each of these examples or some combination thereof may include an implementation of a network environment. The program modules 42 generally perform the functions and/or methods of the embodiments described in the present invention.
The computer system/server 12 may also communicate with one or more external devices 14 (such as a keyboard, a pointing device or a display 24), with one or more devices that enable a user to interact with the computer system/server 12, and/or with any device (such as a network card or a modem) that enables the computer system/server 12 to communicate with one or more other computing devices. Such communication may be carried out through an input/output (I/O) interface 22. Moreover, the computer system/server 12 may also communicate with one or more networks (such as a local area network (LAN), a wide area network (WAN) and/or a public network such as the Internet) through a network adapter 20. As shown in Fig. 4, the network adapter 20 communicates with the other modules of the computer system/server 12 through the bus 18. It should be understood that, although not shown in the drawings, other hardware and/or software modules may be used in conjunction with the computer system/server 12, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, data backup storage systems, and the like.
The processor 16 executes various functional applications and data processing by running the programs stored in the memory 28, for example, implementing the method in the embodiment shown in Fig. 1, that is, obtaining an obstacle detection model through deep learning, obtaining an image collected by the visual sensor, inputting the image into the obstacle detection model, obtaining the three-dimensional semantic information of the obstacle in the image output by the model, and determining the distance between the obstacle and the autonomous vehicle according to the three-dimensional semantic information of the obstacle.
For the specific implementation, reference may be made to the related descriptions in the foregoing embodiments, which will not be repeated here.
The present invention also discloses a computer-readable storage medium on which a computer program is stored. When the program is executed by a processor, the method in the embodiment shown in Fig. 1 is implemented.
Any combination of one or more computer-readable media may be used. The computer-readable medium may be a computer-readable signal medium or a computer-readable storage medium. The computer-readable storage medium may be, for example, but is not limited to, an electrical, magnetic, optical, electromagnetic, infrared or semiconductor system, apparatus or device, or any combination of the above. More specific examples (a non-exhaustive list) of the computer-readable storage medium include: an electrical connection with one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In this document, a computer-readable storage medium may be any tangible medium containing or storing a program, and the program may be used by or in combination with an instruction execution system, apparatus or device.
A computer-readable signal medium may include a data signal propagated in a baseband or as part of a carrier wave, in which computer-readable program code is carried. Such a propagated data signal may take various forms, including, but not limited to, an electromagnetic signal, an optical signal, or any suitable combination of the above. The computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium, and the computer-readable medium may send, propagate or transmit a program for use by or in combination with an instruction execution system, apparatus or device.
The program code contained on the computer-readable medium may be transmitted by any suitable medium, including, but not limited to, wireless, wire, optical cable, RF, etc., or any suitable combination of the above.
Computer program code for carrying out the operations of the present invention may be written in one or more programming languages or a combination thereof. The programming languages include object-oriented programming languages such as Java, Smalltalk and C++, and also include conventional procedural programming languages such as the "C" language or similar programming languages. The program code may be executed entirely on the user's computer, partly on the user's computer, as an independent software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case involving a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
In the several embodiments provided by the present invention, it should be understood that the disclosed apparatus, method and the like may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative; for instance, the division of the units is only a logical function division, and there may be other division manners in actual implementation.
The units described as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit. The above integrated unit may be implemented in the form of hardware, or in the form of hardware plus a software functional unit.
The above integrated unit implemented in the form of a software functional unit may be stored in a computer-readable storage medium. Such a software functional unit is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device or the like) or a processor to execute part of the steps of the methods of the embodiments of the present invention. The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), a magnetic disk or an optical disk.
The above are only preferred embodiments of the present invention and are not intended to limit the present invention. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present invention shall be included in the protection scope of the present invention.
Claims (14)
1. A method for determining an obstacle distance, characterized by comprising:
obtaining an obstacle detection model through deep learning;
obtaining an image collected by a visual sensor, inputting the image into the obstacle detection model, and obtaining the three-dimensional semantic information of an obstacle in the image output by the model;
determining the distance between the obstacle and an autonomous vehicle according to the three-dimensional semantic information of the obstacle.
2. The method according to claim 1, characterized in that
obtaining the obstacle detection model through deep learning comprises:
obtaining training samples, each training sample comprising a training image and the three-dimensional semantic information of an obstacle in the training image;
training the obstacle detection model according to the training samples.
3. The method according to claim 2, characterized in that
the three-dimensional semantic information of the obstacle comprises:
N key points, N being a positive integer greater than one;
the physical size of the obstacle;
the heading angle of the obstacle.
4. The method according to claim 3, characterized in that
the value of N is 8;
the N key points are the 8 vertices of the detection box that encloses the obstacle.
5. The method according to claim 3, characterized in that
obtaining the training samples comprises:
obtaining groups of training data collected by a collection vehicle equipped with a visual sensor and a lidar, each group of training data comprising an image containing an obstacle collected by the visual sensor and the corresponding point cloud data collected by the lidar;
for each group of training data, performing the following processing respectively:
taking the image in the training data as a training image;
determining the physical size and heading angle of the obstacle according to the point cloud data in the training data;
obtaining the N key points of the obstacle manually annotated based on the point cloud data, and projecting them onto the training image;
taking the training image, the N key points projected onto the training image, the physical size of the obstacle and the heading angle of the obstacle as one training sample.
6. The method according to claim 1, characterized in that
determining the distance between the obstacle and the autonomous vehicle according to the three-dimensional semantic information of the obstacle comprises:
transforming the obstacle from two-dimensional space into three-dimensional space according to the three-dimensional semantic information of the obstacle, and determining the distance between the obstacle and the autonomous vehicle according to the transformation result.
7. An apparatus for determining an obstacle distance, characterized by comprising: a pre-processing unit and an estimation unit;
the pre-processing unit is configured to obtain an obstacle detection model through deep learning;
the estimation unit is configured to obtain an image collected by a visual sensor, input the image into the obstacle detection model, obtain the three-dimensional semantic information of an obstacle in the image output by the model, and determine the distance between the obstacle and an autonomous vehicle according to the three-dimensional semantic information of the obstacle.
8. The apparatus according to claim 7, characterized in that
the pre-processing unit comprises: a sample acquisition subunit and a model training subunit;
the sample acquisition subunit is configured to obtain training samples, each training sample comprising a training image and the three-dimensional semantic information of an obstacle in the training image;
the model training subunit is configured to train the obstacle detection model according to the training samples.
9. The apparatus according to claim 8, characterized in that
the three-dimensional semantic information of the obstacle comprises:
N key points, N being a positive integer greater than one;
the physical size of the obstacle;
the heading angle of the obstacle.
10. The apparatus according to claim 9, characterized in that
the value of N is 8;
the N key points are the 8 vertices of the detection box that encloses the obstacle.
11. The apparatus according to claim 9, characterized in that
the sample acquisition subunit obtains groups of training data collected by a collection vehicle equipped with a visual sensor and a lidar, each group of training data comprising an image containing an obstacle collected by the visual sensor and the corresponding point cloud data collected by the lidar;
for each group of training data, the following processing is performed respectively:
taking the image in the training data as a training image;
determining the physical size and heading angle of the obstacle according to the point cloud data in the training data;
obtaining the N key points of the obstacle manually annotated based on the point cloud data, and projecting them onto the training image;
taking the training image, the N key points projected onto the training image, the physical size of the obstacle and the heading angle of the obstacle as one training sample.
12. The apparatus according to claim 7, characterized in that
the estimation unit transforms the obstacle from two-dimensional space into three-dimensional space according to the three-dimensional semantic information of the obstacle, and determines the distance between the obstacle and the autonomous vehicle according to the transformation result.
13. A computer device, comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor, when executing the program, implements the method according to any one of claims 1 to 6.
14. A computer-readable storage medium on which a computer program is stored, characterized in that the program, when executed by a processor, implements the method according to any one of claims 1 to 6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710488088.XA CN109116374B (en) | 2017-06-23 | 2017-06-23 | Method, device and equipment for determining distance of obstacle and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710488088.XA CN109116374B (en) | 2017-06-23 | 2017-06-23 | Method, device and equipment for determining distance of obstacle and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109116374A true CN109116374A (en) | 2019-01-01 |
CN109116374B CN109116374B (en) | 2021-08-17 |
Family
ID=64732151
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710488088.XA Active CN109116374B (en) | 2017-06-23 | 2017-06-23 | Method, device and equipment for determining distance of obstacle and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109116374B (en) |
Cited By (20)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110008941A (en) * | 2019-06-05 | 2019-07-12 | 长沙智能驾驶研究院有限公司 | Drivable region detection method, device, computer equipment and storage medium |
CN110488319A (en) * | 2019-08-22 | 2019-11-22 | 重庆长安汽车股份有限公司 | A kind of collision distance calculation method and system merged based on ultrasonic wave and camera |
CN110502019A (en) * | 2019-09-06 | 2019-11-26 | 北京云迹科技有限公司 | A kind of barrier-avoiding method and device of Indoor Robot |
CN111160172A (en) * | 2019-12-19 | 2020-05-15 | 深圳佑驾创新科技有限公司 | Parking space detection method and device, computer equipment and storage medium |
CN111179300A (en) * | 2019-12-16 | 2020-05-19 | 新奇点企业管理集团有限公司 | Method, apparatus, system, device and storage medium for obstacle detection |
CN111324115A (en) * | 2020-01-23 | 2020-06-23 | 北京百度网讯科技有限公司 | Obstacle position detection fusion method and device, electronic equipment and storage medium |
CN111353453A (en) * | 2020-03-06 | 2020-06-30 | 北京百度网讯科技有限公司 | Obstacle detection method and apparatus for vehicle |
WO2020147500A1 (en) * | 2019-01-15 | 2020-07-23 | 北京百度网讯科技有限公司 | Ultrasonic array-based obstacle detection result processing method and system |
CN111950501A (en) * | 2020-08-21 | 2020-11-17 | 东软睿驰汽车技术(沈阳)有限公司 | Obstacle detection method and device and electronic equipment |
CN112613424A (en) * | 2020-12-27 | 2021-04-06 | 盛视达(天津)科技有限公司 | Rail obstacle detection method, rail obstacle detection device, electronic apparatus, and storage medium |
CN113110425A (en) * | 2021-03-29 | 2021-07-13 | 重庆智行者信息科技有限公司 | Target car system based on automatic driving |
CN113228043A (en) * | 2019-01-22 | 2021-08-06 | 深圳市大疆创新科技有限公司 | System and method for obstacle detection and association of mobile platform based on neural network |
CN113486837A (en) * | 2021-07-19 | 2021-10-08 | 安徽江淮汽车集团股份有限公司 | Automatic driving control method for low-pass obstacle |
CN113557524A (en) * | 2019-03-19 | 2021-10-26 | 罗伯特·博世有限公司 | Method for representing a mobile platform environment |
CN113692587A (en) * | 2019-02-19 | 2021-11-23 | 特斯拉公司 | Estimating object properties using visual images |
CN113777644A (en) * | 2021-08-31 | 2021-12-10 | 盐城中科高通量计算研究院有限公司 | Unmanned positioning method based on weak signal scene |
CN113848931A (en) * | 2021-10-09 | 2021-12-28 | 上海联适导航技术股份有限公司 | Agricultural machinery automatic driving obstacle recognition method, system, equipment and storage medium |
CN114556449A (en) * | 2020-12-17 | 2022-05-27 | 深圳市大疆创新科技有限公司 | Obstacle detection and re-identification method and device, movable platform and storage medium |
CN115607052A (en) * | 2022-12-19 | 2023-01-17 | 科大讯飞股份有限公司 | Cleaning method, device and equipment of robot and cleaning robot |
WO2024087456A1 (en) * | 2022-10-26 | 2024-05-02 | 北京三快在线科技有限公司 | Determination of orientation information and autonomous vehicle |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101975951A (en) * | 2010-06-09 | 2011-02-16 | 北京理工大学 | Field environment barrier detection method fusing distance and image information |
CN104700414A (en) * | 2015-03-23 | 2015-06-10 | 华中科技大学 | Rapid distance-measuring method for pedestrian on road ahead on the basis of on-board binocular camera |
CN105425809A (en) * | 2015-12-02 | 2016-03-23 | 深圳市易飞行科技有限公司 | Obstacle avoiding method and system for unmanned plane |
CN105946853A (en) * | 2016-04-28 | 2016-09-21 | 中山大学 | Long-distance automatic parking system and method based on multi-sensor fusion |
CN106502267A (en) * | 2016-12-06 | 2017-03-15 | 上海师范大学 | A kind of unmanned plane avoidance system |
CN106707293A (en) * | 2016-12-01 | 2017-05-24 | 百度在线网络技术(北京)有限公司 | Obstacle recognition method and device for vehicles |
CN106873566A (en) * | 2017-03-14 | 2017-06-20 | 东北大学 | A kind of unmanned logistic car based on deep learning |
- 2017-06-23: CN201710488088.XA filed; granted as CN109116374B (status: Active)
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101975951A (en) * | 2010-06-09 | 2011-02-16 | 北京理工大学 | Field environment barrier detection method fusing distance and image information |
CN104700414A (en) * | 2015-03-23 | 2015-06-10 | 华中科技大学 | Rapid distance-measuring method for pedestrian on road ahead on the basis of on-board binocular camera |
CN105425809A (en) * | 2015-12-02 | 2016-03-23 | 深圳市易飞行科技有限公司 | Obstacle avoiding method and system for unmanned plane |
CN105946853A (en) * | 2016-04-28 | 2016-09-21 | 中山大学 | Long-distance automatic parking system and method based on multi-sensor fusion |
CN106707293A (en) * | 2016-12-01 | 2017-05-24 | 百度在线网络技术(北京)有限公司 | Obstacle recognition method and device for vehicles |
CN106502267A (en) * | 2016-12-06 | 2017-03-15 | 上海师范大学 | A kind of unmanned plane avoidance system |
CN106873566A (en) * | 2017-03-14 | 2017-06-20 | 东北大学 | A kind of unmanned logistic car based on deep learning |
Cited By (29)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2020147500A1 (en) * | 2019-01-15 | 2020-07-23 | 北京百度网讯科技有限公司 | Ultrasonic array-based obstacle detection result processing method and system |
US11933921B2 (en) | 2019-01-15 | 2024-03-19 | Beijing Baidu Netcom Science And Technology Co., Ltd. | Method and system for processing obstacle detection result of ultrasonic sensor array |
CN113228043A (en) * | 2019-01-22 | 2021-08-06 | 深圳市大疆创新科技有限公司 | System and method for obstacle detection and association of mobile platform based on neural network |
CN113692587A (en) * | 2019-02-19 | 2021-11-23 | 特斯拉公司 | Estimating object properties using visual images |
US11790664B2 (en) | 2019-02-19 | 2023-10-17 | Tesla, Inc. | Estimating object properties using visual image data |
CN113557524A (en) * | 2019-03-19 | 2021-10-26 | 罗伯特·博世有限公司 | Method for representing a mobile platform environment |
CN110008941A (en) * | 2019-06-05 | 2019-07-12 | 长沙智能驾驶研究院有限公司 | Drivable region detection method, device, computer equipment and storage medium |
CN110488319A (en) * | 2019-08-22 | 2019-11-22 | 重庆长安汽车股份有限公司 | A kind of collision distance calculation method and system merged based on ultrasonic wave and camera |
CN110488319B (en) * | 2019-08-22 | 2023-04-07 | 重庆长安汽车股份有限公司 | Ultrasonic wave and camera fusion-based collision distance calculation method and system |
CN110502019A (en) * | 2019-09-06 | 2019-11-26 | 北京云迹科技有限公司 | A kind of barrier-avoiding method and device of Indoor Robot |
CN111179300A (en) * | 2019-12-16 | 2020-05-19 | 新奇点企业管理集团有限公司 | Method, apparatus, system, device and storage medium for obstacle detection |
CN111160172A (en) * | 2019-12-19 | 2020-05-15 | 深圳佑驾创新科技有限公司 | Parking space detection method and device, computer equipment and storage medium |
CN111160172B (en) * | 2019-12-19 | 2024-04-16 | 武汉佑驾创新科技有限公司 | Parking space detection method, device, computer equipment and storage medium |
CN111324115B (en) * | 2020-01-23 | 2023-09-19 | 北京百度网讯科技有限公司 | Obstacle position detection fusion method, obstacle position detection fusion device, electronic equipment and storage medium |
CN111324115A (en) * | 2020-01-23 | 2020-06-23 | 北京百度网讯科技有限公司 | Obstacle position detection fusion method and device, electronic equipment and storage medium |
CN111353453B (en) * | 2020-03-06 | 2023-08-25 | 北京百度网讯科技有限公司 | Obstacle detection method and device for vehicle |
CN111353453A (en) * | 2020-03-06 | 2020-06-30 | 北京百度网讯科技有限公司 | Obstacle detection method and apparatus for vehicle |
CN111950501A (en) * | 2020-08-21 | 2020-11-17 | 东软睿驰汽车技术(沈阳)有限公司 | Obstacle detection method and device and electronic equipment |
CN111950501B (en) * | 2020-08-21 | 2024-05-03 | 东软睿驰汽车技术(沈阳)有限公司 | Obstacle detection method and device and electronic equipment |
CN114556449A (en) * | 2020-12-17 | 2022-05-27 | 深圳市大疆创新科技有限公司 | Obstacle detection and re-identification method and device, movable platform and storage medium |
CN112613424A (en) * | 2020-12-27 | 2021-04-06 | 盛视达(天津)科技有限公司 | Rail obstacle detection method, rail obstacle detection device, electronic apparatus, and storage medium |
CN113110425A (en) * | 2021-03-29 | 2021-07-13 | 重庆智行者信息科技有限公司 | Target car system based on automatic driving |
CN113486837A (en) * | 2021-07-19 | 2021-10-08 | 安徽江淮汽车集团股份有限公司 | Automatic driving control method for low-pass obstacle |
CN113777644B (en) * | 2021-08-31 | 2023-06-02 | 盐城中科高通量计算研究院有限公司 | Unmanned positioning method based on weak signal scene |
CN113777644A (en) * | 2021-08-31 | 2021-12-10 | 盐城中科高通量计算研究院有限公司 | Unmanned positioning method based on weak signal scene |
WO2023056789A1 (en) * | 2021-10-09 | 2023-04-13 | 上海联适导航技术股份有限公司 | Obstacle identification method and system for automatic driving of agricultural machine, device, and storage medium |
CN113848931A (en) * | 2021-10-09 | 2021-12-28 | 上海联适导航技术股份有限公司 | Agricultural machinery automatic driving obstacle recognition method, system, equipment and storage medium |
WO2024087456A1 (en) * | 2022-10-26 | 2024-05-02 | 北京三快在线科技有限公司 | Determination of orientation information and autonomous vehicle |
CN115607052A (en) * | 2022-12-19 | 2023-01-17 | 科大讯飞股份有限公司 | Cleaning method, device and equipment of robot and cleaning robot |
Also Published As
Publication number | Publication date |
---|---|
CN109116374B (en) | 2021-08-17 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109116374A (en) | Method, apparatus, device and storage medium for determining obstacle distance | |
CN109271944B (en) | Obstacle detection method, obstacle detection device, electronic apparatus, vehicle, and storage medium | |
CN109084746B (en) | Monocular mode for autonomous platform guidance system with auxiliary sensor | |
EP3627180B1 (en) | Sensor calibration method and device, computer device, medium, and vehicle | |
US11379699B2 (en) | Object detection method and apparatus for object detection | |
KR102126724B1 (en) | Method and apparatus for restoring point cloud data | |
CN111079619B (en) | Method and apparatus for detecting target object in image | |
CN109059902A (en) | Relative pose determines method, apparatus, equipment and medium | |
CN109101861A (en) | Obstacle identity recognition methods, device, equipment and storage medium | |
CN109298629B (en) | System and method for guiding mobile platform in non-mapped region | |
CN109313810A (en) | System and method for being surveyed and drawn to environment | |
CN108629231A (en) | Obstacle detection method, device, equipment and storage medium | |
CN108734058B (en) | Obstacle type identification method, device, equipment and storage medium | |
CN109214348A (en) | A kind of obstacle detection method, device, equipment and storage medium | |
CN109145680A (en) | A kind of method, apparatus, equipment and computer storage medium obtaining obstacle information | |
CN109117691A (en) | Drivable region detection method, device, equipment and storage medium | |
CN109145677A (en) | Obstacle detection method, device, equipment and storage medium | |
CN108352056A (en) | System and method for correcting wrong depth information | |
CN109344804A (en) | A kind of recognition methods of laser point cloud data, device, equipment and medium | |
CN109633688A (en) | A kind of laser radar obstacle recognition method and device | |
CN109118532B (en) | Visual field depth estimation method, device, equipment and storage medium | |
CN110378966A (en) | Camera extrinsic scaling method, device, computer equipment and storage medium | |
US12073575B2 (en) | Object-centric three-dimensional auto labeling of point cloud data | |
CN110349212B (en) | Optimization method and device for instant positioning and map construction, medium and electronic equipment | |
CN111047634B (en) | Scene depth determination method, device, equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |