CN110147094A - Vehicle positioning method and vehicle-mounted terminal based on a vehicle-mounted surround-view system - Google Patents
Vehicle positioning method and vehicle-mounted terminal based on a vehicle-mounted surround-view system
- Publication number
- CN110147094A (application number CN201811330157.5A)
- Authority
- CN
- China
- Prior art keywords
- pose
- vehicle
- target
- feature
- value
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C11/00—Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
- G01C11/04—Interpretation of pictures
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/02—Control of position or course in two dimensions
- G05D1/021—Control of position or course in two dimensions specially adapted to land vehicles
- G05D1/0231—Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
- G05D1/0246—Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/02—Control of position or course in two dimensions
- G05D1/021—Control of position or course in two dimensions specially adapted to land vehicles
- G05D1/0268—Control of position or course in two dimensions specially adapted to land vehicles using internal positioning means
- G05D1/0274—Control of position or course in two dimensions specially adapted to land vehicles using internal positioning means using mapping information stored in a memory device
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/90—Arrangement of cameras or camera modules, e.g. multiple cameras in TV studios or sports stadiums
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W4/00—Services specially adapted for wireless communication networks; Facilities therefor
- H04W4/02—Services making use of location information
- H04W4/024—Guidance services
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W4/00—Services specially adapted for wireless communication networks; Facilities therefor
- H04W4/30—Services specially adapted for particular environments, situations or purposes
- H04W4/40—Services specially adapted for particular environments, situations or purposes for vehicles, e.g. vehicle-to-pedestrians [V2P]
- H04W4/48—Services specially adapted for particular environments, situations or purposes for vehicles, e.g. vehicle-to-pedestrians [V2P] for in-vehicle communication
Abstract
A vehicle positioning method based on a vehicle-mounted surround-view system, and a vehicle-mounted terminal. The method comprises: identifying preset visual features in multiple target images captured at the current time by multiple cameras of a vehicle; identifying, from an autonomous-driving electronic navigation map, target features that match the visual features; and determining the pose of the vehicle at the current time according to the positions of the target features in the autonomous-driving electronic navigation map and the positions of the visual features in the target images. By implementing the embodiments of the present invention, vehicle positioning can be completed using only the visual information provided by the cameras when satellite positioning signals are absent or weak, thereby improving the accuracy of vehicle positioning during automatic driving.
Description
Technical field
The present invention relates to the technical field of automatic driving, and in particular to a vehicle positioning method based on a vehicle-mounted surround-view system and a vehicle-mounted terminal.
Background art
During autonomous navigation, the vehicle's position needs to be determined in real time. The real-time vehicle positioning schemes currently available on the market include vehicle positioning based on satellite navigation systems: using the satellite positioning information provided by a satellite navigation system, combined with measurement data from sensors such as an inertial measurement unit, an accurate vehicle position can be obtained.
However, it has been found in practice that in the above satellite-based vehicle positioning scheme, satellite positioning signals are difficult to receive when the vehicle enters particular environments such as underground parking garages. Moreover, performing position calculation relying only on the measurement data of the inertial measurement unit easily accumulates error; the resulting positioning accuracy is low and can hardly meet the positioning-precision requirements of automatic driving.
Summary of the invention
The embodiments of the present invention disclose a vehicle positioning method based on a vehicle-mounted surround-view system, and a vehicle-mounted terminal, which can improve the precision of vehicle positioning during automatic driving.
A first aspect of the embodiments of the present invention discloses a vehicle positioning method based on a vehicle-mounted surround-view system, the method comprising:
identifying preset visual features in multiple target images captured at the current time by multiple cameras of a vehicle;
identifying, from an autonomous-driving electronic navigation map, target features that match the visual features; and
determining the pose of the vehicle at the current time according to the positions of the target features in the autonomous-driving electronic navigation map and the positions of the visual features in the target images.
As an optional implementation, in the first aspect of the embodiments of the present invention, identifying preset visual features in multiple target images captured at the current time by multiple cameras of the vehicle comprises:
obtaining the multiple target images captured by the multiple cameras at the current time;
stitching the multiple target images to obtain a top-view stitched image; and
inputting the top-view stitched image into a semantic feature detection model, and determining the visual features in the top-view stitched image based on the output of the semantic feature detection model;
wherein the semantic feature detection model is a neural network model trained with sample images annotated with the visual features as model input.
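As a rough illustration of the stitching step above, each ground-facing camera image can be warped into a common top-view frame with a per-camera planar homography (a minimal sketch; the 3x3 matrix values here are hypothetical placeholders, not calibration results from this patent):

```python
def apply_homography(H, x, y):
    """Map a pixel (x, y) through a 3x3 homography H into top-view coordinates."""
    xs = H[0][0] * x + H[0][1] * y + H[0][2]
    ys = H[1][0] * x + H[1][1] * y + H[1][2]
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return xs / w, ys / w  # perspective division

# Hypothetical homography for the front camera: a 10-px vertical shift
# plus a mild perspective term, standing in for a calibrated matrix.
H_front = [[1.0, 0.0, 0.0],
           [0.0, 1.0, 10.0],
           [0.0, 0.001, 1.0]]

u, v = apply_homography(H_front, 100.0, 50.0)
```

In an actual surround-view system, one such matrix per camera (obtained from extrinsic/intrinsic calibration) places all four warped images into the same ground-plane mosaic before the detection model runs.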
As an optional implementation, in the first aspect of the embodiments of the present invention, determining the pose of the vehicle at the current time according to the positions of the target features in the autonomous-driving electronic navigation map and the positions of the visual features in the target images comprises:
calculating, according to the value of an estimated pose and the positions of the visual features in the target images, the mapped positions of the visual features when mapped into the autonomous-driving electronic navigation map;
calculating a first error between the mapped positions of the visual features and the actual positions of the target features in the autonomous-driving electronic navigation map;
judging whether the first error is less than a specified threshold;
when the first error is greater than or equal to the specified threshold, adjusting the value of the estimated pose and repeating the step of calculating, according to the value of the estimated pose and the positions of the visual features in the target images, the mapped positions of the visual features in the autonomous-driving electronic navigation map; and
when the first error is less than the specified threshold, determining the pose of the vehicle at the current time according to the current value of the estimated pose.
Alternatively, determining the pose of the vehicle at the current time according to the positions of the target features in the autonomous-driving electronic navigation map and the positions of the visual features in the target images comprises:
calculating, according to the value of an estimated pose and the positions of the target features in the autonomous-driving electronic navigation map, the projected positions of the target features when projected into the target images;
calculating a second error between the projected positions of the target features and the actual positions of the visual features in the target images;
judging whether the second error is less than a specified threshold;
when the second error is greater than or equal to the specified threshold, adjusting the value of the estimated pose and repeating the step of calculating, according to the value of the estimated pose and the positions of the target features in the autonomous-driving electronic navigation map, the projected positions of the target features in the target images; and
when the second error is less than the specified threshold, determining the pose of the vehicle at the current time according to the current value of the estimated pose.
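The adjust-until-below-threshold loop described above can be sketched in two dimensions. This is a minimal illustration under stated assumptions: a planar pose (x, y, yaw), a single map feature, and a crude coordinate-descent adjustment standing in for whatever optimizer a real implementation would use:

```python
import math

def map_to_world(pose, feat_xy):
    """Map a feature observed in the vehicle frame into map coordinates."""
    x, y, yaw = pose
    fx, fy = feat_xy
    return (x + fx * math.cos(yaw) - fy * math.sin(yaw),
            y + fx * math.sin(yaw) + fy * math.cos(yaw))

def refine_pose(pose, observed, map_pos, threshold=1e-3, step=0.5, max_iter=500):
    """Adjust the estimated pose until the mapping error drops below threshold."""
    for _ in range(max_iter):
        mx, my = map_to_world(pose, observed)
        err = math.hypot(mx - map_pos[0], my - map_pos[1])
        if err < threshold:  # "first error" below the specified threshold
            return pose, err
        best, best_err = pose, err
        # Try small perturbations of each pose component (coordinate descent).
        for i in range(3):
            for d in (step, -step):
                cand = list(pose)
                cand[i] += d
                cx, cy = map_to_world(cand, observed)
                cerr = math.hypot(cx - map_pos[0], cy - map_pos[1])
                if cerr < best_err:
                    best, best_err = tuple(cand), cerr
        if best_err >= err:
            step *= 0.5  # shrink the adjustment when no move improves
        pose = tuple(best)
    return pose, err

# Feature seen 2 m ahead of the vehicle; its map position is (12, 5).
# Start from a coarse initial guess of the pose.
pose, err = refine_pose((9.0, 4.0, 0.2), observed=(2.0, 0.0), map_pos=(12.0, 5.0))
```

The first-error and second-error variants in the text differ only in the direction of the transform (image into map, or map into image); the iterate-adjust-recheck structure is the same.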
As an optional implementation, in the first aspect of the embodiments of the present invention, before calculating the mapped positions of the visual features in the autonomous-driving electronic navigation map according to the value of the estimated pose and the positions of the visual features in the target images, the method further comprises:
taking the pose of the vehicle at the previous moment as a reference, calculating a predicted pose of the vehicle at the current time in combination with a motion model, and then performing the step of calculating the mapped positions of the visual features in the autonomous-driving electronic navigation map according to the value of the estimated pose and the positions of the visual features in the target images; wherein the previous moment is a moment preceding the current time, and the motion model is determined from data acquired by an inertial measurement unit and/or a wheel speed sensor of the vehicle;
and calculating the mapped positions of the visual features in the autonomous-driving electronic navigation map according to the value of the estimated pose and the positions of the visual features in the target images comprises:
taking the value of the predicted pose as the initial value of the estimated pose; and
calculating the mapped positions of the visual features in the autonomous-driving electronic navigation map according to the current value of the estimated pose and the positions of the visual features in the target images;
and calculating the projected positions of the target features in the target images according to the value of the estimated pose and the positions of the target features in the autonomous-driving electronic navigation map comprises:
taking the value of the predicted pose as the initial value of the estimated pose; and
calculating the projected positions of the target features in the target images according to the current value of the estimated pose and the positions of the target features in the autonomous-driving electronic navigation map.
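The motion-model prediction above amounts to dead reckoning from the previous pose. A minimal planar sketch, assuming a wheel-speed reading for forward velocity and an IMU yaw-rate reading (the sensor values are invented for illustration):

```python
import math

def predict_pose(prev_pose, wheel_speed, yaw_rate, dt):
    """Dead-reckon the pose at the current time from the previous pose.

    prev_pose:   (x, y, yaw) at the previous moment
    wheel_speed: forward speed from the wheel speed sensor (m/s)
    yaw_rate:    angular rate from the IMU (rad/s)
    dt:          elapsed time since the previous moment (s)
    """
    x, y, yaw = prev_pose
    yaw_new = yaw + yaw_rate * dt
    # Integrate forward motion along the average heading over the interval.
    mid_yaw = yaw + 0.5 * yaw_rate * dt
    x_new = x + wheel_speed * dt * math.cos(mid_yaw)
    y_new = y + wheel_speed * dt * math.sin(mid_yaw)
    return (x_new, y_new, yaw_new)

# Driving straight at 2 m/s for 1 s from the origin:
pose = predict_pose((0.0, 0.0, 0.0), wheel_speed=2.0, yaw_rate=0.0, dt=1.0)
```

A prediction of this kind only serves as the initial value of the estimated pose; the visual iteration then corrects the drift that pure dead reckoning would accumulate.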
As an optional implementation, in the first aspect of the embodiments of the present invention, the visual features comprise at least one of: lane lines, parking-space lines, parking-space corner points, and lane arrows.
A second aspect of the embodiments of the present invention discloses a vehicle-mounted terminal, comprising:
a recognition unit, configured to identify preset visual features in multiple target images captured at the current time by multiple cameras of a vehicle;
a matching unit, configured to identify, from an autonomous-driving electronic navigation map, target features that match the visual features; and
a determination unit, configured to determine the pose of the vehicle at the current time according to the positions of the target features in the autonomous-driving electronic navigation map and the positions of the visual features in the target images.
As an optional implementation, in the second aspect of the embodiments of the present invention, the recognition unit comprises:
an obtaining subunit, configured to obtain the multiple target images captured by the multiple cameras at the current time;
a stitching subunit, configured to stitch the multiple target images to obtain a top-view stitched image; and
a recognition subunit, configured to input the top-view stitched image into a semantic feature detection model, and determine the visual features in the top-view stitched image based on the output of the semantic feature detection model;
wherein the semantic feature detection model is a neural network model trained with sample images annotated with the visual features as model input.
As an optional implementation, in the second aspect of the embodiments of the present invention, the determination unit comprises:
a first calculation subunit, configured to calculate, according to the value of an estimated pose and the positions of the visual features in the target images, the mapped positions of the visual features in the autonomous-driving electronic navigation map; or to calculate, according to the value of an estimated pose and the positions of the target features in the autonomous-driving electronic navigation map, the projected positions of the target features in the target images;
a second calculation subunit, configured to calculate a first error between the mapped positions of the visual features and the actual positions of the target features in the autonomous-driving electronic navigation map; or to calculate a second error between the projected positions of the target features and the actual positions of the visual features in the target images;
a judgment subunit, configured to judge whether the first error or the second error is less than a specified threshold;
an adjustment subunit, configured to: when the judgment subunit judges that the first error is greater than or equal to the specified threshold, adjust the value of the estimated pose and trigger the first calculation subunit to recalculate the mapped positions of the visual features in the autonomous-driving electronic navigation map according to the value of the estimated pose and the positions of the visual features in the target images; or, when the judgment subunit judges that the second error is greater than or equal to the specified threshold, adjust the value of the estimated pose and trigger the first calculation subunit to recalculate the projected positions of the target features in the target images according to the value of the estimated pose and the positions of the target features in the autonomous-driving electronic navigation map; and
a determination subunit, configured to determine the pose of the vehicle at the current time according to the current value of the estimated pose when the judgment subunit judges that the first error or the second error is less than the specified threshold.
As an optional implementation, in the second aspect of the embodiments of the present invention, the vehicle-mounted terminal further comprises:
a pose calculation unit, configured to: before the first calculation subunit calculates the mapped positions of the visual features in the autonomous-driving electronic navigation map according to the value of the estimated pose and the positions of the visual features in the target images, take the pose of the vehicle at the previous moment as a reference and calculate a predicted pose of the vehicle at the current time in combination with a motion model; wherein the previous moment is a moment preceding the current time, and the motion model is determined from data acquired by an inertial measurement unit and/or a wheel speed sensor of the vehicle;
wherein the first calculation subunit calculates the mapped positions of the visual features in the autonomous-driving electronic navigation map specifically by: taking the predicted pose calculated by the pose calculation unit as the initial value of the estimated pose, and calculating the mapped positions of the visual features in the autonomous-driving electronic navigation map according to the current value of the estimated pose and the positions of the visual features in the target images;
and the first calculation subunit calculates the projected positions of the target features in the target images specifically by: taking the predicted pose calculated by the pose calculation unit as the initial value of the estimated pose, and calculating the projected positions of the target features in the target images according to the current value of the estimated pose and the positions of the target features in the autonomous-driving electronic navigation map.
As an optional implementation, in the second aspect of the embodiments of the present invention, the visual features comprise at least one of: lane lines, parking-space lines, parking-space corner points, and lane arrows.
A third aspect of the embodiments of the present invention discloses a vehicle-mounted terminal, comprising:
a memory storing executable program code; and
a processor coupled to the memory;
wherein the processor calls the executable program code stored in the memory to execute any of the methods disclosed in the first aspect of the embodiments of the present invention.
A fourth aspect of the embodiments of the present invention discloses a computer-readable storage medium storing a computer program, wherein the computer program causes a computer to execute any of the methods disclosed in the first aspect of the embodiments of the present invention.
A fifth aspect of the embodiments of the present invention discloses a computer program product which, when run on a computer, causes the computer to execute any of the methods disclosed in the first aspect of the embodiments of the present invention.
Compared with the prior art, the inventive points of the present invention achieve the following beneficial effects:
1. A surround-view system composed of multiple cameras captures the environment around the vehicle. The positions of environmental features in the images, combined with the positions of those features in the autonomous-driving electronic navigation map, determine the pose of the vehicle at the current time. By implementing the embodiments of the present invention, vehicle positioning can be completed using visual information alone; moreover, with the surround-view camera arrangement, a single round of image acquisition captures the environment all around the vehicle, so the positioning precision is higher.
2. A neural network identifies the preset visual features in the top-view stitched image, and these visual features are used for positioning. Compared with traditional image recognition algorithms, recognizing visual features with a recognition network makes feature identification more accurate. The captured target images are stitched first and visual feature extraction is then performed on the top-view stitched image, which is faster than extracting features from each image one by one.
3. A motion model is built from the data acquired by the inertial measurement unit and the wheel speed sensor; combined with the pose of the vehicle at the previous moment, it predicts the pose of the vehicle at the current time. With the predicted pose as the initial value, the vehicle pose is adjusted iteratively until a high-precision positioning result is obtained, which further improves the accuracy of real-time vehicle positioning.
Brief description of the drawings
In order to explain the technical solutions in the embodiments of the present invention more clearly, the drawings needed for the embodiments are briefly described below. Obviously, the drawings described below show only some embodiments of the present invention; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1 is a schematic flowchart of a vehicle positioning method based on a vehicle-mounted surround-view system disclosed by an embodiment of the present invention;
Fig. 2 is a schematic flowchart of another vehicle positioning method based on a vehicle-mounted surround-view system disclosed by an embodiment of the present invention;
Fig. 3 is an example of an autonomous-driving electronic navigation map of a parking lot disclosed by an embodiment of the present invention;
Fig. 4 is a schematic structural diagram of a vehicle-mounted terminal disclosed by an embodiment of the present invention;
Fig. 5 is a schematic structural diagram of another vehicle-mounted terminal disclosed by an embodiment of the present invention;
Fig. 6 is a schematic structural diagram of yet another vehicle-mounted terminal disclosed by an embodiment of the present invention.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention will be described clearly and completely below in combination with the drawings. Obviously, the described embodiments are only a part of the embodiments of the present invention, not all of them. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative effort shall fall within the protection scope of the present invention.
It should be noted that the terms "comprise" and "have" and any variations thereof in the embodiments and drawings of the present invention are intended to cover non-exclusive inclusion. For example, a process, method, system, product, or device comprising a series of steps or units is not limited to the listed steps or units, but optionally further comprises steps or units that are not listed, or other steps or units inherent to the process, method, product, or device.
The embodiments of the present invention disclose a vehicle positioning method based on a vehicle-mounted surround-view system, and a vehicle-mounted terminal, which can improve the accuracy of vehicle positioning during automatic driving. They are described in detail below.
Embodiment one
Referring to Fig. 1, Fig. 1 is a schematic flowchart of a vehicle positioning method based on a vehicle-mounted surround-view system disclosed by an embodiment of the present invention. The method described in Fig. 1 is suitable for vehicle-mounted terminals such as on-board computers and industrial personal computers (IPC); the embodiments of the present invention impose no limitation on this. As shown in Fig. 1, the vehicle positioning method based on a vehicle-mounted surround-view system may comprise the following steps:
101. The vehicle-mounted terminal identifies preset visual features in multiple target images captured at the current time by multiple cameras of a vehicle.
In the embodiment of the present invention, the vehicle-mounted terminal can exchange data with the cameras; each camera can acquire images at a certain frequency and transmit the acquired images to the vehicle-mounted terminal for processing. The multiple cameras may specifically be cameras arranged on the front, rear, left, and right sides of the vehicle, with the field of view of each camera covering at least the ground below it. At the current time, each camera can capture at least one target image, so the total number of target images captured by the multiple cameras is at least the number of cameras. As an optional implementation, the cameras may be fisheye cameras; the field of view (FOV) of a fisheye camera is large, so the target image captured by a single fisheye camera includes as much of the vehicle's surroundings as possible, increasing the amount of information in the target image. This is also one of the inventive points of the present invention.
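As a rough illustration of why a fisheye lens widens the usable field of view, the common equidistant fisheye model maps incidence angle θ to image radius r = f·θ, whereas a pinhole model maps it to r = f·tan θ, which diverges as θ approaches 90°. A sketch under the equidistant-model assumption (the focal length is arbitrary):

```python
import math

def pinhole_radius(f, theta):
    """Image radius of a ray at incidence angle theta under a pinhole model."""
    return f * math.tan(theta)

def fisheye_radius(f, theta):
    """Image radius under the equidistant fisheye model r = f * theta."""
    return f * theta

f = 300.0                  # focal length in pixels (arbitrary)
wide = math.radians(85.0)  # a ray near the edge of a ~170-degree FOV

# The fisheye keeps extreme angles at a finite, modest radius,
# while the pinhole projection blows up near 90 degrees.
r_fish = fisheye_radius(f, wide)
r_pin = pinhole_radius(f, wide)
```

This is why a single fisheye camera can cover nearly a full hemisphere around its mounting point on a sensor of practical size, at the cost of distortion that the stitching step must correct.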
In the embodiment of the present invention, a visual feature can be an image semantic feature that has a particular meaning, is selected empirically, and facilitates vehicle positioning. As an optional implementation, the visual features can be lane lines, parking-space lines, parking-space corner points, lane arrows, and the like on the ground; the embodiments of the present invention impose no limitation on this. Each target image may contain multiple visual features or none at all. Therefore, the embodiment of the present invention performs vehicle positioning with multiple target images, which can complement each other's information and improve the stability of the positioning system. This is also one of the inventive points of the present invention.
In addition, the vehicle-mounted terminal performs visual feature recognition on every target image captured by each camera. Optionally, the vehicle-mounted terminal may identify the visual features in a target image through image recognition algorithms such as deep learning or image segmentation; the embodiments of the present invention impose no limitation on this.
102. The vehicle-mounted terminal identifies, from an autonomous-driving electronic navigation map, target features that match the visual features.
In the embodiment of the present invention, the autonomous-driving electronic navigation map is an electronic map constructed in advance; in particular, it can be a two-dimensional image built up from multiple image semantic features. For example, referring to Fig. 3, Fig. 3 is an autonomous-driving electronic navigation map of a parking lot disclosed by an embodiment of the present invention; the map can include features such as the lane lines, parking-space lines, and parking-space corner points of the parking lot.
As an optional implementation, the position of each image semantic feature in the autonomous-driving electronic navigation map can be expressed as absolute coordinates in the world coordinate system, with the specific coordinate values obtained through Global Positioning System (GPS) measurement. As another optional implementation, the position of each image semantic feature in the autonomous-driving electronic navigation map can also be expressed as relative coordinates, that is, the position of each image semantic feature relative to a preset coordinate origin, which can be chosen to suit the scene. For example, when constructing the autonomous-driving electronic navigation map of a parking lot, the entrance of the parking lot can be set as the coordinate origin, and for each image semantic feature used to construct the map, its position relative to the entrance is measured.
In the embodiment of the present invention, for an image semantic feature in the automatic driving electronic navigation map, when the vehicle passes the position of that feature, a camera of the vehicle may capture a target image containing it. Therefore, when identifying the target feature that matches a visual feature in the target image, the car-mounted terminal may use an image matching algorithm such as Scale-Invariant Feature Transform (SIFT) or Speeded-Up Robust Features (SURF) to identify, from the automatic driving electronic navigation map, the target feature matching the visual feature in the target image; the embodiment of the present invention places no limitation on the specific matching algorithm.
103. The car-mounted terminal determines the pose of the vehicle at the current time according to the position of the target feature in the automatic driving electronic navigation map and the position of the visual feature in the target image.
In the embodiment of the present invention, the pose of the vehicle at the current time includes the position and the attitude of the vehicle at the current time. It can be understood that if the position of each image semantic feature in the automatic driving electronic navigation map is represented by absolute coordinates, the position of the vehicle at the current time is correspondingly represented by absolute coordinates; if the position of each image semantic feature in the map is represented by relative coordinates, the position of the vehicle at the current time is likewise represented by relative coordinates.
For a given target feature in the automatic driving electronic navigation map, the visual feature it matches is in fact the projection of that target feature onto the imaging plane of a camera (i.e., onto the captured target image), and the exact projected position depends on the pose of the vehicle at the moment the camera captured the target image. Therefore, the pose of the vehicle at the current time can be determined from the position of the target feature in the automatic driving electronic navigation map and the position of the visual feature in the target image. From the multiple visual features identified in the target images and the positions, in the automatic driving electronic navigation map, of the target features matching each visual feature, multiple candidate poses of the vehicle at the current time can be computed, and the car-mounted terminal can then derive one final pose of the vehicle at the current time from these multiple poses.
As it can be seen that in the method depicted in fig. 1, car-mounted terminal can identify multiple target figures that multiple cameras take
Visual signature as in, and identified and above-mentioned visual signature phase from the automatic Pilot map of navigation electronic constructed in advance
Matched multiple target signatures, so as to according to position of the target signature in automatic Pilot map of navigation electronic and vision
The position of feature in the target image determines that vehicle, can be the case where not depending on satellite positioning signal in the pose at current time
Under obtain the higher vehicle location result of precision.
Embodiment two
Referring to Fig. 2, Fig. 2 shows another vehicle positioning method based on a vehicle-mounted viewing system disclosed by an embodiment of the present invention. As shown in Fig. 2, the vehicle positioning method based on the vehicle-mounted viewing system may comprise the following steps:
201. The car-mounted terminal obtains multiple target images captured by the multiple cameras at the current time.
202. The car-mounted terminal stitches the multiple target images to obtain a top-view stitched image.
In the embodiment of the present invention, the car-mounted terminal projects each target image onto the ground plane according to a certain mapping rule, and stitches the multiple target images according to the overlapping regions that may exist between them, obtaining a top-view stitched image containing 360-degree environmental information centered on the vehicle.
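The patent leaves the mapping rule open; a minimal sketch of the ground-plane projection step, assuming a per-camera homography H known from calibration (the matrix below is a toy scaling, not a real calibration), might look like:

```python
import numpy as np

def warp_to_ground(pixels, H):
    """Project image pixels onto the ground plane via a homography H
    (a 3x3 matrix assumed known from camera calibration).

    pixels: (N, 2) array of (u, v) image coordinates
    Returns (N, 2) ground-plane coordinates, e.g. in metres.
    """
    pts = np.hstack([pixels, np.ones((len(pixels), 1))])  # homogeneous coordinates
    g = pts @ H.T                                         # apply the homography
    return g[:, :2] / g[:, 2:3]                           # dehomogenise

# toy homography: pure scaling, 100 pixels per metre
H = np.array([[0.01, 0.0, 0.0],
              [0.0, 0.01, 0.0],
              [0.0, 0.0, 1.0]])
print(warp_to_ground(np.array([[100.0, 200.0]]), H))  # [[1. 2.]]
```

Warping each of the four camera images this way and blending the overlapping ground regions yields the vehicle-centered top-view stitched image described above.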
203. The car-mounted terminal inputs the top-view stitched image into a semantic feature detection model and, based on the output of the semantic feature detection model, determines the visual features in the top-view stitched image.
In the embodiment of the present invention, the semantic feature detection model may be a neural network model suited to deep learning; the model is obtained by training with sample images annotated with visual features as the model input.
The neural network model is as follows: the network adopts an Encoder-Decoder structure, mainly comprising two parts: an encoding (Encoder) part and a decoding (Decoder) part.
In the embodiment of the present invention, the stitched image is input into the network, where the encoder network extracts image features mainly through convolution and pooling layers. By training on large-scale annotated samples and adjusting the network parameters, the encoder network learns to accurately distinguish semantic features from non-semantic features. After extracting features through two convolutions, the encoder network down-samples through a pooling layer. Cascading four such two-convolution-plus-one-pooling blocks gives the top-layer neurons of the encoder network a receptive field large enough to cover semantic primitives of the different scales in the examples of the present invention.
The decoder network is structurally symmetric to the encoder network, with the pooling layers of the encoder replaced by up-sampling layers. In the decoding part, four up-sampling steps enlarge the encoded features back to the original image size, thereby realizing per-pixel semantic classification. Up-sampling is implemented by deconvolution; this operation recovers most of the information in the input data but still causes some information loss, so low-level features are introduced to supplement the details lost during decoding. These low-level features come mainly from the encoder's convolutional layers at different scales: the features extracted by an encoder convolutional layer at a given scale are merged with the deconvolution output at the same scale to produce more accurate feature maps. Network training mainly uses cross-entropy to measure the difference between the network's predictions and the ground truth. The cross-entropy formula is as follows:

C = -(1/n) Σ_x [ y ln(a) + (1 − y) ln(1 − a) ]

where y is the label of a pixel, i.e., whether the pixel is a semantic primitive or a non-semantic element (1 generally denotes a semantic primitive and 0 a non-semantic element); n is the total number of pixels in the image; x is the input; and a is the neuron output, a = σ(z) with z = Σ_j w_j x_j + b. This loss overcomes the problem of slow network weight updates. After the network model is trained, in actual use in the embodiment of the present invention the network makes a prediction for every pixel of the input image and outputs an attribute value of 0 or 1 for each pixel; connected blocks of pixels labeled 1 are meaningful semantic image structures, thereby realizing semantic segmentation of the image. The above network structure is specially designed for semantic feature extraction from stitched images and guarantees accurate semantic feature extraction, which belongs to one of the inventive points of the present invention. In addition, first stitching the target images and then extracting image semantic features from the top-view stitched image, rather than extracting the image semantic features from each target image one by one, improves the efficiency of image semantic feature extraction, which also belongs to one of the inventive points of the present invention.
204. The car-mounted terminal identifies, from the automatic driving electronic navigation map, the target features that match the visual features.
205. Taking the pose of the vehicle at the previous moment as a reference, the car-mounted terminal computes, in combination with a motion model, the estimated pose of the vehicle at the current time.
In the embodiment of the present invention, the car-mounted terminal may compute the positioning pose of the vehicle periodically at a certain frequency. The above motion model may be determined from data collected by the vehicle's Inertial Measurement Unit (IMU) and/or wheel-speed sensor. A six-axis inertial measurement unit can measure the three-axis acceleration and the angular velocity of the vehicle, and the wheel-speed sensor can measure the wheel rotation speed of the vehicle; starting from the pose of the vehicle at the previous moment, the measurement data of the IMU and/or wheel-speed sensor are integrated over time, and the estimated pose of the vehicle at the current time can be computed. By implementing this embodiment, the present invention constructs a motion model based on IMU data and/or wheel-speed data. Compared with a constant-velocity model (i.e., one that assumes the relative motion speed of the vehicle is identical at two adjacent moments), the precision of this motion model is higher, and it better characterizes the actual motion of the vehicle. However, since the integration error accumulates gradually as time increases, in order to further improve the precision of the final vehicle positioning pose, the present invention can use the estimated pose computed by the above motion model to determine an estimated range of the vehicle positioning pose, and then determine a higher-precision positioning pose within that range. This also belongs to one of the inventive points of the present invention.
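The patent does not fix the exact form of the integration; a minimal dead-reckoning sketch, assuming a planar unicycle model with wheel-speed and gyroscope inputs (a simplification of the six-axis IMU case), might look like:

```python
import math

def dead_reckon(pose, v, omega, dt):
    """Propagate a 2D pose (x, y, heading) one time step.

    v:     forward speed from the wheel-speed sensor (m/s)
    omega: yaw rate from the IMU gyroscope (rad/s)
    dt:    time step (s)
    """
    x, y, theta = pose
    x += v * math.cos(theta) * dt   # integrate position along the heading
    y += v * math.sin(theta) * dt
    theta += omega * dt             # integrate heading from the yaw rate
    return (x, y, theta)

# drive straight east at 2 m/s for 1 s
pose = dead_reckon((0.0, 0.0, 0.0), v=2.0, omega=0.0, dt=1.0)
print(pose)  # (2.0, 0.0, 0.0)
```

Each call accumulates a small integration error, which is why the text uses this prediction only to bound the search range for the final pose rather than as the positioning result itself.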
206. The car-mounted terminal takes the value of the above estimated pose as the initial value of the estimation pose.
207. The car-mounted terminal computes, according to the current value of the estimation pose and the position of the visual feature in the target image, the mapping position of the visual feature when mapped into the automatic driving electronic navigation map.
In the embodiment of the present invention, the first time the car-mounted terminal computes the mapping position of the visual feature, it uses the value of the estimated pose determined in step 205 as the initial value of the estimation pose; when the car-mounted terminal, executing step 209 below, judges that the error between the mapping position and the actual position is greater than or equal to the specified threshold and has adjusted the value of the estimation pose, the current value of the estimation pose used when step 207 is executed again is the adjusted value. In other possible embodiments, the car-mounted terminal may use an arbitrary value as the initial value of the estimation pose.
208. The car-mounted terminal computes the first error between the mapping position of the visual feature and the actual position of the target feature in the automatic driving electronic navigation map.
209. The car-mounted terminal judges whether the first error is less than the specified threshold; if so, step 210 is executed; if not, the value of the estimation pose is adjusted and step 207 is executed again.
In the embodiment of the present invention, when the error is less than the specified threshold, the error can be considered acceptable, and the positioning precision at that point is high.
210. The car-mounted terminal determines the pose of the vehicle at the current time according to the current value of the estimation pose.
In the embodiment of the present invention, by executing the above steps 206 to 209, the car-mounted terminal can determine the pose of the vehicle at the current time from the position of the target feature in the automatic driving electronic navigation map and the position of the visual feature in the target image. Specifically, the value of the estimated pose determined in step 205 can be used as the initial value of the pose information P_i of the vehicle at moment i, and the value of P_i is iteratively adjusted until the error between the mapping position of the visual feature in the map and the actual position of the target feature is minimal; the value of P_i at minimum error is determined as the pose of the vehicle at the current time.
In the embodiment of the present invention, as another optional embodiment, the above steps 207 to 209 can be replaced with the following steps:
The car-mounted terminal computes, according to the value of the estimation pose and the position of the target feature in the automatic driving electronic navigation map, the projected position of the target feature when projected into the target image;
The car-mounted terminal computes the second error between the projected position of the target feature and the actual position of the visual feature in the target image;
The car-mounted terminal judges whether the second error is less than the specified threshold; if so, step 210 is executed; if not, the value of the estimation pose is adjusted and the step of computing, according to the value of the estimation pose and the position of the target feature in the automatic driving electronic navigation map, the projected position of the target feature in the target image is executed again.
The above steps can be expressed as the following mathematical model:

P_i = argmin_{P_i} Σ_j || X_ij − f(P_i, A_j) ||

where P_i is the pose information of the vehicle at moment i, A_j is the position of the j-th target feature in the automatic driving electronic navigation map, X_ij is the position, in the target image, of the visual feature matching the j-th target feature, and f(·) is the projection equation used to project A_j onto the imaging plane at pose P_i, turning its projection into an expression of the same form as X_ij, so that the error between the observation mapped from the current value of the estimation pose and the actual observation can be obtained. The camera pose (i.e., the vehicle pose) and the observations are optimized by nonlinear optimization to iteratively reduce the error and obtain the maximum-likelihood pose. That is, in the embodiment of the present invention, the value of the estimated pose determined in step 205 can also be used as the initial value of the pose information P_i of the vehicle at moment i, and the value of P_i is iteratively adjusted until the error between the projected positions of the target features in the image and the actual positions of the visual features is minimal; the value of P_i at minimum error is determined as the pose of the vehicle at the current time. In combination with the above mathematical model, it can be seen that after determining the estimated range of the positioning pose, the embodiment of the present invention further determines a higher-precision vehicle positioning pose using a numerical optimization algorithm, which also belongs to one of the inventive points of the present invention.
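As a toy planar instance of this argmin model (not the patent's camera projection: here the hypothetical f(P, A) simply expresses map points in the vehicle frame, and the nonlinear optimizer is a basic Gauss-Newton loop), the iterative pose refinement can be sketched as:

```python
import numpy as np

def project(pose, A):
    """Hypothetical planar f(P, A): express map points A (N, 2) in the
    vehicle frame given pose P = (x, y, theta)."""
    x, y, th = pose
    R = np.array([[np.cos(th), -np.sin(th)],
                  [np.sin(th),  np.cos(th)]])
    return (A - np.array([x, y])) @ R   # rows are R^T (A_j - t)

def residuals(pose, A, X):
    return (project(pose, A) - X).ravel()

def optimize_pose(pose0, A, X, iters=20):
    """Gauss-Newton on P = argmin sum_j || X_j - f(P, A_j) ||."""
    pose = np.asarray(pose0, dtype=float)
    for _ in range(iters):
        r = residuals(pose, A, X)
        # numerical Jacobian of the residuals w.r.t. the 3 pose parameters
        J = np.empty((len(r), 3))
        for k in range(3):
            d = np.zeros(3); d[k] = 1e-6
            J[:, k] = (residuals(pose + d, A, X) - r) / 1e-6
        pose -= np.linalg.lstsq(J, r, rcond=None)[0]  # Gauss-Newton step
    return pose

# ground-truth pose and three map features
true_pose = np.array([1.0, 2.0, 0.3])
A = np.array([[5.0, 1.0], [2.0, 6.0], [-3.0, 4.0]])
X = project(true_pose, A)                    # simulated observations
est = optimize_pose([0.0, 0.0, 0.0], A, X)   # start from a rough initial value
print(np.round(est, 3))                      # ~[1. 2. 0.3]
```

This mirrors the logic of steps 206 to 210: the motion-model prediction would seed `pose0`, and the loop plays the role of the adjust-and-recheck iteration, terminating when the residual error is small enough.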
As it can be seen that in the method depicted in fig. 2, trained neural network model can be used from vertical view in car-mounted terminal
Visual signature is identified in spliced map, and preset visual signature can be fast and accurately extracted from image.Further, vehicle
Mounted terminal can also establish motion model according to the measurement data of Inertial Measurement Unit and/or wheel speed meter, in conjunction with motion model meter
The pose of estimating at vehicle current time is calculated, and using the value for estimating pose as the initial value of estimation pose, passes through continuous iteration
The value of adjustment estimation pose, finally determines the higher vehicle location pose of precision.
Embodiment three
Referring to Fig. 4, Fig. 4 is a structural schematic diagram of a car-mounted terminal disclosed by an embodiment of the present invention. As shown in Fig. 4, the car-mounted terminal may include:
A recognition unit 401, configured to identify preset visual features in the multiple target images captured at the current time by the multiple cameras of the vehicle.
In the embodiment of the present invention, the above multiple cameras may specifically be cameras respectively arranged in the four directions of front, rear, left, and right of the vehicle, and the viewfinder range of each camera at least includes the ground below that camera. A visual feature may be an image semantic feature screened by experience to carry a particular meaning and facilitate vehicle positioning. As an optional embodiment, the visual features may be lane lines, parking-space lines, parking-space corner points, lane arrows, and the like on the ground; the embodiment of the present invention places no limitation on this. A target image may contain multiple visual features, or may contain none; therefore, the embodiment of the present invention performs vehicle positioning with multiple target images, between which information can complement each other, improving the stability of the positioning system. Optionally, the recognition unit 401 can identify visual features from the target images by image recognition algorithms such as deep learning or image segmentation.
A matching unit 402, configured to identify, from the automatic driving electronic navigation map, the target features that match the visual features identified by the recognition unit 401.
In the embodiment of the present invention, the matching unit 402 may specifically use an image matching algorithm such as SIFT or SURF to identify, from the automatic driving electronic navigation map, the target features matching the visual features in the target images; the embodiment of the present invention places no limitation on the specific matching algorithm.
A determination unit 403, configured to determine the pose of the vehicle at the current time according to the positions, in the automatic driving electronic navigation map, of the target features identified by the matching unit 402 and the positions, in the target images, of the visual features identified by the recognition unit 401.
In the embodiment of the present invention, according to the multiple visual features identified from the target images and the positions, in the automatic driving electronic navigation map, of the target features matching each visual feature, the determination unit 403 can compute multiple candidate poses of the vehicle at the current time; further, the determination unit 403 can derive one final pose of the vehicle at the current time from these multiple poses.
As can be seen, by implementing the car-mounted terminal shown in Fig. 4, the visual features in the multiple target images captured by the multiple cameras can be identified, and the multiple target features matching those visual features can be identified from the pre-constructed automatic driving electronic navigation map, so that the pose of the vehicle at the current time can be determined from the positions of the target features in the map and the positions of the visual features in the target images, and a high-precision vehicle positioning result can be obtained without relying on satellite positioning signals.
Embodiment four
Referring to Fig. 5, Fig. 5 is a structural schematic diagram of another car-mounted terminal disclosed by an embodiment of the present invention, obtained by optimizing the car-mounted terminal shown in Fig. 4. As shown in Fig. 5, in this car-mounted terminal, the above recognition unit 401 may include:
An obtaining subunit 4011, configured to obtain the multiple target images captured by the multiple cameras at the current time.
A stitching subunit 4012, configured to stitch the multiple target images obtained by the obtaining subunit 4011 to obtain a top-view stitched image.
A recognition subunit 4013, configured to input the top-view stitched image obtained by the stitching subunit 4012 into the semantic feature detection model and, based on the output of the semantic feature detection model, determine the visual features in the top-view stitched image. The above semantic feature detection model is a neural network model trained with sample images annotated with visual features as the model input; as described in embodiment two, the embodiment of the present invention does not repeat the structure of the neural network model.
Optionally, in the embodiment of the present invention, the above determination unit 403 may include:
A first computation subunit 4031, configured to compute, according to the value of the estimation pose and the positions, in the target images, of the visual features identified by the recognition unit 401, the mapping positions of the visual features when mapped into the automatic driving electronic navigation map; or to compute, according to the value of the estimation pose and the positions, in the automatic driving electronic navigation map, of the target features identified by the matching unit 402, the projected positions of the target features when projected into the target images;
A second computation subunit 4032, configured to compute the first error between the mapping positions of the visual features determined by the first computation subunit 4031 and the actual positions, in the automatic driving electronic navigation map, of the target features identified by the matching unit 402; or to compute the second error between the projected positions of the target features determined by the first computation subunit 4031 and the actual positions, in the target images, of the visual features identified by the recognition unit 401;
A judgment subunit 4033, configured to judge whether the first error or the second error computed by the second computation subunit is less than the specified threshold;
An adjustment subunit 4034, configured to, when the judgment subunit 4033 judges that the above first error is greater than or equal to the specified threshold, adjust the value of the estimation pose and trigger the first computation subunit 4031 to perform again the operation of computing, according to the value of the estimation pose and the positions of the visual features in the target images, the mapping positions of the visual features in the automatic driving electronic navigation map; or, when the judgment subunit 4033 judges that the above second error is greater than or equal to the specified threshold, to adjust the value of the estimation pose and trigger the first computation subunit 4031 to perform again the operation of computing, according to the value of the estimation pose and the positions, in the automatic driving electronic navigation map, of the target features identified by the matching unit 402, the projected positions of the target features in the target images;
A determination subunit 4035, configured to determine the pose of the vehicle at the current time according to the current value of the estimation pose when the judgment subunit 4033 judges that the above first error or second error is less than the specified threshold.
Still optionally, the car-mounted terminal shown in Fig. 5 may further include:
A pose computing unit 404, configured to, before the first computation subunit 4031 computes the mapping positions of the visual features in the automatic driving electronic navigation map according to the value of the estimation pose and the positions of the visual features in the target images, compute the estimated pose of the vehicle at the current time, taking the pose of the vehicle at the previous moment as a reference and in combination with the motion model; wherein the above previous moment is a moment before the current time, and the above motion model is determined from data collected by the vehicle's inertial measurement unit and/or wheel-speed sensor.
In the embodiment of the present invention, the inertial measurement unit can measure the three-axis acceleration and angular velocity of the vehicle, and the wheel-speed sensor can measure the wheel rotation speed of the vehicle; the pose computing unit 404 can specifically integrate the measurement data of the inertial measurement unit and/or wheel-speed sensor over time to compute the estimated pose of the vehicle at the current time.
Correspondingly, the manner in which the above first computation subunit 4031 computes, according to the value of the estimation pose and the positions of the visual features in the target images, the mapping positions of the visual features in the automatic driving electronic navigation map is specifically:
The first computation subunit 4031 takes the value of the estimated pose computed by the pose computing unit 404 as the initial value of the estimation pose, and computes, according to the current value of the estimation pose and the positions of the visual features in the target images, the mapping positions of the visual features when mapped into the automatic driving electronic navigation map;
And the manner in which the first computation subunit 4031 computes, according to the value of the estimation pose and the positions of the target features in the automatic driving electronic navigation map, the projected positions of the target features in the target images may specifically be:
The first computation subunit 4031 takes the value of the estimated pose computed by the pose computing unit 404 as the initial value of the estimation pose, and computes, according to the current value of the estimation pose and the positions of the target features in the automatic driving electronic navigation map, the projected positions of the target features when projected into the target images.
As can be seen, in the embodiment of the present invention, the pose computing unit 404 can compute the value of the estimated pose; the first computation subunit 4031 takes that value as the initial value of the estimation pose, and through the corresponding operations performed by the first computation subunit 4031, the judgment subunit 4033, and the adjustment subunit 4034, the value of the estimation pose is iteratively adjusted until the error between the mapping positions of the visual features and the actual positions of the target features is minimal. At minimum error, the determination subunit 4035 determines the pose of the vehicle at the current time according to the current value of the estimation pose, so that a higher-precision vehicle positioning pose can be computed.
In conclusion implementing car-mounted terminal shown in fig. 5, trained neural network model can be used and spelled from vertical view
Visual signature is identified in map interlinking, to fast and accurately extract preset visual signature from image.Further, may be used also
To establish motion model according to the measurement data in terms of Inertial Measurement Unit and/or wheel speed, vehicle is calculated in conjunction with motion model and is worked as
The preceding moment estimates pose, and using the value for estimating pose as the initial value of estimation pose, estimates position by continuous iteration adjustment
The value of appearance finally determines the higher vehicle location pose of precision.
Embodiment five
Referring to Fig. 6, Fig. 6 is a structural schematic diagram of another car-mounted terminal disclosed by an embodiment of the present invention. As shown in Fig. 6, the car-mounted terminal may include:
A memory 601 storing executable program code;
A processor 602 coupled with the memory 601;
wherein the processor 602 calls the executable program code stored in the memory 601 to execute the vehicle positioning method based on the vehicle-mounted viewing system shown in Fig. 1 or Fig. 2.
It should be noted that the car-mounted terminal shown in Fig. 6 may also include components not shown, such as a power supply, a loudspeaker, a screen, a Wi-Fi module, a Bluetooth module, and sensors; this embodiment does not elaborate on them.
An embodiment of the present invention discloses a computer-readable storage medium storing a computer program, wherein the computer program causes a computer to execute the vehicle positioning method based on the vehicle-mounted viewing system shown in Fig. 1 or Fig. 2.
An embodiment of the present invention discloses a computer program product comprising a non-transitory computer-readable storage medium storing a computer program, the computer program being operable to cause a computer to execute the vehicle positioning method based on the vehicle-mounted viewing system shown in Fig. 1 or Fig. 2.
It should be understood that "one embodiment" or "an embodiment" mentioned throughout the specification means that a particular feature, structure, or characteristic related to the embodiment is included in at least one embodiment of the present invention. Therefore, "in one embodiment" or "in an embodiment" appearing throughout the specification does not necessarily refer to the same embodiment. In addition, these particular features, structures, or characteristics can be combined in any suitable manner in one or more embodiments. Those skilled in the art should also know that the embodiments described in this specification are optional embodiments, and the actions and modules involved are not necessarily required by the present invention.
In the various embodiments of the present invention, it should be understood that the magnitude of the sequence numbers of the above processes does not imply an order of execution; the execution order of each process should be determined by its function and internal logic, and should not constitute any limitation on the implementation process of the embodiments of the present invention.
The units described above as separate members may or may not be physically separate, and the members shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, the functional units in the various embodiments of the present invention may be integrated in one processing unit, or each unit may exist physically alone, or two or more units may be integrated in one unit. The above integrated unit can be realized either in the form of hardware or in the form of a software functional unit.
If the above integrated unit is realized in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer-accessible memory. Based on this understanding, the technical solution of the present invention, in essence, or the part contributing to the prior art, or all or part of the technical solution, can be embodied in the form of a software product stored in a memory and including several instructions to cause a computer device (which may be a personal computer, a server, or a network device, and specifically may be a processor in a computer device) to execute some or all of the steps of each method of the embodiments of the present invention.
Those of ordinary skill in the art will appreciate that all or part of the steps in the various methods of the above embodiments can be completed by a program instructing the relevant hardware, and the program can be stored in a computer-readable storage medium. The storage medium includes Read-Only Memory (ROM), Random Access Memory (RAM), Programmable Read-Only Memory (PROM), Erasable Programmable Read-Only Memory (EPROM), One-Time Programmable Read-Only Memory (OTPROM), Electrically-Erasable Programmable Read-Only Memory (EEPROM), Compact Disc Read-Only Memory (CD-ROM) or other optical disc memory, magnetic disk memory, magnetic tape memory, or any other computer-readable medium that can be used to carry or store data.
The vehicle positioning method based on a vehicle-mounted surround-view system and the vehicle-mounted terminal disclosed in the embodiments of the present invention have been described in detail above. Specific examples are used herein to illustrate the principles and implementations of the invention, and the description of the above embodiments is intended only to help in understanding the method of the invention and its core idea. Meanwhile, those of ordinary skill in the art may, following the idea of the invention, make changes to the specific implementation and the scope of application. In summary, the contents of this specification should not be construed as limiting the invention.
Claims (10)
1. A vehicle positioning method based on a vehicle-mounted surround-view system, characterized by comprising:
identifying preset visual features in a plurality of target images captured at a current time by a plurality of cameras of a vehicle;
identifying, from an autonomous driving navigation electronic map, target features that match the visual features; and
determining a pose of the vehicle at the current time according to positions of the target features in the autonomous driving navigation electronic map and positions of the visual features in the target images.
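If the matched features of claim 1 are available as 2D point pairs (feature positions in the vehicle frame recovered from the images, and their matched positions in the map), the pose determination reduces to a rigid 2D alignment. The following is a minimal sketch under that assumption, using a closed-form least-squares (Kabsch/Umeyama) solver; the claim itself does not fix the solver, and the function name is illustrative.

```python
import numpy as np

def solve_se2_pose(pts_vehicle, pts_map):
    """Estimate the vehicle pose (x, y, yaw) that maps feature points
    expressed in the vehicle frame onto their matched map positions,
    via closed-form least-squares rigid alignment in 2D."""
    pv = np.asarray(pts_vehicle, dtype=float)
    pm = np.asarray(pts_map, dtype=float)
    cv, cm = pv.mean(axis=0), pm.mean(axis=0)
    H = (pv - cv).T @ (pm - cm)                 # 2x2 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against a reflection
    R = Vt.T @ np.diag([1.0, d]) @ U.T          # rotation: vehicle -> map
    t = cm - R @ cv                             # translation: vehicle origin in map
    yaw = np.arctan2(R[1, 0], R[0, 0])
    return t[0], t[1], yaw
```

With noiseless matches the recovered pose is exact up to floating-point precision; with noisy detections it is the least-squares optimum over all matched pairs.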
2. The vehicle positioning method based on a vehicle-mounted surround-view system according to claim 1, wherein the identifying preset visual features in the plurality of target images captured at the current time by the plurality of cameras of the vehicle comprises:
obtaining the plurality of target images captured by the plurality of cameras at the current time;
stitching the plurality of target images to obtain a top-view stitched image; and
inputting the top-view stitched image into a semantic feature detection model, and determining the visual features in the top-view stitched image based on an output result of the semantic feature detection model;
wherein the semantic feature detection model is a neural network model trained by using sample images annotated with the visual features as model input.
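The stitching step of claim 2 is commonly realized by warping each camera image onto a shared ground-plane (bird's-eye) canvas through a per-camera homography obtained from offline intrinsic/extrinsic calibration. A minimal sketch under that assumption follows; the claim does not specify the warping method, and the homographies are assumed given.

```python
import numpy as np

def stitch_top_view(images, homographies, out_shape):
    """Warp each camera image onto a shared ground-plane canvas.

    homographies[i] maps top-view pixel coordinates to camera-i pixel
    coordinates (assumed obtained from offline calibration).
    Nearest-neighbour sampling; in overlap regions, later cameras
    overwrite earlier ones.
    """
    h_out, w_out = out_shape
    canvas = np.zeros((h_out, w_out), dtype=float)
    ys, xs = np.mgrid[0:h_out, 0:w_out]
    grid = np.stack([xs.ravel(), ys.ravel(), np.ones(xs.size)])  # 3 x N homogeneous
    for img, H in zip(images, homographies):
        uvw = H @ grid
        u = np.rint(uvw[0] / uvw[2]).astype(int)   # source column per canvas pixel
        v = np.rint(uvw[1] / uvw[2]).astype(int)   # source row per canvas pixel
        inside = (u >= 0) & (u < img.shape[1]) & (v >= 0) & (v < img.shape[0])
        canvas.ravel()[inside] = img[v[inside], u[inside]]
    return canvas
```

A production pipeline would additionally blend the overlap regions and interpolate bilinearly rather than overwriting with nearest-neighbour samples.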
3. The vehicle positioning method based on a vehicle-mounted surround-view system according to claim 1 or 2, wherein the determining the pose of the vehicle at the current time according to the positions of the target features in the autonomous driving navigation electronic map and the positions of the visual features in the target images comprises:
calculating, according to a value of an estimated pose and the positions of the visual features in the target images, mapped positions to which the visual features are mapped in the autonomous driving navigation electronic map;
calculating a first error between the mapped positions of the visual features and actual positions of the target features in the autonomous driving navigation electronic map;
judging whether the first error is less than a specified threshold;
when the first error is greater than or equal to the specified threshold, adjusting the value of the estimated pose, and returning to the step of calculating, according to the value of the estimated pose and the positions of the visual features in the target images, the mapped positions to which the visual features are mapped in the autonomous driving navigation electronic map; and
when the first error is less than the specified threshold, determining the pose of the vehicle at the current time according to the current value of the estimated pose;
or, the determining the pose of the vehicle at the current time according to the positions of the target features in the autonomous driving navigation electronic map and the positions of the visual features in the target images comprises:
calculating, according to the value of the estimated pose and the positions of the target features in the autonomous driving navigation electronic map, projected positions to which the target features are projected in the target images;
calculating a second error between the projected positions of the target features and actual positions of the visual features in the target images;
judging whether the second error is less than the specified threshold;
when the second error is greater than or equal to the specified threshold, adjusting the value of the estimated pose, and returning to the step of calculating, according to the value of the estimated pose and the positions of the target features in the autonomous driving navigation electronic map, the projected positions to which the target features are projected in the target images; and
when the second error is less than the specified threshold, determining the pose of the vehicle at the current time according to the current value of the estimated pose.
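The first alternative of claim 3 is an iterative loop: map the visual features into the map with the current estimated pose, measure the first error against the target features, and adjust the pose until the error falls below the threshold. A sketch of that loop follows, using numerical-gradient descent as a stand-in for the adjustment rule, which the claim leaves unspecified; the function name and hyperparameters are illustrative.

```python
import numpy as np

def refine_pose(pose0, pts_vehicle, pts_map, threshold=1e-6,
                step=1e-6, lr=0.1, max_iters=500):
    """Iteratively adjust an estimated pose (x, y, yaw) until the mean
    squared distance (the "first error") between the mapped visual
    features and the matched map features is below `threshold`."""
    pts_vehicle = np.asarray(pts_vehicle, dtype=float)
    pts_map = np.asarray(pts_map, dtype=float)

    def mapped(pose):
        # Map vehicle-frame feature points into the map frame with `pose`.
        x, y, yaw = pose
        R = np.array([[np.cos(yaw), -np.sin(yaw)],
                      [np.sin(yaw),  np.cos(yaw)]])
        return pts_vehicle @ R.T + np.array([x, y])

    def error(pose):
        return np.mean(np.sum((mapped(pose) - pts_map) ** 2, axis=1))

    pose = np.asarray(pose0, dtype=float)
    for _ in range(max_iters):
        if error(pose) < threshold:
            break
        # Central-difference numerical gradient over (x, y, yaw).
        grad = np.array([
            (error(pose + step * e) - error(pose - step * e)) / (2 * step)
            for e in np.eye(3)])
        pose = pose - lr * grad          # adjust the estimated pose
    return pose, error(pose)
```

In practice a Gauss-Newton or Levenberg-Marquardt step with an analytic Jacobian would replace plain gradient descent, but the loop structure, the threshold test, and the "adjust and recompute" cycle match the claimed iteration.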
4. The vehicle positioning method based on a vehicle-mounted surround-view system according to claim 3, wherein before the calculating, according to the value of the estimated pose and the positions of the visual features in the target images, the mapped positions to which the visual features are mapped in the autonomous driving navigation electronic map, the method further comprises:
calculating a predicted pose of the vehicle at the current time by taking the pose of the vehicle at a previous time as a reference and combining a motion model, and then performing the step of calculating, according to the value of the estimated pose and the positions of the visual features in the target images, the mapped positions to which the visual features are mapped in the autonomous driving navigation electronic map; wherein the previous time temporally precedes the current time, and the motion model is determined from data collected by an inertial measurement unit and/or a wheel speed meter of the vehicle;
and the calculating, according to the value of the estimated pose and the positions of the visual features in the target images, the mapped positions to which the visual features are mapped in the autonomous driving navigation electronic map comprises:
taking the value of the predicted pose as an initial value of the estimated pose; and
calculating, according to the current value of the estimated pose and the positions of the visual features in the target images, the mapped positions to which the visual features are mapped in the autonomous driving navigation electronic map;
and the calculating, according to the value of the estimated pose and the positions of the target features in the autonomous driving navigation electronic map, the projected positions to which the target features are projected in the target images comprises:
taking the value of the predicted pose as an initial value of the estimated pose; and
calculating, according to the current value of the estimated pose and the positions of the target features in the autonomous driving navigation electronic map, the projected positions to which the target features are projected in the target images.
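Claim 4 initializes the estimated pose from a motion model driven by the inertial measurement unit and/or the wheel speed meter. A minimal sketch using a unicycle (constant-speed, constant-yaw-rate) model follows; this is one plausible choice, not the patent's, since the claim leaves the motion model open, and the function name is illustrative.

```python
import numpy as np

def predict_pose(prev_pose, v, yaw_rate, dt):
    """Predict the pose at the current time from the pose at the previous
    time: forward speed `v` from the wheel speed meter, `yaw_rate` from
    the IMU gyroscope, over the interval `dt`. The result serves as the
    initial value of the estimated pose for the iterative refinement."""
    x, y, yaw = prev_pose
    yaw_new = yaw + yaw_rate * dt
    if abs(yaw_rate) < 1e-9:                      # straight-line limit
        x_new = x + v * dt * np.cos(yaw)
        y_new = y + v * dt * np.sin(yaw)
    else:                                         # exact circular-arc integration
        x_new = x + v / yaw_rate * (np.sin(yaw_new) - np.sin(yaw))
        y_new = y - v / yaw_rate * (np.cos(yaw_new) - np.cos(yaw))
    return x_new, y_new, yaw_new
```

Seeding the refinement with this prediction keeps the iterative optimizer in the basin of the true pose, which matters because the reprojection error is non-convex in yaw.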
5. The vehicle positioning method based on a vehicle-mounted surround-view system according to any one of claims 1 to 4, wherein the visual features include at least one or more of: lane lines, parking space lines, parking space corner points, and lane arrows.
6. A vehicle-mounted terminal, characterized by comprising:
a recognition unit, configured to identify preset visual features in a plurality of target images captured at a current time by a plurality of cameras of a vehicle;
a matching unit, configured to identify, from an autonomous driving navigation electronic map, target features that match the visual features; and
a determination unit, configured to determine a pose of the vehicle at the current time according to positions of the target features in the autonomous driving navigation electronic map and positions of the visual features in the target images.
7. The vehicle-mounted terminal according to claim 6, wherein the recognition unit comprises:
an obtaining subunit, configured to obtain the plurality of target images captured by the plurality of cameras at the current time;
a stitching subunit, configured to stitch the plurality of target images to obtain a top-view stitched image; and
a recognition subunit, configured to input the top-view stitched image into a semantic feature detection model and determine the visual features in the top-view stitched image based on an output result of the semantic feature detection model;
wherein the semantic feature detection model is a neural network model trained by using sample images annotated with the visual features as model input.
8. The vehicle-mounted terminal according to claim 6 or 7, wherein the determination unit comprises:
a first calculation subunit, configured to calculate, according to a value of an estimated pose and the positions of the visual features in the target images, mapped positions to which the visual features are mapped in the autonomous driving navigation electronic map; or to calculate, according to the value of the estimated pose and the positions of the target features in the autonomous driving navigation electronic map, projected positions to which the target features are projected in the target images;
a second calculation subunit, configured to calculate a first error between the mapped positions of the visual features and actual positions of the target features in the autonomous driving navigation electronic map; or to calculate a second error between the projected positions of the target features and actual positions of the visual features in the target images;
a judgment subunit, configured to judge whether the first error or the second error is less than a specified threshold;
an adjustment subunit, configured to, when the judgment subunit judges that the first error is greater than or equal to the specified threshold, adjust the value of the estimated pose and trigger the first calculation subunit to perform the operation of calculating, according to the value of the estimated pose and the positions of the visual features in the target images, the mapped positions to which the visual features are mapped in the autonomous driving navigation electronic map; or, when the judgment subunit judges that the second error is greater than or equal to the specified threshold, adjust the value of the estimated pose and trigger the first calculation subunit to perform the operation of calculating, according to the value of the estimated pose and the positions of the target features in the autonomous driving navigation electronic map, the projected positions to which the target features are projected in the target images; and
a determination subunit, configured to, when the judgment subunit judges that the first error or the second error is less than the specified threshold, determine the pose of the vehicle at the current time according to the current value of the estimated pose.
9. The vehicle-mounted terminal according to claim 8, further comprising:
a pose calculation unit, configured to calculate, before the first calculation subunit calculates the mapped positions to which the visual features are mapped in the autonomous driving navigation electronic map according to the value of the estimated pose and the positions of the visual features in the target images, a predicted pose of the vehicle at the current time by taking the pose of the vehicle at a previous time as a reference and combining a motion model; wherein the previous time temporally precedes the current time, and the motion model is determined from data collected by an inertial measurement unit and/or a wheel speed meter of the vehicle;
wherein the first calculation subunit calculates, according to the value of the estimated pose and the positions of the visual features in the target images, the mapped positions to which the visual features are mapped in the autonomous driving navigation electronic map specifically by:
taking the value of the predicted pose calculated by the pose calculation unit as an initial value of the estimated pose, and calculating, according to the current value of the estimated pose and the positions of the visual features in the target images, the mapped positions to which the visual features are mapped in the autonomous driving navigation electronic map;
and the first calculation subunit calculates, according to the value of the estimated pose and the positions of the target features in the autonomous driving navigation electronic map, the projected positions to which the target features are projected in the target images specifically by:
taking the value of the predicted pose calculated by the pose calculation unit as an initial value of the estimated pose, and calculating, according to the current value of the estimated pose and the positions of the target features in the autonomous driving navigation electronic map, the projected positions to which the target features are projected in the target images.
10. The vehicle-mounted terminal according to any one of claims 6 to 9, wherein the visual features include at least one or more of: lane lines, parking space lines, parking space corner points, and lane arrows.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811330157.5A CN110147094A (en) | 2018-11-08 | 2018-11-08 | A kind of vehicle positioning method and car-mounted terminal based on vehicle-mounted viewing system |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110147094A true CN110147094A (en) | 2019-08-20 |
Family
ID=67588454
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811330157.5A Pending CN110147094A (en) | 2018-11-08 | 2018-11-08 | A kind of vehicle positioning method and car-mounted terminal based on vehicle-mounted viewing system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110147094A (en) |
Cited By (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110567475A (en) * | 2019-09-19 | 2019-12-13 | 北京地平线机器人技术研发有限公司 | Navigation method, navigation device, computer readable storage medium and electronic equipment |
CN110967018A (en) * | 2019-11-25 | 2020-04-07 | 斑马网络技术有限公司 | Parking lot positioning method and device, electronic equipment and computer readable medium |
CN111508258A (en) * | 2020-04-17 | 2020-08-07 | 北京三快在线科技有限公司 | Positioning method and device |
CN111750891A (en) * | 2020-08-04 | 2020-10-09 | 博泰车联网(南京)有限公司 | Method, computing device, and computer storage medium for information processing |
CN112085034A (en) * | 2020-09-11 | 2020-12-15 | 北京埃福瑞科技有限公司 | Rail transit train positioning method and system based on machine vision |
WO2020253842A1 (en) * | 2019-06-20 | 2020-12-24 | 杭州海康威视数字技术股份有限公司 | Vehicle position and posture determination method and apparatus, and electronic device |
CN112446915A (en) * | 2019-08-28 | 2021-03-05 | 北京初速度科技有限公司 | Picture-establishing method and device based on image group |
CN112530270A (en) * | 2019-09-17 | 2021-03-19 | 北京初速度科技有限公司 | Mapping method and device based on region allocation |
CN112577513A (en) * | 2019-09-27 | 2021-03-30 | 北京初速度科技有限公司 | State quantity error determination method and vehicle-mounted terminal |
CN112577512A (en) * | 2019-09-27 | 2021-03-30 | 北京初速度科技有限公司 | State quantity error determination method based on wheel speed fusion and vehicle-mounted terminal |
CN112749584A (en) * | 2019-10-29 | 2021-05-04 | 北京初速度科技有限公司 | Vehicle positioning method based on image detection and vehicle-mounted terminal |
CN112837365A (en) * | 2019-11-25 | 2021-05-25 | 北京初速度科技有限公司 | Image-based vehicle positioning method and device |
CN114111774A (en) * | 2021-12-06 | 2022-03-01 | 纵目科技(上海)股份有限公司 | Vehicle positioning method, system, device and computer readable storage medium |
CN114323020A (en) * | 2021-12-06 | 2022-04-12 | 纵目科技(上海)股份有限公司 | Vehicle positioning method, system, device and computer readable storage medium |
WO2022116572A1 (en) * | 2020-12-02 | 2022-06-09 | 魔门塔(苏州)科技有限公司 | Target positioning method and apparatus |
CN115359460A (en) * | 2022-10-20 | 2022-11-18 | 小米汽车科技有限公司 | Image recognition method and device for vehicle, vehicle and storage medium |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101008566A (en) * | 2007-01-18 | 2007-08-01 | Shanghai Jiao Tong University | Intelligent vehicular vision device based on ground texture and global localization method thereof
CN104833370A (en) * | 2014-02-08 | 2015-08-12 | Honda Motor Co., Ltd. | System and method for mapping, localization and pose correction
US20170249751A1 (en) * | 2016-02-25 | 2017-08-31 | Technion Research & Development Foundation Limited | System and method for image capture device pose estimation
CN107328411A (en) * | 2017-06-30 | 2017-11-07 | Baidu Online Network Technology (Beijing) Co., Ltd. | Vehicle positioning system and automatic driving vehicle
CN107886080A (en) * | 2017-11-23 | 2018-04-06 | Tongji University | A parking position detection method
CN108537197A (en) * | 2018-04-18 | 2018-09-14 | Jilin University | A deep-learning-based lane detection and early-warning device and method
US20200124421A1 (en) * | 2018-10-19 | 2020-04-23 | Samsung Electronics Co., Ltd. | Method and apparatus for estimating position |
Non-Patent Citations (1)
Title |
---|
Dong Haiying (董海鹰): "3. Autoencoders", in Intelligent Control Theory and Applications (《智能控制理论及应用》) *
Legal Events

Date | Code | Title | Description
---|---|---|---
| PB01 | Publication |
| SE01 | Entry into force of request for substantive examination |
2022-03-03 | TA01 | Transfer of patent application right | Address after: 100083, unit 501, block AB, Dongsheng Building, No. 8 Zhongguancun East Road, Haidian District, Beijing; applicant after: BEIJING MOMENTA TECHNOLOGY Co.,Ltd. Address before: 100083, room 28, 4/F, block A, Dongsheng Building, 8 Zhongguancun East Road, Haidian District, Beijing; applicant before: BEIJING CHUSUDU TECHNOLOGY Co.,Ltd.
| RJ01 | Rejection of invention patent application after publication | Application publication date: 2019-08-20