CN110135376A - Method, device, and medium for determining the coordinate system conversion parameter of an image sensor
Classifications
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/588—Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
- G06V20/647—Three-dimensional objects by matching two-dimensional images to three-dimensional objects

(all under G—Physics; G06—Computing; Calculating or counting; G06V—Image or video recognition or understanding; G06V20/00—Scenes; Scene-specific elements)
Abstract
According to embodiments of the present disclosure, a method, a device, and a computer-readable storage medium for determining a coordinate system conversion parameter of an image sensor are provided. The method includes: generating an initial value of the coordinate system conversion parameter by obtaining location information of the image sensor; obtaining, based on the location information, a first set of points in a three-dimensional coordinate system for a predetermined road feature from a three-dimensional map; determining, from a two-dimensional image captured by the image sensor, a second set of points in a two-dimensional coordinate system for the predetermined road feature; projecting, based on the initial value, the points in the first set into the two-dimensional image to obtain a third set of projected points; and adjusting the initial value based on matching between the points in the third set and the second set, so as to determine the coordinate system conversion parameter. In this way, the coordinate system conversion parameter of a roadside image sensor can be determined directly, without relying on relative calibration against on-board sensor devices, thereby improving the flexibility and universality of parameter calibration for image sensors.
Description
Technical field
Embodiments of the present disclosure relate generally to the field of vehicle-exterior interaction, and more particularly to a method, an apparatus, a device, and a computer-readable storage medium for determining a coordinate system conversion parameter of an image sensor.
Background

In recent years, technologies related to automatic driving, such as autonomous parking, along with video surveillance, have gradually come to prominence. The basis of automatic driving technology is the perception of the environment surrounding the vehicle, that is, identifying the specific conditions of that environment. It has been proposed that, in addition to on-board sensor devices (also referred to as "vehicle-side", for example, a vehicle-mounted lidar), relevant data obtained from sensor devices outside the vehicle (also referred to as "roadside", for example, image sensors mounted on both sides of a road) can better support automatic driving technology.

Currently, the extrinsic calibration of an image sensor outside the vehicle (for example, determining the conversion parameter from the camera coordinate system to the world coordinate system) is realized by calibrating the relationship between a vehicle-mounted lidar and the image sensor. However, in some scenes, such as underground parking lots and tunnels, there may be no Global Positioning System (GPS) signal and no lidar sensor installed. It is therefore difficult to perform the extrinsic calibration of an image sensor arranged in such a scene by calibrating the above relationship, which makes the data from that image sensor unusable.
Summary of the invention
According to embodiments of the present disclosure, a scheme for determining a coordinate system conversion parameter of an image sensor is provided.

In a first aspect of the present disclosure, a method for determining a coordinate system conversion parameter of an image sensor is provided. The method comprises: generating an initial value of the coordinate system conversion parameter by obtaining location information of the image sensor; obtaining, based on the location information, a first set of points in a three-dimensional coordinate system for a predetermined road feature from a three-dimensional map; determining, from a two-dimensional image captured by the image sensor, a second set of points in a two-dimensional coordinate system for the predetermined road feature; projecting, based on the initial value, the points in the first set into the two-dimensional image to obtain a third set of projected points; and adjusting the initial value based on matching between the points in the third set and the second set, so as to determine the coordinate system conversion parameter.

In a second aspect of the present disclosure, an apparatus for determining a coordinate system conversion parameter of an image sensor is provided. The apparatus comprises: an initial value determining module configured to generate an initial value of the coordinate system conversion parameter by obtaining location information of the image sensor; a first set obtaining module configured to obtain, based on the location information, a first set of points in a three-dimensional coordinate system for a predetermined road feature from a three-dimensional map; a second set obtaining module configured to determine, from a two-dimensional image captured by the image sensor, a second set of points in a two-dimensional coordinate system for the predetermined road feature; a third set obtaining module configured to project, based on the initial value, the points in the first set into the two-dimensional image to obtain a third set of projected points; and a parameter determining module configured to adjust the initial value based on matching between the points in the third set and the second set, so as to determine the coordinate system conversion parameter.

In a third aspect of the present disclosure, a device is provided, comprising one or more processors, and a storage apparatus for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the method according to the first aspect of the present disclosure.

In a fourth aspect of the present disclosure, a computer-readable storage medium is provided, on which a computer program is stored; the program, when executed by a processor, implements the method according to the first aspect of the present disclosure.

It should be appreciated that the content described in this Summary is not intended to limit key or essential features of embodiments of the present disclosure, nor to limit the scope of the present disclosure. Other features of the present disclosure will become readily understood from the following description.
Brief description of the drawings

The above and other features, advantages, and aspects of the embodiments of the present disclosure will become more apparent with reference to the following detailed description taken in conjunction with the accompanying drawings. In the drawings, the same or similar reference numerals denote the same or similar elements, in which:

Fig. 1 shows a schematic diagram of an example environment in which multiple embodiments of the present disclosure can be implemented;
Fig. 2 shows a flowchart of an example method for determining a coordinate system conversion parameter of an image sensor according to some embodiments of the present disclosure;
Fig. 3 shows a schematic diagram of an example static map according to some embodiments of the present disclosure;
Fig. 4 shows a flowchart of an example method for obtaining a first set according to some embodiments of the present disclosure;
Fig. 5 shows a flowchart of an example method for obtaining a second set according to some embodiments of the present disclosure;
Fig. 6 shows a flowchart of an example method for adjusting the initial value to determine the coordinate system conversion parameter according to some embodiments of the present disclosure;
Fig. 7 shows a schematic block diagram of an apparatus for determining a coordinate system conversion parameter of an image sensor according to some embodiments of the present disclosure; and
Fig. 8 shows a block diagram of a computing device capable of implementing multiple embodiments of the present disclosure.
Detailed description of embodiments

Embodiments of the present disclosure are described in more detail below with reference to the accompanying drawings. Although certain embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure can be realized in various forms and should not be construed as being limited to the embodiments set forth herein; rather, these embodiments are provided so that the present disclosure will be understood more thoroughly and completely. It should be understood that the drawings and embodiments of the present disclosure are provided for illustration only and are not intended to limit the scope of protection of the present disclosure.

In the description of embodiments of the present disclosure, the term "include" and its variants should be understood as open-ended inclusion, i.e., "including but not limited to". The term "based on" should be understood as "based at least in part on". The terms "one embodiment" or "an embodiment" should be understood as "at least one embodiment". The terms "first", "second", and the like may refer to different or identical objects. Other explicit and implicit definitions may also be included below.

In the description of embodiments of the present disclosure, a "coordinate system conversion parameter" may be, for example, a parameter required to convert between a camera coordinate system, an image plane coordinate system, or a pixel coordinate system and a world coordinate system, such as a translation matrix, a rotation matrix, and the like. In embodiments of the present disclosure, a "coordinate system conversion parameter" may include or refer to what is called an "extrinsic parameter" in this field. An "extrinsic parameter" refers to a conversion parameter from a camera coordinate system, an image plane coordinate system, or a pixel coordinate system to a world coordinate system. "Extrinsic calibration" refers to the determination of the conversion parameter from a camera coordinate system, an image plane coordinate system, or a pixel coordinate system to a world coordinate system. Moreover, in the description of embodiments of the present disclosure, for convenience, the term "extrinsic parameter" may be replaced with the term "coordinate system conversion parameter".

In the description of embodiments of the present disclosure, the term "image sensor" refers to a sensor device that can capture image data, such as picture data and/or video data, for example, a camera, a video camera, and the like.
As mentioned above, the extrinsic calibration of an image sensor outside the vehicle is currently realized by calibrating the relationship between a vehicle-mounted lidar and the image sensor, and in some scenes, such as underground parking lots, there is no GPS signal and no lidar sensor installed, making it difficult to perform the extrinsic calibration of the image sensor by calibrating the above relationship.

According to various embodiments of the present disclosure, a scheme is provided for directly determining the coordinate system conversion parameter of an image sensor without depending on relative calibration against on-board sensor devices. In embodiments of the present disclosure, an initial value of the coordinate system conversion parameter is set; a three-dimensional point set for a predetermined road feature is determined from a three-dimensional map; a two-dimensional point set for the predetermined road feature is determined in a two-dimensional image of the image sensor; and the initial value is adjusted based on the matching between the two-dimensional point set and the projected point set obtained by mapping the three-dimensional point set into the two-dimensional image using the initial value, so as to finally determine a suitable coordinate system conversion parameter.

It will be appreciated that the scheme according to embodiments of the present disclosure is applicable not only to the parameter calibration of image sensors in scenes without a GPS signal, but also to the parameter calibration of image sensors in scenes with a GPS signal. The scheme according to embodiments of the present disclosure can improve the flexibility and universality of the parameter calibration of image sensors.
Embodiments of the present disclosure are described in detail below with reference to the drawings. Fig. 1 shows a schematic diagram of an example traffic environment 100 in which multiple embodiments of the present disclosure can be implemented. Some typical objects are schematically shown in the example environment 100, including a road 102, a traffic indication facility 103, and a pedestrian 109 that may appear. The road 102 includes a lane line 101, a lane center line 104, a curb 106, and a sidewalk 108. It should be appreciated that the facilities and objects shown here are only examples; the objects likely to appear will vary with the actual situation of different traffic environments. The scope of the present disclosure is not limited in this respect.

In the example of Fig. 1, one or more vehicles 110-1, 110-2 are traveling on the road 102. For ease of description, the multiple vehicles 110-1, 110-2 are collectively referred to as vehicles 110. A vehicle 110 may be any type of vehicle that can carry people and/or objects and move by means of a power system such as an engine, including but not limited to a car, a truck, a bus, an electric vehicle, a motorcycle, a recreational vehicle, a train, and the like. One or more vehicles 110 in the environment 100 may be vehicles with a certain automatic driving capability; such vehicles are also referred to as autonomous vehicles. Of course, another vehicle or some other vehicles 110 in the environment 100 may be vehicles without automatic driving capability.

One or more sensors 105-1 to 105-6 (collectively referred to as sensors 105) are also provided in the environment 100. The sensors 105 are independent of the vehicles 110 and are used to monitor the conditions of the environment 100 so as to obtain perception information relevant to the environment 100. To monitor the environment 100 comprehensively, the sensors 105 may be deployed near the road 102 and may include sensors of one or more types. For example, sensors 105 may be deployed on both sides of the road 102 at regular intervals for monitoring specific regions of the environment 100. Multiple types of sensors may be deployed in each region. In some examples, in addition to sensors 105 fixed at specific positions, movable sensors 105, such as mobile perception stations, may also be provided.

In embodiments of the present disclosure, the sensor 105-1 and the sensor 105-6, for example, are image sensors. When extrinsic calibration is performed on the sensors 105-1 and 105-6, the image data they capture may be provided to a computing device 120 for use in determining the coordinate system conversion parameters of the sensors 105-1 and 105-6. The computing device 120 may be any server or client device that supports the determination of coordinate system conversion parameters. The determination process of the coordinate system conversion parameter is described in detail below with reference to Fig. 2. For ease of description, the discussion below is made in conjunction with the traffic environment shown in Fig. 1.
Fig. 2 shows a flowchart of an example method 200 for determining a coordinate system conversion parameter of an image sensor according to some embodiments of the present disclosure. The method 200 can be implemented at the computing device 120 of Fig. 1. As shown in Fig. 2, at block 210, the computing device 120 may generate an initial value of the coordinate system conversion parameter by obtaining location information of an image sensor (such as the sensor 105-1).

The computing device 120 may obtain the location information of the image sensor 105-1 in any desired manner. According to some embodiments of the present disclosure, the computing device 120 may obtain the location information of the image sensor 105-1; for example, the location information may be latitude and longitude information or the like. In some embodiments, the computing device 120 may obtain location information of the image sensor 105-1 determined by measurement with a handheld GPS device. In some embodiments, the computing device 120 may obtain location information of the image sensor 105-1 determined by measurement with a total station. In some embodiments, the computing device 120 may obtain location information of the image sensor 105-1 input or estimated by a user. It will be appreciated that the location information can be obtained by any other suitable device or method; the present application places no restriction on this. Moreover, the location information may be in any form suitable for the coordinate system conversion parameter.

According to some embodiments of the present disclosure, the computing device 120 may obtain orientation information of the image sensor 105-1. For example, the orientation information may be angle information, such as an azimuth. In some embodiments, the computing device 120 may obtain orientation information of the image sensor 105-1 determined by measurement with an angle gauge. In some embodiments, the computing device 120 may obtain orientation information of the image sensor 105-1 input or estimated by a user. It will be appreciated that the orientation information can be obtained by any other suitable device or method; the present application places no restriction on this. Moreover, the orientation information may be in any form suitable for the coordinate system conversion parameter.

According to some embodiments of the present disclosure, the computing device 120 may obtain both the location information and the orientation information of the image sensor 105-1, which can optimize the determination of the initial value of the coordinate system conversion parameter. In addition to location information and orientation information, other position-related information known in the art or developed in the future that can be used for determining the coordinate system conversion parameter may also be included; the present application places no restriction on this.

After obtaining the location information of the image sensor 105-1, the computing device 120 may generate the initial value of the coordinate system conversion parameter based on the location information. In some embodiments, the translation matrix required for the coordinate system conversion may be generated based on the determined location information of the image sensor 105-1. In some embodiments, the rotation matrix required for the coordinate system conversion may be generated based on the determined orientation information of the image sensor 105-1. According to some embodiments of the present disclosure, the generated translation matrix and rotation matrix may be determined as the initial value of the coordinate system conversion parameter. It will be appreciated that the initial value may be in any other form suitable for the coordinate system conversion parameter; the present application places no restriction on this.
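As an illustrative sketch (not the patent's own implementation), an initial value of this kind could be assembled from a measured position and azimuth; the yaw-only rotation model and the local world frame used here are assumptions for illustration:

```python
import numpy as np

def initial_extrinsics(position_xyz, azimuth_deg):
    """Build an initial world-to-camera transform from a measured sensor
    position (local world frame, metres) and an azimuth.
    Assumption: rotation about the vertical axis only; pitch/roll start at 0."""
    a = np.radians(azimuth_deg)
    # Rotation taking world-frame coordinates into the camera frame (yaw only).
    R = np.array([[np.cos(a), -np.sin(a), 0.0],
                  [np.sin(a),  np.cos(a), 0.0],
                  [0.0,        0.0,       1.0]])
    # Translation so that the sensor position maps to the camera origin.
    t = -R @ np.asarray(position_xyz, dtype=float)
    return R, t

R0, t0 = initial_extrinsics([10.0, 5.0, 3.0], azimuth_deg=90.0)
# R0 @ p + t0 maps the sensor's own position to the camera origin
```

A refined estimate would later overwrite R0 and t0 during the adjustment of block 250.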
Referring to Fig. 2, at block 220, the computing device 120 may obtain, based on the location information, a first set of points in a three-dimensional coordinate system for a predetermined road feature from a three-dimensional map. According to embodiments of the present disclosure, the three-dimensional map may be a high-precision map. For example, in some embodiments, the three-dimensional map may be a static high-precision map associated with the environment 100. The static high-precision map includes information relevant to the stationary objects of the environment 100. The static high-precision map may be generated from information relevant to the environment 100 collected in advance by the sensors 105 arranged in the environment 100. The static high-precision map includes only information relevant to objects in the environment 100 that protrude from the ground and remain stationary for a relatively long time.

Fig. 3 shows an example of a static high-precision map 300 associated with the environment 100 of Fig. 1. Compared with the environment 100, the static high-precision map 300 includes only stationary objects, such as the road 102, the lane line 101, the lane center line 104, the curb 106, the sidewalk 108, the upright poles on which the sensors 105 are arranged, the traffic indication facility 103, and the like. These objects remain stationary over a period of time. Objects such as the vehicles 110 and the pedestrian 109 sometimes appear in the environment 100, sometimes disappear from the environment 100, or move within the environment 100; such objects are therefore referred to as dynamic objects.

It should be appreciated that the static high-precision map 300 shown in Fig. 3 is provided merely for the purpose of illustration. In general, in addition to schematically showing objects or providing images of objects, a high-precision map may also annotate other information of the objects, such as accurate position, speed, direction, and the like. In some implementations, it may also include information relevant to objects in three dimensions.
The process of obtaining the first set from the three-dimensional map is described below in conjunction with Fig. 4. Fig. 4 shows a flowchart of an example method 400 for obtaining the first set according to some embodiments of the present disclosure. The method 400 can be implemented at the computing device 120.

As shown in Fig. 4, at block 410, the computing device 120 may obtain the three-dimensional map. In some embodiments, information relevant to the environment 100 can be collected by a map data collection vehicle, and the three-dimensional map can be generated based on such information. For example, for a scene without a GPS signal, a simultaneous localization and mapping (SLAM) method can be used: the collection vehicle drives into the scene from an outdoor position with a GPS signal, collects road environment information using a vehicle-mounted lidar, a camera, and a surround-view image acquisition system, then performs recognition and fusion, and superimposes the collected data together to generate the three-dimensional map.

In some embodiments, the three-dimensional map can also be updated periodically or when triggered by a corresponding event. For example, the update period can be set to a relatively long period of time. The map can also be updated, for example, based on perception information collected by the sensors 105 that are deployed in the environment 100 and monitor it in real time. It will be appreciated that the three-dimensional map can be generated in any other desired manner; the present application places no restriction on the manner in which the three-dimensional map is generated.

At block 420, the computing device 120 may mark a predetermined road feature in the three-dimensional map based on the location information of the image sensor. Based on the location information of the image sensor 105-1, the computing device 120 may mark the predetermined road feature in the three-dimensional map 300 corresponding to that location information. In some embodiments, the predetermined road feature can be searched for in the three-dimensional map 300 within a predetermined search radius centered on the position corresponding to the location information of the image sensor 105-1.

For example, in some embodiments, all predetermined road features in the three-dimensional map 300 can be traversed in turn, and a predetermined road feature is marked if it lies within the search radius (for example, 1000 meters), and is otherwise left unmarked. It will be appreciated that the search radius can be set arbitrarily; the present application places no restriction on this. In addition, the marking of the predetermined road feature can be implemented by any appropriate feature extraction technology known in the art or developed in the future, which is not described here again.

According to embodiments of the present disclosure, the three-dimensional map includes various static features relevant to the road, such as the sidewalk 108, the lane line 101, the lane center line 104, the curb 106, the traffic indication facilities 103 (such as traffic lights, traffic signs, and the like), and the facilities on both sides of the road (such as upright poles, roadside plants, and the like). In some embodiments, one feature (such as the sidewalk 108) can be selected from these static road features as the predetermined road feature. In some embodiments, multiple features (such as the lane center line 104, the traffic indication facility 103, and the like) can be selected from these static road features as predetermined road features. It will be appreciated that the above examples do not constitute a limitation on the present application; the predetermined road feature can be any static feature relevant to the road. In some embodiments, the predetermined road feature may include at least one of the following: a sidewalk, a lane line, a lane center line, a curb, and a traffic indication facility.

At block 430, the computing device 120 may determine the first set based on the marked predetermined road feature. In some embodiments, the computing device 120 may extract the points corresponding to the marked predetermined road feature to form a three-dimensional point set (i.e., the first set).
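A minimal sketch of the radius search and point extraction of blocks 420-430, assuming the map's features are available as labeled 3-D points (the point format and the feature labels are illustrative, not taken from the patent):

```python
import numpy as np

def first_set(map_points, labels, wanted_label, sensor_xy, radius_m=1000.0):
    """Keep the 3-D map points of the predetermined road feature that lie
    within a horizontal search radius of the image sensor's position."""
    pts = np.asarray(map_points, dtype=float)       # shape (N, 3)
    mask = np.asarray(labels) == wanted_label       # select the feature type
    # horizontal distance of every map point from the sensor position
    dist = np.linalg.norm(pts[:, :2] - np.asarray(sensor_xy, dtype=float), axis=1)
    return pts[mask & (dist <= radius_m)]

pts = [[0.0, 0.0, 0.0], [500.0, 0.0, 0.1], [2000.0, 0.0, 0.0]]
lbl = ["lane_line", "lane_line", "lane_line"]
sel = first_set(pts, lbl, "lane_line", sensor_xy=[0.0, 0.0])
# sel keeps the two lane-line points within 1000 m; the third is discarded
```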
Returning to Fig. 2, at block 230, the computing device 120 may determine, from the two-dimensional image captured by the image sensor 105-1, a second set of points in a two-dimensional coordinate system for the predetermined road feature. The process of obtaining the second set is described in detail below with reference to Fig. 5. Fig. 5 shows a flowchart of an example method 500 for obtaining the second set according to some embodiments of the present disclosure. The method 500 can be implemented at the computing device 120.

As shown in Fig. 5, at block 510, the computing device 120 may obtain a two-dimensional image captured by the image sensor 105-1. In some embodiments, the two-dimensional image can be in a picture format. In some embodiments, the two-dimensional image can be in a data frame stream format.

In some embodiments, the computing device 120 may obtain a two-dimensional image captured by the image sensor 105-1 within a predetermined period of time. In some embodiments, the computing device 120 may obtain a two-dimensional image captured in real time by the image sensor 105-1. In some embodiments, the computing device 120 may obtain a two-dimensional image captured by the image sensor 105-1 at a past predetermined time or within a past time period.

According to some embodiments of the present disclosure, the computing device 120 may obtain a two-dimensional image captured by the image sensor 105-1 after calibration. In some embodiments, the image sensor 105-1 can capture the two-dimensional image after intrinsic calibration. Intrinsic parameters refer to parameters relevant to the image sensor's own characteristics; taking a camera as an example, intrinsic parameters refer to parameters such as focal length and pixel size. In some embodiments, the image sensor 105-1 can capture the two-dimensional image after distortion correction. In some embodiments, the image sensor 105-1 can capture the two-dimensional image after both intrinsic calibration and distortion correction. This can help improve the accuracy of the extrinsic calibration of the image sensor.

At block 520, the computing device 120 may determine the second set for the predetermined road feature from the two-dimensional image. For the same predetermined road feature determined at block 220, the computing device 120 may determine the corresponding two-dimensional point set from the two-dimensional image obtained at block 510.

In some embodiments, the predetermined road feature can be extracted from the two-dimensional image, and the set of corresponding feature points can be determined based on the extracted predetermined road feature. In some embodiments, the predetermined road feature can be manually selected from the two-dimensional image, and the set of feature points corresponding to the predetermined road feature in the two-dimensional coordinate system (camera coordinate system) can be determined as the second set. It will be appreciated that the extraction of the predetermined road feature from the two-dimensional image can be implemented by any other appropriate feature extraction technology known in the art or developed in the future, which is not described here again so as not to obscure the present invention.
Returning to Fig. 2, at block 240, the computing device 120 may project, based on the initial value, the points in the first set into the two-dimensional image to obtain a third set of projected points. Using the initial value as the coordinate system conversion parameter, the computing device 120 may convert the three-dimensional coordinates of each point in the first set into two-dimensional coordinates, so as to obtain the third set of projected points.
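The projection of block 240 can be sketched with a standard pinhole camera model; the intrinsic matrix K below is an assumed example value, not one from the patent:

```python
import numpy as np

def project(points_3d, R, t, K):
    """Project world-frame 3-D points into pixel coordinates using the
    current extrinsic estimate (R, t) and the calibrated intrinsics K."""
    pts = np.asarray(points_3d, dtype=float)
    cam = pts @ R.T + t              # world frame -> camera frame
    uvw = cam @ K.T                  # apply the intrinsic matrix
    return uvw[:, :2] / uvw[:, 2:3]  # perspective divide -> (u, v) pixels

# Assumed intrinsics: 1000 px focal length, principal point (640, 360).
K = np.array([[1000.0,    0.0, 640.0],
              [   0.0, 1000.0, 360.0],
              [   0.0,    0.0,   1.0]])
R, t = np.eye(3), np.zeros(3)
uv = project([[0.0, 0.0, 10.0]], R, t, K)
# a point on the optical axis lands at the principal point (640, 360)
```

With the current extrinsic estimate substituted for (R, t), the same call applied to the first set yields the third set of projected points.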
At block 250, the computing device 120 may adjust the initial value based on matching between the points in the third set and the second set, so as to finally determine the coordinate system conversion parameter. In some embodiments, the computing device 120 may choose a subset of the points in the third set and the second set and adjust the initial value based on the matching between these points. In some embodiments, the computing device 120 may adjust the initial value based on the matching of every point in the third set and the second set.

In some embodiments, the computing device 120 may adjust the initial value based on the error in the matching. In some embodiments, the computing device 120 may adjust the initial value based on the re-projection error in the matching. This is described in detail below with reference to Fig. 6. Fig. 6 shows a flowchart of an example method 600 for adjusting the initial value to determine the coordinate system conversion parameter according to some embodiments of the present disclosure. The method 600 can be implemented at the computing device 120.
As shown in Fig. 6, at block 610, the computing device 120 may determine, for each point in the third set, the closest point corresponding to that point from the second set. In some embodiments, the computing device 120 may pair each point in the third set with its corresponding closest point to form matched point pairs, thereby obtaining a set of matched point pairs.
At block 620, the computing device 120 may determine the re-projection error between each point in the third set and its corresponding closest point. The re-projection error may be determined in any suitable manner known in the art or developed in the future; the present application places no restriction in this regard.
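One common choice, shown here as a sketch rather than the patent's prescribed formula, is the mean Euclidean distance between each projected point and its matched image point:

```python
import numpy as np

def reprojection_error(projected, matched):
    """Mean Euclidean distance between projected points and their
    matched image points -- one common way to score the match."""
    return float(np.mean(np.linalg.norm(projected - matched, axis=1)))

proj = np.array([[100.0, 100.0], [200.0, 200.0]])
matched = np.array([[103.0, 104.0], [200.0, 200.0]])
err = reprojection_error(proj, matched)
# distances are 5.0 and 0.0, so the mean error is 2.5
```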
At block 630, the computing device 120 may determine whether the re-projection error is below a predetermined threshold. In some embodiments, the predetermined threshold may be set in advance. In some embodiments, the predetermined threshold may be input or specified by a user. In some embodiments, the predetermined threshold may be adjusted as needed.
If it is determined at block 630 that the re-projection error is greater than or equal to the predetermined threshold, the method proceeds to block 640. At block 640, the computing device 120 may adjust the initial value. In some embodiments, the computing device 120 may determine a gradient direction based on the re-projection error and modify the initial value along the negative gradient direction, i.e., the direction opposite to the gradient. The processing of blocks 620 and 630 is then repeated based on the modified initial value until it is determined at block 630 that the re-projection error is below the predetermined threshold. When the re-projection error is below the predetermined threshold, the method proceeds to block 650, where the current initial value is determined as the coordinate system conversion parameter.
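The loop of blocks 620 through 650 can be sketched on a toy problem in which only a two-dimensional offset is optimized by finite-difference gradient descent (the full method would adjust all conversion parameters; the learning rate, threshold, and problem setup are assumptions):

```python
import numpy as np

def error(offset, projected, observed):
    """Mean distance from each shifted projected point to its closest
    observed point (the matching of block 610 plus the error of 620)."""
    shifted = projected + offset
    d = np.linalg.norm(shifted[:, None, :] - observed[None, :, :], axis=2)
    return float(d.min(axis=1).mean())

def refine(offset, projected, observed, threshold=0.5, lr=0.5, eps=1e-3):
    """Blocks 630/640: while the error is at or above the threshold,
    step the parameter along the negative finite-difference gradient."""
    for _ in range(200):
        e = error(offset, projected, observed)
        if e < threshold:
            break                      # block 650: accept current value
        grad = np.zeros(2)
        for i in range(2):
            bump = np.zeros(2)
            bump[i] = eps
            grad[i] = (error(offset + bump, projected, observed) - e) / eps
        offset = offset - lr * grad    # move opposite to the gradient
    return offset

proj = np.array([[0.0, 0.0], [10.0, 0.0]])
obs = proj + np.array([3.0, -2.0])     # ground-truth offset of (3, -2)
est = refine(np.zeros(2), proj, obs)
```

In this toy setup the estimate converges to within the threshold of the true offset; other nonlinear optimizers (e.g., Gauss-Newton or Levenberg-Marquardt) could replace the plain gradient step, as the text notes.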
It will be appreciated that, in addition to the re-projection error, any other suitable measure known in the art or developed in the future may be used to characterize the error in the matching between the points in the third set and the second set. It should further be understood that the processing of Fig. 6 is merely an example and does not limit the present application; the adjustment of the initial value may be implemented by other suitable nonlinear optimization algorithms.
The method for determining the coordinate system conversion parameter of an image sensor according to embodiments of the present disclosure has thus been described. According to various embodiments of the present disclosure, the coordinate system conversion parameter of the image sensor can be determined directly, without relying on the relative calibration of onboard sensor devices, thereby improving the flexibility and universality of the parameter calibration of the image sensor.
Embodiments of the present disclosure further provide a related apparatus for implementing the above methods or processes.
Fig. 7 shows a schematic block diagram of an apparatus 700 for determining the coordinate system conversion parameter of an image sensor according to some embodiments of the present disclosure. The apparatus 700 may be implemented at, for example, the computing device 120 of Fig. 1. As shown in Fig. 7, the apparatus 700 may include an initial value determining module 710, a first set obtaining module 720, a second set obtaining module 730, a third set obtaining module 740, and a parameter determining module 750.
According to some embodiments of the present disclosure, the initial value determining module 710 may be configured to generate the initial value of the coordinate system conversion parameter by obtaining the location information of the image sensor (for example, the sensor 105-1). In some embodiments, the location information may include at least one of position information and direction information of the image sensor. In some embodiments, the position information may be determined by measurement with a handheld GPS device. In some embodiments, the direction information may be determined by measurement with a bevel protractor. The initial value determining module 710 may be configured to perform the corresponding processing described with reference to block 210 in Fig. 2.
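A sketch of how such hand measurements might seed the initial value, assuming the measured angles follow a Z-Y-X (yaw-pitch-roll) Euler convention, which the patent does not specify:

```python
import numpy as np

def initial_pose(position, yaw, pitch, roll):
    """Turn a hand-held GPS position and protractor-measured angles
    (radians) into an initial rotation and translation; the Z-Y-X
    Euler convention here is an assumption."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Rz = np.array([[cy, -sy, 0.0], [sy, cy, 0.0], [0.0, 0.0, 1.0]])
    Ry = np.array([[cp, 0.0, sp], [0.0, 1.0, 0.0], [-sp, 0.0, cp]])
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, cr, -sr], [0.0, sr, cr]])
    R = Rz @ Ry @ Rx                 # camera-to-world rotation
    # translation of the world->camera transform x_cam = R.T @ x_world + t
    t = -R.T @ np.asarray(position, dtype=float)
    return R, t

R, t = initial_pose([10.0, 20.0, 5.0], yaw=np.pi / 2, pitch=0.0, roll=0.0)
```

Because the measurements are coarse, this pose serves only as a starting point for the optimization described with reference to Fig. 6.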
According to some embodiments of the present disclosure, the first set obtaining module 720 may be configured to obtain, from a three-dimensional map and based on the location information, the first set of points in the three-dimensional coordinate system for the predetermined road feature. In some embodiments, the predetermined road feature may include at least one of the following: a sidewalk, a lane line, a lane centerline, a curb, and a traffic indication facility.
In some embodiments, the first set obtaining module 720 may include (not shown): a map obtaining unit configured to obtain the three-dimensional map; a feature marking unit configured to mark the predetermined road feature in the three-dimensional map based on the location information; and a first set determining unit configured to determine the first set based on the marked predetermined road feature. The map obtaining unit, the feature marking unit, and the first set determining unit may be configured to perform the corresponding processing described with reference to Fig. 4.
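One plausible way for the feature marking unit to use the location information, sketched here as an assumption rather than the patent's prescribed procedure, is to keep only the map feature points within a given horizontal radius of the sensor position:

```python
import numpy as np

def features_near_sensor(map_points, sensor_xy, radius):
    """Keep 3-D map feature points whose horizontal distance to the
    sensor position is within `radius` (meters)."""
    d = np.linalg.norm(map_points[:, :2] - np.asarray(sensor_xy), axis=1)
    return map_points[d <= radius]

pts = np.array([[1.0, 1.0, 0.1], [200.0, 0.0, 0.2], [3.0, -2.0, 0.0]])
near = features_near_sensor(pts, (0.0, 0.0), radius=50.0)
# only the two points within 50 m of the sensor survive
```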
According to some embodiments of the present disclosure, the second set obtaining module 730 may be configured to determine, from the two-dimensional image captured by the image sensor (for example, the sensor 105-1), the second set of points in the two-dimensional coordinate system for the predetermined road feature. In some embodiments, the second set obtaining module 730 may include (not shown): an image obtaining unit configured to obtain the two-dimensional image captured by the image sensor (for example, the sensor 105-1); and a second set determining unit configured to determine the second set for the predetermined road feature from the two-dimensional image. The image obtaining unit and the second set determining unit may be configured to perform the corresponding processing described with reference to Fig. 5.
According to some embodiments of the present disclosure, the third set obtaining module 740 may be configured to project the points in the first set into the two-dimensional image based on the initial value, so as to obtain the third set of projected points. The third set obtaining module 740 may use the initial value as the coordinate system conversion parameter and convert the three-dimensional coordinates of each point in the first set into two-dimensional coordinates, thereby obtaining the third set of projected points.
According to some embodiments of the present disclosure, the parameter determining module 750 may be configured to adjust the initial value based on the matching between the points in the third set and the second set, so as to determine the coordinate system conversion parameter. In some embodiments, the parameter determining module 750 may include (not shown): a closest point determining unit configured to determine, for each point in the third set, the closest point corresponding to that point from the second set; an error determining unit configured to determine the re-projection error between the point and the corresponding closest point; and an adjusting unit configured to adjust the initial value in response to the re-projection error being greater than or equal to the predetermined threshold, until the re-projection error is below the predetermined threshold. The closest point determining unit, the error determining unit, and the adjusting unit may be configured to perform the corresponding processing described with reference to Fig. 6.
It should be appreciated that the units recorded in the apparatus 700 correspond respectively to the steps of the methods 200, 400, 500, and 600 described with reference to Fig. 2 and Figs. 4 to 6. Moreover, the operations and features of the apparatus 700 and the units included therein all correspond to the operations and features described above in connection with Fig. 2 and Figs. 4 to 6, and have the same effects; the details are not repeated here.
The units included in the apparatus 700 may be implemented in various manners, including software, hardware, firmware, or any combination thereof. In some embodiments, one or more units may be implemented using software and/or firmware, for example, machine-executable instructions stored on a storage medium. In addition to or as an alternative to machine-executable instructions, some or all of the units in the apparatus 700 may be implemented, at least in part, by one or more hardware logic components. By way of example and not limitation, exemplary types of hardware logic components that may be used include field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), application-specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), and so forth.
The units shown in Fig. 7 may be partially or entirely implemented as hardware modules, software modules, firmware modules, or any combination thereof. In particular, in certain embodiments, the processes, methods, or procedures described above may be implemented by a storage system, a host corresponding to the storage system, or hardware in another computing device independent of the storage system.
Fig. 8 shows a schematic block diagram of an example device 800 that may be used to implement embodiments of the present disclosure. The device 800 may be used to implement the computing device 120. As shown, the device 800 includes a central processing unit (CPU) 801, which may perform various appropriate actions and processing according to computer program instructions stored in a read-only memory (ROM) 802 or computer program instructions loaded from a storage unit 808 into a random access memory (RAM) 803. The RAM 803 may also store various programs and data required for the operation of the device 800. The CPU 801, the ROM 802, and the RAM 803 are connected to one another via a bus 804. An input/output (I/O) interface 805 is also connected to the bus 804.
A plurality of components in the device 800 are connected to the I/O interface 805, including: an input unit 806 such as a keyboard or a mouse; an output unit 807 such as various types of displays and loudspeakers; a storage unit 808 such as a magnetic disk or an optical disc; and a communication unit 809 such as a network card, a modem, or a wireless communication transceiver. The communication unit 809 allows the device 800 to exchange information/data with other devices over a computer network such as the Internet and/or various telecommunication networks.
The processing unit 801 performs the methods and processing described above, such as the methods 200, 400, 500, and 600. For example, in some embodiments, the methods 200, 400, 500, and 600 may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as the storage unit 808. In some embodiments, part or all of the computer program may be loaded into and/or installed on the device 800 via the ROM 802 and/or the communication unit 809. When the computer program is loaded into the RAM 803 and executed by the CPU 801, one or more steps of the methods 200, 400, 500, and 600 described above may be performed. Alternatively, in other embodiments, the CPU 801 may be configured to perform the methods 200, 400, 500, and 600 in any other suitable manner (for example, by means of firmware).
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. The program code may be provided to a processor or controller of a general-purpose computer, a special-purpose computer, or another programmable data processing apparatus, such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowcharts and/or block diagrams to be carried out. The program code may execute entirely on a machine, partly on a machine, partly on a machine and partly on a remote machine as a stand-alone software package, or entirely on a remote machine or server.
In the context of the present disclosure, a machine-readable medium may be a tangible medium that may contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
Furthermore, although the operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed to achieve desirable results. In certain circumstances, multitasking and parallel processing may be advantageous. Likewise, although the above discussion contains several specific implementation details, these should not be construed as limitations on the scope of the present disclosure. Certain features that are described in the context of separate embodiments may also be implemented in combination in a single implementation. Conversely, various features that are described in the context of a single implementation may also be implemented in multiple implementations separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it should be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are merely example forms of implementing the claims.
Claims (18)
1. A method for determining a coordinate system conversion parameter of an image sensor, comprising:
generating an initial value of the coordinate system conversion parameter by obtaining location information of the image sensor;
obtaining, from a three-dimensional map and based on the location information, a first set of points in a three-dimensional coordinate system for a predetermined road feature;
determining, from a two-dimensional image captured by the image sensor, a second set of points in a two-dimensional coordinate system for the predetermined road feature;
projecting, based on the initial value, the points in the first set into the two-dimensional image to obtain a third set of projected points; and
adjusting the initial value based on matching between the points in the third set and the second set, to determine the coordinate system conversion parameter.
2. The method according to claim 1, wherein obtaining the location information comprises at least one of the following:
obtaining position information of the image sensor; and
obtaining direction information of the image sensor.
3. The method according to claim 2, wherein obtaining the position information of the image sensor comprises:
obtaining the position information determined by measurement with a handheld Global Positioning System device.
4. The method according to claim 2, wherein obtaining the direction information of the image sensor comprises:
obtaining the direction information determined by measurement with a bevel protractor.
5. The method according to claim 1, wherein the predetermined road feature comprises at least one of the following: a sidewalk, a lane line, a lane centerline, a curb, and a traffic indication facility.
6. The method according to claim 1, wherein obtaining the first set comprises:
obtaining the three-dimensional map;
marking the predetermined road feature in the three-dimensional map based on the location information; and
determining the first set based on the marked predetermined road feature.
7. The method according to claim 1, wherein determining the second set comprises:
obtaining the two-dimensional image captured by the image sensor; and
determining the second set for the predetermined road feature from the two-dimensional image.
8. The method according to any one of claims 1 to 7, wherein adjusting the initial value comprises:
determining, for each point in the third set, a closest point corresponding to the point from the second set;
determining a re-projection error between the point and the corresponding closest point; and
adjusting the initial value in response to the re-projection error being greater than or equal to a predetermined threshold, until the re-projection error is below the predetermined threshold.
9. An apparatus for determining a coordinate system conversion parameter of an image sensor, comprising:
an initial value determining module configured to generate an initial value of the coordinate system conversion parameter by obtaining location information of the image sensor;
a first set obtaining module configured to obtain, from a three-dimensional map and based on the location information, a first set of points in a three-dimensional coordinate system for a predetermined road feature;
a second set obtaining module configured to determine, from a two-dimensional image captured by the image sensor, a second set of points in a two-dimensional coordinate system for the predetermined road feature;
a third set obtaining module configured to project, based on the initial value, the points in the first set into the two-dimensional image to obtain a third set of projected points; and
a parameter determining module configured to adjust the initial value based on matching between the points in the third set and the second set, to determine the coordinate system conversion parameter.
10. The apparatus according to claim 9, wherein obtaining the location information comprises at least one of the following:
obtaining position information of the image sensor; and
obtaining direction information of the image sensor.
11. The apparatus according to claim 10, wherein the position information of the image sensor is determined by measurement with a handheld Global Positioning System device.
12. The apparatus according to claim 10, wherein the direction information of the image sensor is determined by measurement with a bevel protractor.
13. The apparatus according to claim 9, wherein the predetermined road feature comprises at least one of the following: a sidewalk, a lane line, a lane centerline, a curb, and a traffic indication facility.
14. The apparatus according to claim 9, wherein the first set obtaining module comprises:
a map obtaining unit configured to obtain the three-dimensional map;
a feature marking unit configured to mark the predetermined road feature in the three-dimensional map based on the location information; and
a first set determining unit configured to determine the first set based on the marked predetermined road feature.
15. The apparatus according to claim 9, wherein the second set obtaining module comprises:
an image obtaining unit configured to obtain the two-dimensional image captured by the image sensor; and
a second set determining unit configured to determine the second set for the predetermined road feature from the two-dimensional image.
16. The apparatus according to any one of claims 9 to 15, wherein the parameter determining module comprises:
a closest point determining unit configured to determine, for each point in the third set, a closest point corresponding to the point from the second set;
an error determining unit configured to determine a re-projection error between the point and the corresponding closest point; and
an adjusting unit configured to adjust the initial value in response to the re-projection error being greater than or equal to a predetermined threshold, until the re-projection error is below the predetermined threshold.
17. An electronic device, comprising:
one or more processors; and
a storage device for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the method according to any one of claims 1 to 8.
18. A computer-readable storage medium having a computer program stored thereon, wherein the program, when executed by a processor, implements the method according to any one of claims 1 to 8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910424026.1A CN110135376A (en) | 2019-05-21 | 2019-05-21 | Determine method, equipment and the medium of the coordinate system conversion parameter of imaging sensor |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110135376A true CN110135376A (en) | 2019-08-16 |
Family
ID=67572090
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910424026.1A Pending CN110135376A (en) | 2019-05-21 | 2019-05-21 | Determine method, equipment and the medium of the coordinate system conversion parameter of imaging sensor |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110135376A (en) |
Cited By (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110728720A (en) * | 2019-10-21 | 2020-01-24 | 北京百度网讯科技有限公司 | Method, device, equipment and storage medium for camera calibration |
CN110751693A (en) * | 2019-10-21 | 2020-02-04 | 北京百度网讯科技有限公司 | Method, device, equipment and storage medium for camera calibration |
CN110766760A (en) * | 2019-10-21 | 2020-02-07 | 北京百度网讯科技有限公司 | Method, device, equipment and storage medium for camera calibration |
CN110766761A (en) * | 2019-10-21 | 2020-02-07 | 北京百度网讯科技有限公司 | Method, device, equipment and storage medium for camera calibration |
CN110793544A (en) * | 2019-10-29 | 2020-02-14 | 北京百度网讯科技有限公司 | Sensing sensor parameter calibration method, device, equipment and storage medium |
CN111274296A (en) * | 2020-01-17 | 2020-06-12 | 北京无限光场科技有限公司 | Method and device for acquiring image data, terminal and storage medium |
CN111400537A (en) * | 2020-03-19 | 2020-07-10 | 北京百度网讯科技有限公司 | Road element information acquisition method and device and electronic equipment |
CN112348903A (en) * | 2021-01-06 | 2021-02-09 | 智道网联科技(北京)有限公司 | Method and device for calibrating external parameters of automobile data recorder and electronic equipment |
CN112561990A (en) * | 2021-01-21 | 2021-03-26 | 禾多科技(北京)有限公司 | Positioning information generation method, device, equipment and computer readable medium |
CN112560680A (en) * | 2020-12-16 | 2021-03-26 | 北京百度网讯科技有限公司 | Lane line processing method and device, electronic device and storage medium |
CN113838141A (en) * | 2021-09-02 | 2021-12-24 | 中南大学 | External parameter calibration method and system for single line laser radar and visible light camera |
CN114531580A (en) * | 2020-11-23 | 2022-05-24 | 北京四维图新科技股份有限公司 | Image processing method and device |
CN114863026A (en) * | 2022-05-18 | 2022-08-05 | 禾多科技(北京)有限公司 | Three-dimensional lane line information generation method, device, equipment and computer readable medium |
CN115201796A (en) * | 2022-07-26 | 2022-10-18 | 白犀牛智达(北京)科技有限公司 | External reference correction method for vehicle sensor |
CN115201796B (en) * | 2022-07-26 | 2024-04-30 | 白犀牛智达(北京)科技有限公司 | External reference correction method of vehicle sensor |
Citations (20)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101975578A (en) * | 2010-09-20 | 2011-02-16 | 北京腾瑞万里科技有限公司 | Navigation method and device |
CN102123194A (en) * | 2010-10-15 | 2011-07-13 | 张哲颖 | Method for optimizing mobile navigation and man-machine interaction functions by using augmented reality technology |
US8446492B2 (en) * | 2009-12-10 | 2013-05-21 | Honda Motor Co., Ltd. | Image capturing device, method of searching for occlusion region, and program |
CN103810286A (en) * | 2014-02-25 | 2014-05-21 | 合肥亿图网络科技有限公司 | Coordinate point positioning method for matching two-dimensional map with three-dimensional map |
CN104833360A (en) * | 2014-02-08 | 2015-08-12 | 无锡维森智能传感技术有限公司 | Method for transforming two-dimensional coordinates into three-dimensional coordinates |
CN104978764A (en) * | 2014-04-10 | 2015-10-14 | 华为技术有限公司 | Three-dimensional face mesh model processing method and three-dimensional face mesh model processing equipment |
CN105096386A (en) * | 2015-07-21 | 2015-11-25 | 中国民航大学 | Method for automatically generating geographic maps for large-range complex urban environment |
CN105809687A (en) * | 2016-03-08 | 2016-07-27 | 清华大学 | Monocular vision ranging method based on edge point information in image |
CN107133989A (en) * | 2017-06-12 | 2017-09-05 | 中国科学院长春光学精密机械与物理研究所 | A kind of 3 D scanning system parameter calibration method |
CN107464264A (en) * | 2016-06-02 | 2017-12-12 | 南京理工大学 | A kind of camera parameter scaling method based on GPS |
CN107679537A (en) * | 2017-05-09 | 2018-02-09 | 北京航空航天大学 | A kind of texture-free spatial target posture algorithm for estimating based on profile point ORB characteristic matchings |
CN108022265A (en) * | 2016-11-01 | 2018-05-11 | 狒特科技(北京)有限公司 | Infrared camera pose determines method, equipment and system |
CN108198217A (en) * | 2017-12-29 | 2018-06-22 | 百度在线网络技术(北京)有限公司 | Indoor orientation method, device, equipment and computer-readable medium |
CN108267747A (en) * | 2017-01-03 | 2018-07-10 | 中交宇科(北京)空间信息技术有限公司 | Road feature extraction method and apparatus based on laser point cloud |
CN108961422A (en) * | 2018-06-27 | 2018-12-07 | 百度在线网络技术(北京)有限公司 | The labeling method and device of threedimensional model |
CN109029450A (en) * | 2018-06-26 | 2018-12-18 | 重庆市勘测院 | A kind of indoor orientation method |
CN109215083A (en) * | 2017-07-06 | 2019-01-15 | 华为技术有限公司 | The method and apparatus of the calibrating external parameters of onboard sensor |
US20190113608A1 (en) * | 2017-10-12 | 2019-04-18 | Ford Global Technologies, Llc | Vehicle sensor operation |
CN109658504A (en) * | 2018-10-31 | 2019-04-19 | 百度在线网络技术(北京)有限公司 | Map datum mask method, device, equipment and storage medium |
CN109754432A (en) * | 2018-12-27 | 2019-05-14 | 深圳市瑞立视多媒体科技有限公司 | A kind of automatic camera calibration method and optics motion capture system |
Cited By (24)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110766760B (en) * | 2019-10-21 | 2022-08-02 | 北京百度网讯科技有限公司 | Method, device, equipment and storage medium for camera calibration |
CN110751693A (en) * | 2019-10-21 | 2020-02-04 | 北京百度网讯科技有限公司 | Method, device, equipment and storage medium for camera calibration |
CN110766760A (en) * | 2019-10-21 | 2020-02-07 | 北京百度网讯科技有限公司 | Method, device, equipment and storage medium for camera calibration |
CN110766761A (en) * | 2019-10-21 | 2020-02-07 | 北京百度网讯科技有限公司 | Method, device, equipment and storage medium for camera calibration |
CN110728720A (en) * | 2019-10-21 | 2020-01-24 | 北京百度网讯科技有限公司 | Method, device, equipment and storage medium for camera calibration |
CN110728720B (en) * | 2019-10-21 | 2023-10-13 | 阿波罗智能技术(北京)有限公司 | Method, apparatus, device and storage medium for camera calibration |
CN110751693B (en) * | 2019-10-21 | 2023-10-13 | 北京百度网讯科技有限公司 | Method, apparatus, device and storage medium for camera calibration |
CN110766761B (en) * | 2019-10-21 | 2023-09-26 | 北京百度网讯科技有限公司 | Method, apparatus, device and storage medium for camera calibration |
CN110793544A (en) * | 2019-10-29 | 2020-02-14 | 北京百度网讯科技有限公司 | Perception sensor parameter calibration method, device, equipment, and storage medium |
CN110793544B (en) * | 2019-10-29 | 2021-12-14 | 北京百度网讯科技有限公司 | Method, device, equipment, and storage medium for calibrating parameters of a roadside perception sensor |
CN111274296A (en) * | 2020-01-17 | 2020-06-12 | 北京无限光场科技有限公司 | Method and device for acquiring image data, terminal and storage medium |
CN111274296B (en) * | 2020-01-17 | 2024-03-01 | 北京有竹居网络技术有限公司 | Image data acquisition method and device, terminal and storage medium |
CN111400537A (en) * | 2020-03-19 | 2020-07-10 | 北京百度网讯科技有限公司 | Road element information acquisition method and device and electronic equipment |
CN111400537B (en) * | 2020-03-19 | 2023-04-28 | 北京百度网讯科技有限公司 | Road element information acquisition method and device and electronic equipment |
CN114531580A (en) * | 2020-11-23 | 2022-05-24 | 北京四维图新科技股份有限公司 | Image processing method and device |
CN114531580B (en) * | 2020-11-23 | 2023-11-21 | 北京四维图新科技股份有限公司 | Image processing method and device |
CN112560680A (en) * | 2020-12-16 | 2021-03-26 | 北京百度网讯科技有限公司 | Lane line processing method and device, electronic device and storage medium |
CN112348903A (en) * | 2021-01-06 | 2021-02-09 | 智道网联科技(北京)有限公司 | Method and device for calibrating external parameters of automobile data recorder and electronic equipment |
CN112561990A (en) * | 2021-01-21 | 2021-03-26 | 禾多科技(北京)有限公司 | Positioning information generation method, device, equipment and computer readable medium |
CN113838141B (en) * | 2021-09-02 | 2023-07-25 | 中南大学 | External parameter calibration method and system for single-line laser radar and visible light camera |
CN113838141A (en) * | 2021-09-02 | 2021-12-24 | 中南大学 | External parameter calibration method and system for single line laser radar and visible light camera |
CN114863026A (en) * | 2022-05-18 | 2022-08-05 | 禾多科技(北京)有限公司 | Three-dimensional lane line information generation method, device, equipment and computer readable medium |
CN115201796A (en) * | 2022-07-26 | 2022-10-18 | 白犀牛智达(北京)科技有限公司 | Extrinsic parameter correction method for vehicle sensors |
CN115201796B (en) * | 2022-07-26 | 2024-04-30 | 白犀牛智达(北京)科技有限公司 | Extrinsic parameter correction method for vehicle sensors |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110135376A (en) | Method, device, and medium for determining coordinate system conversion parameters of an image sensor | |
CN110174093B (en) | Positioning method, device, equipment and computer readable storage medium | |
CN110378965A (en) | Method, device, equipment, and storage medium for determining coordinate system conversion parameters | |
US10339669B2 (en) | Method, apparatus, and system for a vertex-based evaluation of polygon similarity | |
CN108318043A (en) | Method and apparatus for updating an electronic map, and computer-readable storage medium | |
CN110148185A (en) | Method, device, electronic device, and storage medium for determining coordinate system conversion parameters | |
CN110146869A (en) | Method, device, electronic device, and storage medium for determining coordinate system conversion parameters | |
CN108732589A (en) | Automatic collection of training data for object recognition using 3D LIDAR and localization | |
US11501104B2 (en) | Method, apparatus, and system for providing image labeling for cross view alignment | |
CN110386142A (en) | Pitch angle calibration method for automatic driving vehicle | |
US10291898B2 (en) | Method and apparatus for updating navigation map | |
US11263726B2 (en) | Method, apparatus, and system for task driven approaches to super resolution | |
CN110119698A (en) | Method, device, equipment, and storage medium for determining object state | |
KR102200299B1 (en) | System and method for managing road facilities based on a 3D-VR multi-sensor system | |
CN111028358B (en) | Indoor environment augmented reality display method and device and terminal equipment | |
EP3671623B1 (en) | Method, apparatus, and computer program product for generating an overhead view of an environment from a perspective image | |
JP6950832B2 (en) | Position coordinate estimation device, position coordinate estimation method and program | |
US11232582B2 (en) | Visual localization using a three-dimensional model and image segmentation | |
US11170485B2 (en) | Method, apparatus, and system for automatic quality assessment of cross view feature correspondences using bundle adjustment techniques | |
EP3644013B1 (en) | Method, apparatus, and system for location correction based on feature point correspondence | |
US11055862B2 (en) | Method, apparatus, and system for generating feature correspondence between image views | |
Zhao et al. | Autonomous driving simulation for unmanned vehicles | |
CN111611918B (en) | Traffic flow data set acquisition and construction method based on aerial data and deep learning | |
CN110738105A (en) | Method, device, system, and storage medium for calculating urban street-block pedestrian flow based on deep learning |
CN116433865B (en) | Space-ground collaborative acquisition path planning method based on scene reconstructability analysis |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||