CN110148185A - Method, apparatus, electronic device, and storage medium for determining coordinate system conversion parameters - Google Patents
Method, apparatus, electronic device, and storage medium for determining coordinate system conversion parameters
- Publication number
- CN110148185A (Application CN201910430855.0A)
- Authority
- CN
- China
- Prior art keywords
- coordinate system
- imaging device
- conversion parameter
- initial value
- value
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformation in the plane of the image
- G06T3/60—Rotation of a whole image or part thereof
- G06T3/604—Rotation of a whole image or part thereof using a CORDIC [COordinate Rotation DIgital Computer] device
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/80—Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
Abstract
Embodiments of the present disclosure provide a method, apparatus, electronic device, and computer-readable storage medium for determining a coordinate system conversion parameter of an imaging device. In the method, an initial value of the coordinate system conversion parameter of the imaging device is obtained, the coordinate system conversion parameter being used to convert a world coordinate system to a device coordinate system of the imaging device. A reflectance map of the imaging region of the imaging device is obtained; the reflectance map has an abscissa axis and an ordinate axis coinciding with those of the world coordinate system, and its coordinate points record reflection intensities associated with at least one reflection point in the imaging region, the at least one reflection point being formed by objects in the imaging region reflecting detection light and having the same abscissa and ordinate in the world coordinate system. The initial value of the coordinate system conversion parameter is then updated based on the reflectance map to obtain a target value of the coordinate system conversion parameter. Embodiments of the present disclosure improve the flexibility and universality of parameter calibration for imaging devices.
Description
Technical field
Embodiments of the present disclosure relate generally to the technical fields of imaging devices and autonomous driving, and more particularly to a method, apparatus, electronic device, and computer-readable storage medium for determining a coordinate system conversion parameter.
Background technique
In recent years, technologies such as autonomous driving and autonomous parking have gradually come to prominence. The foundation of these technologies is perception of the environment around the vehicle, that is, identifying the specific conditions of the environment near the vehicle. It has been proposed that, in addition to on-board (also referred to as "vehicle-side") sensor devices (for example, vehicle-mounted lidar, imaging devices, and so on), sensor devices outside the vehicle (also referred to as "roadside" devices, for example, imaging devices mounted along both sides of a road or in a parking lot) can be used to obtain data about the vehicle's environment, so as to better support the autonomous driving or autonomous parking of the vehicle. Because a vehicle that drives or parks autonomously is usually positioned with reference to a world coordinate system (for example, the Universal Transverse Mercator (UTM) coordinate system), an imaging device outside the vehicle must first undergo extrinsic parameter calibration, that is, determination of the conversion parameters between the world coordinate system and the camera coordinate system of the imaging device, before it can support autonomous driving or parking.

Currently, extrinsic calibration of a vehicle-mounted imaging device is usually achieved by calibrating the relationship between a vehicle-mounted lidar and the imaging device, while an imaging device outside the vehicle can complete extrinsic calibration based on Global Positioning System (GPS) measurements where GPS signal coverage is available. However, in some scenarios, such as underground parking garages and tunnels, there may be no GPS signal and no lidar sensor installed, so extrinsic calibration of imaging devices in such scenarios is difficult to achieve.
Summary of the invention
Embodiments of the present disclosure relate to a technical solution for determining a coordinate system conversion parameter of an imaging device.
In a first aspect of the present disclosure, a method of determining a coordinate system conversion parameter of an imaging device is provided. The method includes obtaining an initial value of the coordinate system conversion parameter of the imaging device, the coordinate system conversion parameter being used to convert a world coordinate system to a device coordinate system of the imaging device. The method further includes obtaining a reflectance map of the imaging region of the imaging device, the reflectance map having an abscissa axis and an ordinate axis coinciding with those of the world coordinate system, the coordinate points of the reflectance map recording reflection intensities associated with at least one reflection point in the imaging region, the at least one reflection point being formed by objects in the imaging region reflecting detection light and having the same abscissa and ordinate in the world coordinate system. The method further includes updating the initial value of the coordinate system conversion parameter based on the reflectance map to obtain a target value of the coordinate system conversion parameter.
In a second aspect of the present disclosure, an apparatus for determining a coordinate system conversion parameter of an imaging device is provided. The apparatus includes an initial value obtaining module configured to obtain an initial value of the coordinate system conversion parameter of the imaging device, the coordinate system conversion parameter being used to convert a world coordinate system to a device coordinate system of the imaging device. The apparatus further includes a reflectance map obtaining module configured to obtain a reflectance map of the imaging region of the imaging device, the reflectance map having an abscissa axis and an ordinate axis coinciding with those of the world coordinate system, the coordinate points of the reflectance map recording reflection intensities associated with at least one reflection point in the imaging region, the at least one reflection point being formed by objects in the imaging region reflecting detection light and having the same abscissa and ordinate in the world coordinate system. The apparatus further includes an initial value updating module configured to update the initial value of the coordinate system conversion parameter based on the reflectance map to obtain a target value of the coordinate system conversion parameter.
In a third aspect of the present disclosure, an electronic device is provided. The electronic device includes one or more processors and a storage device for storing one or more programs. The one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of the first aspect.

In a fourth aspect of the present disclosure, a computer-readable storage medium is provided, on which a computer program is stored; the computer program, when executed by a processor, implements the method of the first aspect.
It should be appreciated that the content described in this Summary is not intended to identify key or essential features of embodiments of the present disclosure, nor to limit the scope of the present disclosure. Other features of the present disclosure will become readily understood from the description below.
Detailed description of the invention
The above and other objects, features, and advantages of embodiments of the present disclosure will become readily understood by reading the following detailed description with reference to the accompanying drawings. In the drawings, several embodiments of the present disclosure are shown by way of example rather than limitation, in which:

Fig. 1 shows a schematic diagram of an example environment in which some embodiments of the present disclosure can be implemented;

Fig. 2 shows a schematic flowchart of an example method of determining a coordinate system conversion parameter of an imaging device according to an embodiment of the present disclosure;

Fig. 3 shows a schematic flowchart of an example method of obtaining an initial value of the coordinate system conversion parameter according to an embodiment of the present disclosure;

Fig. 4 shows a schematic flowchart of an example method of updating the initial value of the coordinate system conversion parameter according to an embodiment of the present disclosure;

Fig. 5 shows a schematic flowchart of an example method of determining the difference of a given point between a projected image and a captured image according to an embodiment of the present disclosure;

Fig. 6 shows a schematic block diagram of an apparatus for determining a coordinate system conversion parameter of an imaging device according to an embodiment of the present disclosure; and

Fig. 7 shows a schematic block diagram of a device that can be used to implement embodiments of the present disclosure.

Throughout the drawings, the same or similar reference numerals are used to denote the same or similar components.
Specific embodiment
The principles and spirit of the present disclosure are described below with reference to several exemplary embodiments shown in the accompanying drawings. It should be understood that these specific embodiments are described only to enable those skilled in the art to better understand and implement the present disclosure, and not to limit the scope of the present disclosure in any way.
As used herein, the term "coordinate system conversion parameter" may be, for example, a parameter required to convert between a camera coordinate system, an image coordinate system, a pixel coordinate system, and a world coordinate system, such as a translation matrix, a rotation matrix, and so on. In the context of the present disclosure, a world coordinate system may refer to a reference coordinate system whose range covers the whole globe and which may be used, for example, to assist the autonomous driving or autonomous parking of a vehicle; examples include the UTM coordinate system, a latitude-longitude coordinate system, and so on. The origin of the camera coordinate system may be located at the optical center of the imaging device, its vertical axis (z-axis) may coincide with the optical axis of the imaging device, and its horizontal axis (x-axis) and longitudinal axis (y-axis) may be parallel to the imaging plane. In the context of the present disclosure, the camera coordinate system is also referred to as the imaging device coordinate system, or simply the device coordinate system. The origin of the pixel coordinate system may be the upper-left corner of the image, its horizontal and vertical axes may be the pixel rows and pixel columns of the image, respectively, and its unit may be the pixel. The origin of the image coordinate system may be at the center of the image (i.e., the midpoint of the pixel coordinate system), its horizontal and vertical axes parallel to those of the pixel coordinate system, and its unit may be the millimeter. It will be understood, however, that in other embodiments these coordinate systems may also be defined in other reasonable manners accepted in the art.
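Although the patent itself contains no code, the chain of coordinate systems described above can be illustrated with a minimal sketch. All numeric values below (camera pose, focal length, principal point) are hypothetical, chosen only to show how a world point passes through the camera coordinate system into pixel coordinates under the usual pinhole model.

```python
import numpy as np

def world_to_pixel(p_world, R, t, K):
    """Project a 3D world point into pixel coordinates.

    p_world: (3,) point in the world coordinate system
    R, t:    extrinsic parameters (rotation matrix and translation vector)
             converting world coordinates to camera coordinates
    K:       3x3 camera intrinsic matrix
    """
    p_cam = R @ p_world + t   # world -> camera (device) coordinate system
    uvw = K @ p_cam           # camera -> homogeneous pixel coordinates
    return uvw[:2] / uvw[2]   # perspective division -> (u, v) in pixels

# Illustrative values only: identity rotation, camera 5 m in front of the
# world origin along z, 1000 px focal length, principal point (640, 360).
R = np.eye(3)
t = np.array([0.0, 0.0, 5.0])
K = np.array([[1000.0,    0.0, 640.0],
              [   0.0, 1000.0, 360.0],
              [   0.0,    0.0,   1.0]])
print(world_to_pixel(np.array([1.0, 2.0, 5.0]), R, t, K))  # -> [740. 560.]
```

The extrinsics (R, t) here are exactly the "coordinate system conversion parameter" the disclosure calibrates; the intrinsics K are assumed known.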
In embodiments of the present disclosure, "coordinate system conversion parameter" may include or refer to what is called in the field of camera calibration the "extrinsics," "extrinsic parameters," "external parameters," "extrinsic parameter matrix," and so on. In general, "extrinsic parameters" may refer to the conversion parameters between the camera coordinate system associated with a particular imaging device and a world coordinate system (for example, the UTM coordinate system). "Extrinsic calibration" may refer to the determination of the conversion parameters between the camera coordinate system and the world coordinate system. Therefore, in the description of embodiments of the present disclosure, the term "extrinsic parameter" may for convenience be used interchangeably with the term "coordinate system conversion parameter."
As noted above, in some scenarios such as underground parking garages and tunnels, there may be no GPS signal and no lidar sensor installed, so extrinsic calibration of imaging devices in such scenarios is difficult to achieve. However, only after the extrinsics of an imaging device have been obtained can the device be put to good use in assisting the autonomous driving or autonomous parking of vehicles, for example in executing algorithms that recover three dimensions (3D) from monocular vision. Therefore, a more widely applicable extrinsic calibration method is needed to obtain the conversion relationship between the camera coordinate system of an imaging device and the world coordinate system.
In view of the above problem in traditional schemes, and of other potential problems, embodiments of the present disclosure propose a method, apparatus, electronic device, and computer-readable storage medium for determining the coordinate system conversion parameter of an imaging device, so as to provide a more widely applicable extrinsic calibration method. It should be noted that although embodiments of the present disclosure are suitable for scenarios without GPS signals or lidar sensors, they are not limited to such scenarios, but are equally applicable to scenarios in which GPS signals and lidar sensors are available.
Compared with traditional extrinsic calibration methods for imaging devices, embodiments of the present disclosure obtain the extrinsics converting between the camera coordinate system of an imaging device and a world coordinate system without GPS signals or field-side lidar sensors. Calibration according to embodiments of the present disclosure is simple to operate and efficient, with a mean pixel error of no more than 2 pixels, providing an accuracy guarantee for roadside perception. Furthermore, since embodiments of the present disclosure do not depend on positioning signals or radar sensors, they improve the flexibility and universality of parameter calibration for imaging devices. Several embodiments of the present disclosure are described below with reference to the drawings.
Fig. 1 shows a schematic diagram of an example environment 100 in which some embodiments of the present disclosure can be implemented. As shown in Fig. 1, the example environment 100 schematically depicts a parking lot scene. Specifically, the depicted parking lot is provided with multiple parking spaces, for example, the space "CW185" indicated by the space number 108. In addition, lane lines 101, guide markings 104, parking space lines 106, and so on are painted on the ground of the parking lot. It should be appreciated that the facilities and markings depicted in Fig. 1 are only examples; other parking lots will have different or additional facilities and markings, and embodiments of the present disclosure are not limited in this respect. It should further be understood that embodiments of the present disclosure are not limited to the parking lot scene depicted in Fig. 1, but are generally applicable to any scene associated with autonomous driving or autonomous parking. More generally, embodiments of the present disclosure are also applicable to imaging devices of any purpose, and are not limited to imaging devices that assist autonomous driving or autonomous parking.
In the example of Fig. 1, multiple vehicles 110-1 to 110-5 (collectively referred to hereinafter as vehicles 110) are parked in corresponding spaces. A vehicle 110 may be any kind of vehicle that can carry people and/or objects and is moved by a power system such as an engine, including but not limited to a car, truck, bus, electric vehicle, motorcycle, recreational vehicle, train, and so on. One or more vehicles 110 in the example environment 100 may be vehicles with autonomous driving or autonomous parking capability; such vehicles are also referred to as autonomous vehicles. Of course, one or some of the vehicles 110 in the example environment 100 may also be vehicles without autonomous driving or autonomous parking capability.
An imaging device (also referred to as an imaging sensor) 105 is also provided in the example environment 100 for capturing images of the environment. In the context of the present disclosure, an imaging device refers generically to any device with an imaging function, including vehicle-mounted imaging devices, imaging devices outside the vehicle, and imaging devices of any purpose. Such imaging devices include but are not limited to cameras, video cameras, camcorders, surveillance probes, dash cameras, mobile devices with photo or video functions, and so on. In some embodiments, the imaging device 105 may be independent of the vehicles 110 and used to monitor the conditions of the example environment 100 so as to obtain perception information about the example environment 100, thereby assisting the autonomous driving or autonomous parking of the vehicles 110. To reduce occlusion, the imaging device 105 may be placed at a high position in the example environment 100, for example, mounted high on a fixed pole or wall, so as to better monitor the example environment 100.
Although only one imaging device 105 is shown in Fig. 1, it will be understood that multiple imaging devices may be deployed in the various regions of the example environment 100. In some embodiments, in addition to the imaging device 105 fixed at a specific position, movable or rotatable imaging devices may also be provided in the example environment 100. Furthermore, although the imaging device 105 in Fig. 1 is depicted as being arranged outside the vehicles 110, it will be understood that embodiments of the present disclosure are equally applicable to imaging devices arranged on a vehicle 110, that is, vehicle-mounted imaging devices. As shown, the imaging device 105 is communicatively (for example, by wire or wirelessly) connected to a computing device 120. When extrinsic calibration of the imaging device 105 is performed, the position and/or orientation information of the imaging device 105 and the image data it captures may be provided to the computing device 120 for use in determining the coordinate system conversion parameter of the imaging device 105. In addition, the computing device 120 may also send various control signals to the imaging device 105 to control its operations, for example, controlling the imaging device 105 to capture images, move, or rotate.
It will be understood that the computing device 120 may be any kind of mobile, fixed, or portable terminal, including a mobile phone, station, unit, device, multimedia computer, multimedia tablet, Internet node, communicator, desktop computer, laptop computer, notebook computer, netbook computer, tablet computer, personal communication system (PCS) device, personal navigation device, personal digital assistant (PDA), audio/video player, digital camera/camcorder, positioning device, television receiver, radio broadcast receiver, electronic book device, gaming device, or any combination thereof, including the accessories and peripherals of these devices or any combination thereof. It is also contemplated that the computing device 120 can support any kind of user interface (such as "wearable" circuitry). More generally, the computing device 120 may be any server or client device that can be used to determine the coordinate system conversion parameter of an imaging device. An example method of determining the coordinate system conversion parameter of an imaging device according to an embodiment of the present disclosure is described below with reference to Fig. 2.
Fig. 2 shows a schematic flowchart of an example method 200 of determining the coordinate system conversion parameter of the imaging device 105 according to an embodiment of the present disclosure. In some embodiments, the example method 200 may be implemented by the computing device 120 in Fig. 1, for example by a processor or processing unit of the computing device 120. In other embodiments, all or part of the example method 200 may also be implemented by a computing device independent of the example environment 100, or by other units in the example environment 100. For ease of discussion, the example method 200 will be described in conjunction with Fig. 1.
As mentioned above, the imaging device 105 in the example environment 100 may be used to assist the vehicles 110 in parking autonomously or driving autonomously in a parking lot. More generally, in autonomous driving scenarios on traffic routes, imaging devices outside the vehicle may similarly assist the autonomous driving of vehicles. Because a vehicle 110 usually parks or drives autonomously with reference to a world coordinate system, in order to assist the autonomous parking or driving of the vehicles 110, the extrinsic parameters of the imaging device 105 may need to be calibrated, that is, the coordinate system conversion parameter between the camera coordinate system of the imaging device 105 and the world coordinate system must be determined. With the determined coordinate system conversion parameter, the computing device 120 can use the image information captured by the imaging device 105 to assist the vehicles 110 in autonomous driving or autonomous parking.
Accordingly, at 210, the computing device 120 obtains the initial value of the coordinate system conversion parameter of the imaging device 105. As pointed out above, the extrinsic parameters of the imaging device 105 are used to convert between its camera coordinate system and the world coordinate system, so they can be calculated from the position and orientation of the imaging device 105 in the world coordinate system. In other words, to obtain the initial value of the extrinsic parameters of the imaging device 105, the computing device 120 may first determine the position and orientation of the imaging device 105 in the world coordinate system. The determination of the position and of the orientation are described in turn below.
The computing device 120 may use any suitable method to determine the position of the imaging device 105 in the world coordinate system, namely its coordinates in the world coordinate system. For example, the computing device 120 may obtain location information of the imaging device 105 determined by measurement with a handheld GPS device. As another example, the computing device 120 may obtain location information of the imaging device 105 determined by measurement with a total station. In some embodiments, the computing device 120 may obtain location information of the imaging device 105 input or estimated by a user. It will be appreciated that the location information of the imaging device 105 may be obtained by any other suitable device or method; embodiments of the present disclosure impose no restriction in this respect. In addition, it should be understood that the location information of the imaging device 105 may be in any form suitable for the coordinate system conversion parameter. An example process of obtaining the location information of the imaging device 105 is described below in conjunction with Fig. 3.
Fig. 3 shows a schematic flowchart of an example method 300 of obtaining the initial value of the coordinate system conversion parameter according to an embodiment of the present disclosure. In some embodiments, the example method 300 may be implemented by the computing device 120 in Fig. 1, for example by a processor or processing unit of the computing device 120. In other embodiments, all or part of the example method 300 may also be implemented by a computing device independent of the example environment 100, or by other units in the example environment 100. Through the example method 300, the initial value of the translation vector of the extrinsic parameters of the imaging device 105 can be computed effectively, regardless of whether GPS signals are available.
At 310, the computing device 120 may obtain the reference coordinates of a reference point in the world coordinate system. The reference point may be any point in the world coordinate system whose coordinates are convenient to determine. Since the imaging device 105 may be located in a region without GPS signals, the coordinates of the reference point may be determined in a region where GPS signals are available. The positional relationship between the imaging device 105 and the reference point can then be measured using, for example, a total station; it will be understood that other measuring tools are also feasible. In some embodiments, the origin of the total station may be set at the reference point. In other words, the coordinates of the total station's origin in the world coordinate system can be measured, and these coordinates may be denoted a.
At 320, the computing device 120 may obtain the positional relationship between the optical center of the imaging device 105 and the reference point. As pointed out above, the origin of the camera coordinate system of the imaging device 105 is located at its optical center, so the coordinates of the optical center in the world coordinate system determine the translation vector in the extrinsic parameters of the imaging device 105. For example, in the total station example, the total station may measure segment by segment starting from its origin until the optical center of the imaging device 105 is reached. The transformation of each measured segment may be denoted Ti, and the coordinates of the optical center of the imaging device 105 can finally be computed as: ACenter = Ti * … * T1 * a.
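The chained computation ACenter = Ti * … * T1 * a can be sketched as follows with homogeneous transforms. The reference coordinates and per-segment transforms below are invented for illustration and are not measurements from the patent.

```python
import numpy as np

def segment_transform(R, t):
    """Build a 4x4 homogeneous transform for one total-station measurement segment."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def optical_center(a, segments):
    """Chain the per-segment transforms: A_center = T_i * ... * T_1 * a."""
    p = np.append(a, 1.0)  # homogeneous coordinates of the reference point
    for T in segments:     # T_1 is applied first, then T_2, and so on
        p = T @ p
    return p[:3]

# Hypothetical values: a UTM-like reference coordinate a and two
# pure-translation segments leading to the camera's optical center.
a = np.array([500000.0, 4400000.0, 10.0])
T1 = segment_transform(np.eye(3), np.array([12.0, -3.0, 0.0]))
T2 = segment_transform(np.eye(3), np.array([0.5, 0.2, 4.0]))
print(optical_center(a, [T1, T2]))  # -> [500012.5 4399997.2 14.0]
```

In practice each Ti would also carry the rotation between consecutive total-station setups; the identity rotations here keep the arithmetic easy to follow.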
At 330, the computing device 120 may determine the initial value of the translation vector in the coordinate system conversion parameter based on the reference coordinates of the reference point and its positional relationship with the optical center of the imaging device 105. The optical center of the imaging device 105 is the origin of the camera coordinate system, so the translation vector of the extrinsics can be determined from the translation between the coordinates of the optical center of the imaging device 105 in the world coordinate system and the origin of the world coordinate system. Therefore, based on the positional relationship between the optical center of the imaging device 105 and the reference point, and on the coordinates of the reference point in the world coordinate system, the computing device 120 can derive the coordinates of the optical center of the imaging device 105 in the world coordinate system, and thereby obtain the initial value of the translation vector of the extrinsic parameters of the imaging device 105.
In addition to the translation vector, the extrinsic parameters of the imaging device 105 also include a rotation matrix, which characterizes the rotational relationship between the camera coordinate system of the imaging device 105 and the world coordinate system. Therefore, in order to obtain the initial value of the extrinsic parameters of the imaging device 105, the computing device 120 also needs to determine the initial value of the rotation matrix. In some embodiments, the computing device 120 may obtain the angles formed by the imaging device 105 with the due-east direction, the due-north direction, and the sky direction, so as to obtain the initial value of the rotation matrix in the extrinsics. Since these angles do not need to be measured with high precision, the measurement may be completed with, for example, a bevel protractor. In this way, the initial value of the rotation matrix of the extrinsic parameters of the imaging device 105 can be obtained by simple measurement, without using other sophisticated sensors.
It will be appreciated that the orientation information may also be obtained by any other suitable device or method; embodiments of the present disclosure impose no restriction in this respect. In addition, the orientation information may be in any form suitable for the coordinate system conversion parameter. Having determined the initial value of the translation vector and the initial value of the rotation matrix, the computing device 120 obtains the initial value of the coordinate system conversion parameter of the imaging device 105.
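One way to turn coarse angle readings against the east, north, and sky directions into an initial rotation matrix is to compose elementary rotations. The patent does not specify a convention, so the yaw/pitch/roll composition and the sample angles below are assumptions chosen for illustration.

```python
import numpy as np

def rotation_from_enu_angles(yaw_deg, pitch_deg, roll_deg):
    """Initial rotation-matrix guess from coarse angles measured against
    the east/north/up (ENU) axes, composed as Rz(yaw) @ Ry(pitch) @ Rx(roll)."""
    y, p, r = np.radians([yaw_deg, pitch_deg, roll_deg])
    Rz = np.array([[np.cos(y), -np.sin(y), 0.0],
                   [np.sin(y),  np.cos(y), 0.0],
                   [0.0, 0.0, 1.0]])
    Ry = np.array([[ np.cos(p), 0.0, np.sin(p)],
                   [0.0, 1.0, 0.0],
                   [-np.sin(p), 0.0, np.cos(p)]])
    Rx = np.array([[1.0, 0.0, 0.0],
                   [0.0, np.cos(r), -np.sin(r)],
                   [0.0, np.sin(r),  np.cos(r)]])
    return Rz @ Ry @ Rx

# Coarse protractor readings (hypothetical): camera faces east, tilted down 30 deg.
R0 = rotation_from_enu_angles(90.0, -30.0, 0.0)
print(np.allclose(R0 @ R0.T, np.eye(3)))  # a valid rotation matrix is orthonormal
```

Because this R0 is only an initial value, the low precision of a protractor is acceptable; the subsequent reflectance-map optimization refines it.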
Referring back to Fig. 2, at 220, the computing device 120 obtains the reflectance map of the imaging region of the imaging device 105. In the context of the present disclosure, a reflectance map refers to a map of the reflection-value information of the reflection points formed when objects reflect detection light, and it can be made from a lidar point cloud. Specifically, the laser emitted by a lidar travels through the air to the ground or to an object surface and is reflected by it, and the energy value of the reflected laser can be recorded. Each point in a lidar point cloud therefore has, in addition to its location information (for example, abscissa x, ordinate y, and vertical coordinate z), an intensity value of the reflection (which may be denoted i). The production of the reflectance map can be based on these four variables x, y, z, and i of the point cloud.
Thus, the reflectance map of the imaging region of the imaging device 105 can be made from the laser reflection point cloud that a lidar acquires from the objects in the imaging region of the imaging device 105. For example, a collection vehicle equipped with a lidar may acquire the point cloud of the imaging region of the imaging device 105. A point cloud map can then be made from the raw laser point cloud using a simultaneous localization and mapping (SLAM) method. Then, removing the vertical-axis (z-axis) dimension from the point cloud map yields a two-dimensional image whose abscissa is the abscissa x of the point cloud, whose ordinate is the ordinate y of the point cloud, and whose pixel content is the reflection intensity i of the point cloud; this two-dimensional image is the reflectance map.
It can thus be seen that the reflectance map has an abscissa axis and an ordinate axis coinciding with those of the world coordinate system, and that its coordinate points record reflection intensities associated with one or more reflection points in the imaging region of the imaging device 105, the one or more reflection points being formed by objects in the imaging region reflecting detection light and having the same abscissa and ordinate in the world coordinate system. In other words, multiple reflection points with the same abscissa and ordinate in the laser point cloud map may correspond to the same coordinate point in the reflectance map, and the reflection value of that coordinate point may, for example, be the sum of the reflection values of those multiple reflection points.
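The drop-z-and-sum construction described above can be sketched as follows. The grid resolution and the toy point cloud are hypothetical; the point of the sketch is that points sharing an (x, y) cell have their intensities accumulated into one map pixel.

```python
import numpy as np

def reflectance_map(points, resolution=0.1):
    """Flatten an (N, 4) point cloud of (x, y, z, i) rows into a 2D grid:
    drop z, bin x/y at the given resolution (in meters per cell), and sum
    the intensities of points that fall in the same (x, y) cell."""
    xy = np.floor(points[:, :2] / resolution).astype(int)
    xy -= xy.min(axis=0)                 # shift so grid indices start at 0
    w, h = xy.max(axis=0) + 1
    grid = np.zeros((h, w))
    np.add.at(grid, (xy[:, 1], xy[:, 0]), points[:, 3])  # accumulate intensity i
    return grid

# Toy cloud: the first two points share one 0.1 m cell, the third is elsewhere.
pts = np.array([[0.00, 0.00, 1.0, 5.0],
                [0.05, 0.05, 3.0, 2.0],
                [0.35, 0.00, 0.5, 4.0]])
m = reflectance_map(pts)
print(m[0, 0], m[0, 3])  # 7.0 (5 + 2 summed), 4.0
```

Note that the z values (1.0, 3.0, 0.5) play no role in the output, mirroring the removal of the vertical dimension in the construction above.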
Several example methods of making the reflectance map from the laser point cloud map are described below, in which it will be assumed that the computing device 120 executing the example method 200 also makes the reflectance map. However, it should be understood that in other embodiments the reflectance map may be made by another computing device, and the computing device 120 may then execute the example method 200 using the already completed reflectance map.
As a first example production method, the computing device 120 may first select, from the laser point clouds collected in the region corresponding to the reflected value map to be built, the laser point clouds for constructing the reflected value map, and select sample-frame laser point clouds from the laser point clouds for constructing the reflected value map. The computing device 120 may then select key-frame laser point clouds from the sample-frame laser point clouds and, based on the adjustment amount corresponding to each key-frame laser point cloud, determine the optimal key-frame laser point cloud. The adjustment amount is determined based on the movement amount between the position to which the center point of the lidar corresponding to a key-frame laser point cloud is spliced when that key-frame laser point cloud is stitched to the other key-frame laser point clouds, and the position of the center point of the lidar corresponding to that key-frame laser point cloud.
The computing device 120 may then perform global pose optimization on the laser point clouds for constructing the reflected value map other than the optimal key-frame laser point cloud, to obtain, for the lidar center point corresponding to each frame of laser point cloud for constructing the reflected value map, the position and attitude angle for constructing the reflected value map. Finally, the computing device 120 may construct the reflected value map based on the positions and attitude angles, for constructing the reflected value map, of the lidar center points corresponding to the frames of laser point clouds.
As another example production method, the computing device 120 may first select, respectively from the laser point clouds collected in each collection region of the region corresponding to the reflected value map to be built, the laser point clouds for constructing the reflected value map, and select sample-frame laser point clouds respectively from the laser point clouds for constructing the reflected value map collected in each collection region. The computing device 120 may then select key-frame laser point clouds respectively from the sample-frame laser point clouds collected in each collection region and, based respectively on the adjustment amounts corresponding to the key-frame laser point clouds collected in each collection region, determine the optimal key-frame laser point cloud collected in each collection region. The adjustment amount is determined based on the movement amount between the position to which the center point of the lidar corresponding to a key-frame laser point cloud is spliced when that key-frame laser point cloud is stitched to the other key-frame laser point clouds, and the position of the center point of the lidar corresponding to that key-frame laser point cloud.
The computing device 120 may then perform global pose optimization, respectively for each collection region, on the laser point clouds other than the key-frame laser point clouds among the laser point clouds collected in that region for constructing the reflected value map, to obtain, for the lidar center point corresponding to each frame of laser point cloud collected in each collection region for constructing the reflected value map, the position and attitude angle for constructing the reflected value map. Finally, the computing device 120 may construct the reflected value map based on these positions and attitude angles. It will be understood that, in addition to the example production methods provided herein, the computing device 120 may also make the reflected value map from the laser point cloud map in any suitable manner.
At 230, after the initial value of the external parameter of the imaging device 105 and the reflected value map have been obtained, the computing device 120 updates the initial value of the coordinate system conversion parameter based on the reflected value map, to obtain the target value of the coordinate system conversion parameter. For example, the initial value of the external parameter obtained by measurement means may be inaccurate, and therefore needs to be optimized (or refined in accuracy) so that the imaging device 105 can be better applied to assisted automatic driving, autonomous parking, and the like. As described above, the reflected value map records the coordinates, in the world coordinate system, of the reflection points in the imaging region of the imaging device 105. The coordinate information in the reflected value map can therefore be used to optimize the initial value of the external parameter of the imaging device 105, thereby obtaining the target value of the external parameter. It will be understood that the computing device 120 may use the coordinate information in the reflected value map to optimize the initial value of the external parameter in any appropriate manner, for example, by associating the coordinates in the reflected value map with the pixels in the image captured by the imaging device 105. An example of the initial value optimization process is described below in conjunction with Fig. 4.
Fig. 4 shows a schematic flowchart of an example method 400 of updating the initial value of the coordinate system conversion parameter according to an embodiment of the present disclosure. In some embodiments, the example method 400 may be implemented by the computing device 120 in Fig. 1, for example by a processor or processing unit of the computing device 120. In other embodiments, all or part of the example method 400 may also be implemented by a computing device independent of the example environment 100, or by other units in the example environment 100. Through the example method 400, the external parameter of the imaging device 105 can be adjusted to be more accurate, so that the imaging device 105 can be used for assisted automatic driving or autonomous parking.
At 410, the computing device 120 projects the reflected value map into the pixel coordinate system of the imaging device 105 based on the initial value of the external parameter of the imaging device 105, to obtain a first image of the imaging region of the imaging device 105. As noted above, the external parameter of the imaging device 105 can implement the conversion between three-dimensional coordinates in the world coordinate system and three-dimensional coordinates in the camera coordinate system of the imaging device 105. Further, using the internal parameter and/or other parameters of the imaging device 105, the three-dimensional coordinates of the camera coordinate system can be converted into two-dimensional pixel coordinates in the pixel coordinate system of the imaging device 105. It should be noted that, since the present disclosure mainly focuses on the determination of the external parameter of the imaging device 105, the internal parameter or other parameters of the imaging device 105 may be assumed to be known.
Therefore, by means of the previously obtained initial value of the external parameter, the computing device 120 can project the reflected value map into the pixel coordinate system of the imaging device 105, in order to compare it with the image captured by the imaging device 105 and judge whether the external parameter is accurate enough. As noted above, the coordinate points of the reflected value map have abscissas and ordinates in the world coordinate system. During projection, the missing third-dimension (z-axis) coordinate of a coordinate point can be looked up in reverse through the abscissa and ordinate of the coordinate point. However, the same abscissa and ordinate may correspond to multiple vertical coordinates in the reflected value map, so the vertical coordinate may not be uniquely determined by the abscissa and ordinate. In this respect, it is noted that the imaging device 105 is typically mounted at a higher position in the example environment 100, so the image captured by the imaging device 105 may be regarded as a kind of top view. In this case, where multiple vertical coordinates correspond to a specific abscissa and a specific ordinate, the smallest vertical coordinate may be chosen among the multiple vertical coordinates as the vertical coordinate value corresponding to that abscissa and that ordinate.
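As an illustration of this projection step (a sketch under assumed values, not the patent's implementation; the pinhole intrinsics fx, fy, cx, cy and all coordinates are made up), one map cell with several candidate vertical coordinates can be resolved and projected as follows:

```python
def project(world_pt, R, t, fx, fy, cx, cy):
    """Pinhole projection: Xc = R @ Xw + t, then u = fx*Xc/Zc + cx, etc.
    R is a 3x3 nested list (extrinsic rotation), t a 3-vector."""
    xc = sum(R[0][k] * world_pt[k] for k in range(3)) + t[0]
    yc = sum(R[1][k] * world_pt[k] for k in range(3)) + t[1]
    zc = sum(R[2][k] * world_pt[k] for k in range(3)) + t[2]
    return fx * xc / zc + cx, fy * yc / zc + cy

# one (x, y) cell of the map with several candidate vertical coordinates:
candidates = [4.0, 1.5, 2.5]
z = min(candidates)  # the smallest vertical coordinate is chosen

R_identity = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
u, v = project((2.0, 1.0, z), R_identity, (0.0, 0.0, 10.0),
               500.0, 500.0, 320.0, 240.0)
# with zc = 1.5 + 10 = 11.5: u = 500*2/11.5 + 320, v = 500*1/11.5 + 240
```

The identity extrinsic rotation is of course only for illustration; in the method described here, R and t are exactly the quantities being refined.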
At 420, the computing device 120 captures a second image of the imaging region by means of the imaging device 105, for comparing or matching against the first image obtained by projecting the reflected value map, so as to judge whether the initial value of the external parameter is accurate enough. In some embodiments, this comparison may be carried out through corresponding pixel points in the first image and the second image. Accordingly, at 430, the computing device 120 determines the difference between the first pixel coordinates, in the first image, of a given point in the imaging region of the imaging device 105 and the second pixel coordinates of the given point in the second image. As noted above, since the initial value of the external parameter of the imaging device 105 may be inaccurate, there may be a deviation between the first image obtained by projecting the reflected value map according to the initial value and the second image directly captured by the imaging device 105. The deviation can be measured by the difference between the pixel coordinates of the given point in the two images.
For example, the given point may be an edge point or boundary point of an object in the imaging region of the imaging device 105, that is, a point with a significant difference from its surroundings, such as an edge point of a pillar in a parking lot, an edge point of a road sign on a road, and so on. More generally, the given point may also be any point with prominent features in the imaging region. In some embodiments, the computing device 120 may extract sets of feature points from the first image and the second image respectively to calculate the above difference, thereby improving the accuracy of the calculation. This is described in detail below with reference to Fig. 5.
Fig. 5 shows a schematic flowchart of an example method 500 of determining the difference of a given point between the projected image and the captured image according to an embodiment of the present disclosure. In some embodiments, the example method 500 may be implemented by the computing device 120 in Fig. 1, for example by a processor or processing unit of the computing device 120. In other embodiments, all or part of the example method 500 may also be implemented by a computing device independent of the example environment 100, or by other units in the example environment 100. Through the example method 500, the difference between the projected image and the captured image can be determined more accurately, improving the accuracy of the finally determined value of the external parameter.
At 510, the computing device 120 may extract a first feature point set from the first image, in any suitable manner. In some embodiments, the computing device 120 may identify the first feature point set in the first image through a predetermined feature point recognition condition (for example, that the difference from surrounding pixel points exceeds a threshold). In other embodiments, the computing device 120 may extract the first feature point set through an existing image feature point extraction and matching algorithm, such as the ORB algorithm or the SIFT algorithm.
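One possible form of the "difference from surrounding pixels exceeds a threshold" recognition condition mentioned above can be sketched as follows. The 4-neighbour mean, the threshold value and the toy image are all assumptions for illustration; as the text notes, a real system would more likely use ORB or SIFT:

```python
def feature_points(img, thresh):
    """img: 2-D list of grayscale values.  An interior pixel is treated
    as a feature point when it differs from the mean of its four
    neighbours by more than `thresh`."""
    h, w = len(img), len(img[0])
    pts = []
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            neigh_mean = (img[r - 1][c] + img[r + 1][c] +
                          img[r][c - 1] + img[r][c + 1]) / 4.0
            if abs(img[r][c] - neigh_mean) > thresh:
                pts.append((r, c))
    return pts

img = [[10, 10, 10, 10],
       [10, 90, 10, 10],
       [10, 10, 10, 10],
       [10, 10, 10, 10]]
# only pixel (1, 1) differs from its neighbourhood mean by more than 30
```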
At 520, the computing device 120 may extract, from the second image, a second feature point set corresponding to the first feature point set, in any appropriate manner. For example, the computing device 120 may directly overlay and compare the first image with the second image, so as to determine the second feature point set in the second image corresponding to the first feature point set. As another example, the computing device 120 may identify the second feature point set in the second image through the same feature point recognition condition, in which case the identified feature point set may be regarded as corresponding to the first feature point set. As a further example, the computing device 120 may determine the second feature point set corresponding to the first feature point set using an existing image feature point extraction and matching algorithm, such as the ORB algorithm or the SIFT algorithm.
At 530, the computing device 120 may calculate the sum of the pixel distances between the corresponding points in the first feature point set and the second feature point set. This sum can reflect the size of the deviation between the first image and the second image and, further, the accuracy of the initial value of the external parameter of the imaging device 105. In some embodiments, in order to obtain the above sum of pixel distances, for each given feature point in the first feature point set, the computing device 120 may determine, in the second feature point set, the corresponding feature point that is nearest in distance to the given feature point, that is, determine the feature point corresponding to the given feature point. The computing device 120 may then calculate the pixel distance between the given feature point and the corresponding feature point, which distance may be represented in pixel coordinates. The computing device 120 may then add up the pixel distances between all pairs of matched feature points to obtain the sum of pixel distances. In this manner, the corresponding feature points in the second image can be determined efficiently and accurately, reducing the amount of computation of the sum of pixel distances.
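The nearest-neighbour matching and summation just described can be sketched as follows. This is illustrative only; the point sets are invented, and in the method real feature points would come from the two images:

```python
import math

def sum_of_pixel_distances(first_set, second_set):
    """For each point of the first feature point set, find the nearest
    point of the second set and accumulate the pixel distances."""
    total = 0.0
    for given in first_set:
        nearest = min(second_set, key=lambda q: math.dist(given, q))
        total += math.dist(given, nearest)
    return total

first = [(0.0, 0.0), (10.0, 0.0)]
second = [(3.0, 4.0), (10.0, 1.0)]  # offsets of 5 and 1 pixel
total = sum_of_pixel_distances(first, second)
# total == 6.0; this sum is what gets compared against the threshold
```

A brute-force nearest-neighbour search is quadratic in the number of points; a k-d tree would typically be used for larger feature sets.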
Referring back to Fig. 4, at 440, the computing device 120 adjusts the initial value of the coordinate system conversion parameter based on the difference between the first pixel coordinates and the second pixel coordinates of the given point. The coordinates of the coordinate points in the reflected value map may be regarded as accurate, so if the initial value of the external parameter of the imaging device 105 is sufficiently accurate, the pixel coordinate difference for the same given point between the first image obtained by projection based on that initial value and the second image directly captured by the imaging device 105 should be sufficiently small. Therefore, in some embodiments, if the difference is greater than a preset configurable threshold, this means that the initial value of the external parameter is not accurate enough. In this case, the computing device 120 may adjust the initial value of the external parameter of the imaging device 105 and project the reflected value map again to obtain the first image, so that the difference becomes smaller. In other words, the computing device 120 may adjust the value of the external parameter in an iterative manner, until the above difference is less than the preset configurable threshold.
For example, in the above-described example of extracting sets of feature points, if the sum of pixel distances between the corresponding points in the first feature point set and the second feature point set calculated by the computing device 120 is less than a preset configurable threshold, this means that the value of the external parameter of the imaging device 105 (which may have been adjusted iteratively multiple times from the initial value) is accurate enough. The computing device 120 may then determine the adjusted value of the coordinate system conversion parameter of the imaging device 105 as the target value of the coordinate system conversion parameter, that is, as the finally obtained value of the external parameter of the imaging device 105.
In order to better explain the above matching process of the feature point sets, it is described below by way of mathematical operations. It should be appreciated that the following mathematical description is merely exemplary and is not intended to limit the scope of the present disclosure in any way. In other embodiments, other mathematical descriptions may be used to describe the above process.
Specifically, the computing device 120 may first perform feature point extraction on the first image obtained by projecting the reflected value map and on the second image captured by the imaging device 105 respectively, extracting the first feature point set and the second feature point set in the two images. For example, the first and second feature point sets may correspond respectively to the corner points and clearly visible profile features of the objects imaged in the first and second images. Suppose the first feature point set is denoted P; the second feature point set p can then be constituted by finding, in the second image, the points nearest to the projected point set P. In addition, in the following derivation, the rotation matrix in the external parameter of the imaging device 105 is denoted R, and the translation vector is denoted t.
The computing device 120 may then define the error term e_i between the two feature point sets (each of which may be regarded as a point cloud), i.e., between the first feature point set P and the second feature point set p, which can be expressed by the following formula (1):
e_i = p_i - (R P_i + t)    (1)
The computing device 120 may then construct a least squares problem and solve for the R and t that minimize the sum of squared errors over the pixel pairs of the feature point sets, which can be expressed by the following formula (2), where i denotes the i-th feature point and n denotes the number of feature points:
min_{R,t} J = (1/2) Σ_{i=1}^{n} || p_i - (R P_i + t) ||²    (2)
It will be understood that the least squares method is merely an example; in other embodiments, any other existing or future-developed mathematical method with a similar function may be used for the computation here. To solve the above optimization problem, the computing device 120 may first define the centroids of the two feature point sets, expressed by the following formula (3):
p̄ = (1/n) Σ_{i=1}^{n} p_i,    P̄ = (1/n) Σ_{i=1}^{n} P_i    (3)
The objective function of formula (2) can therefore be converted into the following formula (4) (the cross term vanishes because the centered sums are zero):
min_{R,t} J = (1/2) Σ_{i=1}^{n} ( || p_i - p̄ - R (P_i - P̄) ||² + || p̄ - R P̄ - t ||² )    (4)
Let p_i - p̄ = q_i and P_i - P̄ = Q_i. Since the second term in formula (4) can be made 0 by choosing t = p̄ - R P̄, the objective function becomes the following formula (5):
min_{R} (1/2) Σ_{i=1}^{n} || q_i - R Q_i ||²    (5)
In the expansion of formula (5), the first term Σ q_iᵀ q_i is unrelated to R, and the term Σ Q_iᵀ Rᵀ R Q_i = Σ Q_iᵀ Q_i (since Rᵀ R = I) is also unrelated to R. The optimization function can therefore become the following formula (6):
R* = argmax_{R} Σ_{i=1}^{n} q_iᵀ R Q_i    (6)
The computing device 120 may then define W = Σ_{i=1}^{n} Q_i q_iᵀ, and perform singular value decomposition (SVD) on W to obtain W = U Σ Vᵀ; then R = V Uᵀ and t = p̄ - R P̄. The external parameter matrix of the imaging device 105 thus obtained can be expressed as the following formula (7):
T = [ R  t ; 0ᵀ  1 ]    (7)
The error term e is then recalculated with the T thus obtained. If e is less than a preset configurable threshold, or if the number of iterations reaches a preset threshold number, the iteration may stop; otherwise, the projection may be performed again with the newly obtained external parameter T to obtain the first image, the closest point set, i.e. the second feature point set p, is then recomputed in the second image, and the next iteration begins.
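The closed-form step of this derivation can be sketched as below. This is an illustrative reconstruction rather than the patent's code: it follows the convention above (W = Σ Q_i q_iᵀ, SVD W = U Σ Vᵀ, R = V Uᵀ, t = p̄ − R P̄) and verifies it on a synthetic rigid transform; the test points, angle and translation are assumptions.

```python
import numpy as np

def solve_rt(P, p):
    """P, p: (n, 3) arrays of matched points (first / second feature set).
    Returns R, t minimizing sum ||p_i - (R P_i + t)||^2 via SVD."""
    P_mean, p_mean = P.mean(axis=0), p.mean(axis=0)
    Q, q = P - P_mean, p - p_mean
    W = Q.T @ q                    # equals sum_i Q_i q_i^T
    U, _, Vt = np.linalg.svd(W)    # W = U S V^T
    R = Vt.T @ U.T                 # R = V U^T
    t = p_mean - R @ P_mean        # t = p_mean - R P_mean
    return R, t

# synthesise a ground-truth rigid transform and recover it
theta = np.pi / 6
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([1.0, -2.0, 0.5])
P = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
p = P @ R_true.T + t_true          # p_i = R_true P_i + t_true
R_est, t_est = solve_rt(P, p)
# R_est and t_est recover R_true and t_true up to numerical precision
```

In practice, when the matched points are noisy or degenerate the SVD step can return a reflection (det(R) = −1); implementations of this solver commonly flip the sign of the last column of V in that case.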
Fig. 6 shows a schematic block diagram of a device 600 for determining the coordinate system conversion parameter of an imaging device according to an embodiment of the present disclosure. In some embodiments, the device 600 may be included in the computing device 120 of Fig. 1 or implemented as the computing device 120.
As shown in Fig. 6, the device 600 includes an initial value obtaining module 610, a reflected value map obtaining module 620 and an initial value updating module 630. The initial value obtaining module 610 is configured to obtain the initial value of the coordinate system conversion parameter of the imaging device, the coordinate system conversion parameter being used to convert the world coordinate system into the device coordinate system of the imaging device. The reflected value map obtaining module 620 is configured to obtain the reflected value map of the imaging region of the imaging device, the reflected value map having an abscissa axis and an ordinate axis coinciding with the world coordinate system, the coordinate points of the reflected value map recording reflectance intensities associated with at least one reflection point in the imaging region, the at least one reflection point being formed by objects in the imaging region reflecting detection light and having the same abscissa and ordinate in the world coordinate system. The initial value updating module 630 is configured to update the initial value of the coordinate system conversion parameter based on the reflected value map, to obtain the target value of the coordinate system conversion parameter.
In some embodiments, the initial value updating module 630 may include a projection module, a capture module, a difference determining module and an initial value adjusting module. The projection module is configured to project the reflected value map into the pixel coordinate system of the imaging device based on the initial value, to obtain the first image of the imaging region. The capture module is configured to capture the second image of the imaging region by the imaging device. The difference determining module is configured to determine the difference between the first pixel coordinates, in the first image, of a given point in the imaging region and the second pixel coordinates of the given point in the second image. The initial value adjusting module is configured to adjust the initial value of the coordinate system conversion parameter based on the difference.
In some embodiments, the difference determining module may further include a first extraction module, a second extraction module and a sum calculating module. The first extraction module is configured to extract the first feature point set from the first image. The second extraction module is configured to extract, from the second image, the second feature point set corresponding to the first feature point set. The sum calculating module is configured to calculate the sum of the pixel distances between the corresponding points in the first feature point set and the second feature point set.
In some embodiments, for each given feature point in the first feature point set, the sum calculating module may further include a corresponding feature point determining module and a pixel distance calculating module. The corresponding feature point determining module is configured to determine, in the second feature point set, the corresponding feature point nearest in distance to the given feature point. The pixel distance calculating module is configured to calculate the pixel distance between the given feature point and the corresponding feature point.
In some embodiments, the device 600 may also include a target value determining module. The target value determining module is configured to determine, in response to the sum of pixel distances being less than a threshold, the adjusted value of the coordinate system conversion parameter as the target value.
In some embodiments, the initial value obtaining module 610 may include a reference coordinate obtaining module, a positional relationship obtaining module and a translation vector initial value determining module. The reference coordinate obtaining module is configured to obtain the reference coordinates of a reference point in the world coordinate system. The positional relationship obtaining module is configured to obtain the positional relationship between the optical center of the imaging device and the reference point. The translation vector initial value determining module is configured to determine the initial value of the translation vector in the coordinate system conversion parameter based on the reference coordinates and the positional relationship.
In some embodiments, the initial value obtaining module 610 may include a rotation matrix initial value determining module. The rotation matrix initial value determining module is configured to obtain the angles formed between the imaging device and the due east direction, the due north direction and the sky direction, to obtain the initial value of the rotation matrix in the coordinate system conversion parameter.
In some embodiments, the reflected value map obtaining module 620 may include a reflected value map making module. The reflected value map making module is configured to make the reflected value map using the reflection point cloud collected by the lidar in the imaging region.
Fig. 7 schematically shows a block diagram of a device 700 that can be used to implement embodiments of the present disclosure. As shown in Fig. 7, the device 700 includes a central processing unit (CPU) 701, which can perform various appropriate actions and processing according to computer program instructions stored in a read-only memory (ROM) 702 or computer program instructions loaded from a storage unit 708 into a random access memory (RAM) 703. The RAM 703 can also store the various programs and data required for the operation of the device 700. The CPU 701, the ROM 702 and the RAM 703 are connected to each other through a bus 704. An input/output (I/O) interface 705 is also connected to the bus 704.
Multiple components in the device 700 are connected to the I/O interface 705, including: an input unit 706, such as a keyboard or a mouse; an output unit 707, such as various types of displays and loudspeakers; a storage unit 708, such as a magnetic disk or an optical disc; and a communication unit 709, such as a network card, a modem or a wireless communication transceiver. The communication unit 709 allows the device 700 to exchange information/data with other devices through a computer network such as the Internet and/or various telecommunication networks.
Each of the processes and the processing described above, such as the example methods 200, 300, 400 and 500, may be executed by the processing unit 701. For example, in some embodiments, the example methods 200, 300, 400 and 500 may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as the storage unit 708. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 700 via the ROM 702 and/or the communication unit 709. When the computer program is loaded into the RAM 703 and executed by the CPU 701, one or more steps of the example methods 200, 300, 400 and 500 described above may be performed.
As used herein, the term "include" and its variants are to be understood as open-ended inclusion, i.e., "including but not limited to". The term "based on" is to be understood as "based at least in part on". The terms "one embodiment" or "an embodiment" are to be understood as "at least one embodiment". The terms "first", "second" and the like may refer to different or the same objects. Other explicit and implicit definitions may also be included herein.
As used herein, the term "determine" covers a wide variety of actions. For example, "determining" may include computing, calculating, processing, deriving, investigating, looking up (for example, looking up in a table, a database or another data structure) and ascertaining. In addition, "determining" may include receiving (for example, receiving information) and accessing (for example, accessing data in a memory). Furthermore, "determining" may include resolving, selecting, choosing, establishing and the like.
It should be noted that the embodiments of the present disclosure can be implemented by hardware, software, or a combination of software and hardware. The hardware part can be implemented using dedicated logic; the software part can be stored in a memory and executed by an appropriate instruction execution system, such as a microprocessor or specially designed hardware. Those skilled in the art will appreciate that the above devices and methods can be implemented using computer-executable instructions and/or by being included in processor control code, such code being provided, for example, in a programmable memory or on a data carrier such as an optical or electrical signal carrier.
In addition, although the operations of the disclosed methods are described in a particular order in the accompanying drawings, this does not require or imply that these operations must be executed in that particular order, or that all of the illustrated operations must be executed to achieve the desired results. On the contrary, the steps depicted in the flowcharts may change their order of execution. Additionally or alternatively, some steps may be omitted, multiple steps may be combined into one step for execution, and/or one step may be decomposed into multiple steps for execution. It should also be noted that the features and functions of two or more devices according to the present disclosure may be embodied in one device. Conversely, the features and functions of one device described above may be further divided so as to be embodied by multiple devices.
Although the present disclosure has been described with reference to several specific embodiments, it should be understood that the present disclosure is not limited to the specific embodiments disclosed. The present disclosure is intended to cover various modifications and equivalent arrangements included within the spirit and scope of the appended claims.
Claims (18)
1. A method of determining a coordinate system conversion parameter of an imaging device, comprising:
obtaining an initial value of the coordinate system conversion parameter of the imaging device, the coordinate system conversion parameter being used to convert a world coordinate system into a device coordinate system of the imaging device;
obtaining a reflected value map of an imaging region of the imaging device, the reflected value map having an abscissa axis and an ordinate axis coinciding with the world coordinate system, coordinate points of the reflected value map recording reflectance intensities associated with at least one reflection point in the imaging region, the at least one reflection point being formed by objects in the imaging region reflecting detection light and having the same abscissa and ordinate in the world coordinate system; and
updating the initial value of the coordinate system conversion parameter based on the reflected value map, to obtain a target value of the coordinate system conversion parameter.
2. The method according to claim 1, wherein updating the initial value comprises:
projecting, based on the initial value, the reflected value map into a pixel coordinate system of the imaging device, to obtain a first image of the imaging region;
capturing a second image of the imaging region by the imaging device;
determining a difference between first pixel coordinates, in the first image, of a given point in the imaging region and second pixel coordinates of the given point in the second image; and
adjusting the initial value of the coordinate system conversion parameter based on the difference.
3. The method according to claim 2, wherein determining the difference comprises:
extracting a first feature point set from the first image;
extracting, from the second image, a second feature point set corresponding to the first feature point set; and
calculating a sum of pixel distances between corresponding points in the first feature point set and the second feature point set.
4. The method according to claim 3, wherein calculating the sum comprises, for each given feature point in the first feature point set:
determining, in the second feature point set, the corresponding feature point closest to the given feature point; and
calculating the pixel distance between the given feature point and the corresponding feature point.
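The nearest-neighbour summation of claims 3 and 4 can be sketched as follows; a minimal example assuming the feature point sets are given as Nx2 pixel-coordinate arrays (a brute-force scan is used here for clarity, where a k-d tree would be typical at scale).

```python
import numpy as np

def sum_nearest_distances(first_set, second_set):
    """For each point in first_set, find the closest point in
    second_set and accumulate the pixel distances (claims 3-4)."""
    total = 0.0
    for p in first_set:
        dists = np.linalg.norm(second_set - p, axis=1)
        total += dists.min()           # distance to the nearest corresponding point
    return total

a = np.array([[0.0, 0.0], [10.0, 0.0]])
b = np.array([[3.0, 4.0], [10.0, 1.0]])
print(sum_nearest_distances(a, b))  # 5.0 + 1.0 = 6.0
```

This sum serves as the residual of claim 5: once it drops below the threshold, the adjusted parameter value is taken as the target value.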
5. The method according to claim 3, further comprising:
in response to the sum of the pixel distances being less than a threshold, determining the adjusted value of the coordinate system conversion parameter as the target value.
6. The method according to claim 1, wherein obtaining the initial value of the coordinate system conversion parameter comprises:
obtaining a reference coordinate of a reference point in the world coordinate system;
obtaining a positional relationship between an optical center of the imaging device and the reference point; and
determining an initial value of a translation vector in the coordinate system conversion parameter based on the reference coordinate and the positional relationship.
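In essence, the translation initialisation of claim 6 offsets the reference point's world coordinate by the measured displacement to the optical center. The vectors below are purely illustrative values, and the simple vector addition is an assumed reading of the "positional relationship".

```python
import numpy as np

def initial_translation(reference_coord, optical_center_offset):
    """Initial translation vector: the world coordinate of the
    reference point plus the measured offset from the reference
    point to the optical center of the imaging device (claim 6)."""
    return np.asarray(reference_coord) + np.asarray(optical_center_offset)

t0 = initial_translation([100.0, 200.0, 0.0],   # surveyed reference point
                         [0.5, -0.3, 3.0])      # measured offset to optical center
print(t0)  # [100.5 199.7   3. ]
```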
7. The method according to claim 1, wherein obtaining the initial value of the coordinate system conversion parameter comprises:
obtaining angles formed between the imaging device and the due-east, due-north, and sky directions, to obtain an initial value of a rotation matrix in the coordinate system conversion parameter.
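An initial rotation as in claim 7 can be assembled from the three angles against the east-north-up axes. The Z-Y-X (yaw-pitch-roll) composition below is one common convention, chosen here as an assumption for illustration; the patent does not fix a particular convention.

```python
import numpy as np

def rotation_from_angles(yaw, pitch, roll):
    """Build a rotation matrix from angles measured against the
    east (x), north (y) and sky (z) directions, Z-Y-X convention."""
    cz, sz = np.cos(yaw), np.sin(yaw)
    cy, sy = np.cos(pitch), np.sin(pitch)
    cx, sx = np.cos(roll), np.sin(roll)
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    return Rz @ Ry @ Rx

R = rotation_from_angles(np.pi / 2, 0.0, 0.0)   # 90 degrees about the sky axis
print(np.round(R @ np.array([1.0, 0.0, 0.0])))  # the east axis maps to north
```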
8. The method according to claim 1, wherein obtaining the reflected value map comprises:
constructing the reflected value map using a reflection point cloud collected by a lidar in the imaging region.
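The reflected value map of claim 8 can be thought of as a top-down raster of lidar returns, where points sharing an (x, y) cell contribute a combined intensity. The sketch below bins by cell and keeps the mean intensity; the cell size, map extent, and mean aggregation are all assumptions made for illustration.

```python
import numpy as np

def make_reflected_value_map(points, intensities, cell=0.5, size=10.0):
    """Rasterize lidar returns (Nx3 x/y/z points with per-point
    intensities) into a 2-D reflected value map: returns sharing
    the same (x, y) cell are averaged (claim 8)."""
    n = int(size / cell)
    acc = np.zeros((n, n))                 # accumulated intensity per cell
    cnt = np.zeros((n, n))                 # number of returns per cell
    ix = (points[:, 0] / cell).astype(int).clip(0, n - 1)
    iy = (points[:, 1] / cell).astype(int).clip(0, n - 1)
    np.add.at(acc, (iy, ix), intensities)  # unbuffered scatter-add
    np.add.at(cnt, (iy, ix), 1)
    return np.divide(acc, cnt, out=np.zeros_like(acc), where=cnt > 0)

# Two returns with the same abscissa/ordinate cell but different heights.
pts = np.array([[1.0, 1.0, 0.2], [1.1, 1.2, 0.0]])
m = make_reflected_value_map(pts, np.array([10.0, 30.0]))
print(m[2, 2])  # mean intensity of the shared cell: 20.0
```

This matches the claim-1 wording that reflection points with the same abscissa and ordinate in the world coordinate system map to one coordinate point of the reflected value map.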
9. An apparatus for determining a coordinate system conversion parameter of an imaging device, comprising:
an initial value obtaining module configured to obtain an initial value of the coordinate system conversion parameter of the imaging device, the coordinate system conversion parameter being used to convert a world coordinate system into a device coordinate system of the imaging device;
a reflected value map obtaining module configured to obtain a reflected value map of an imaging region of the imaging device, the reflected value map having an abscissa axis and an ordinate axis coincident with the world coordinate system, a coordinate point of the reflected value map recording a reflected intensity associated with at least one reflection point in the imaging region, the at least one reflection point being formed by an object in the imaging region reflecting detection light and having the same abscissa and ordinate in the world coordinate system; and
an initial value updating module configured to update the initial value of the coordinate system conversion parameter based on the reflected value map, to obtain a target value of the coordinate system conversion parameter.
10. The apparatus according to claim 9, wherein the initial value updating module comprises:
a projection module configured to project the reflected value map into a pixel coordinate system of the imaging device based on the initial value, to obtain a first image of the imaging region;
a capturing module configured to capture a second image of the imaging region with the imaging device;
a difference determining module configured to determine a difference between a first pixel coordinate of a given point of the imaging region in the first image and a second pixel coordinate of the given point in the second image; and
an initial value adjusting module configured to adjust the initial value of the coordinate system conversion parameter based on the difference.
11. The apparatus according to claim 10, wherein the difference determining module comprises:
a first extraction module configured to extract a first feature point set from the first image;
a second extraction module configured to extract, from the second image, a second feature point set corresponding to the first feature point set; and
a sum calculating module configured to calculate a sum of pixel distances between corresponding points in the first feature point set and the second feature point set.
12. The apparatus according to claim 11, wherein, for each given feature point in the first feature point set, the sum calculating module comprises:
a corresponding feature point determining module configured to determine, in the second feature point set, the corresponding feature point closest to the given feature point; and
a pixel distance calculating module configured to calculate the pixel distance between the given feature point and the corresponding feature point.
13. The apparatus according to claim 11, further comprising:
a target value determining module configured to, in response to the sum of the pixel distances being less than a threshold, determine the adjusted value of the coordinate system conversion parameter as the target value.
14. The apparatus according to claim 9, wherein the initial value obtaining module comprises:
a reference coordinate obtaining module configured to obtain a reference coordinate of a reference point in the world coordinate system;
a positional relationship obtaining module configured to obtain a positional relationship between an optical center of the imaging device and the reference point; and
a translation vector initial value determining module configured to determine an initial value of a translation vector in the coordinate system conversion parameter based on the reference coordinate and the positional relationship.
15. The apparatus according to claim 9, wherein the initial value obtaining module comprises:
a rotation matrix initial value determining module configured to obtain angles formed between the imaging device and the due-east, due-north, and sky directions, to obtain an initial value of a rotation matrix in the coordinate system conversion parameter.
16. The apparatus according to claim 9, wherein the reflected value map obtaining module comprises:
a reflected value map constructing module configured to construct the reflected value map using a reflection point cloud collected by a lidar in the imaging region.
17. An electronic device, comprising:
one or more processors; and
a storage device for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the method according to any one of claims 1-8.
18. A computer-readable storage medium having a computer program stored thereon, wherein the program, when executed by a processor, implements the method according to any one of claims 1-8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910430855.0A CN110148185B (en) | 2019-05-22 | 2019-05-22 | Method and device for determining coordinate system conversion parameters of imaging equipment and electronic equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110148185A true CN110148185A (en) | 2019-08-20 |
CN110148185B CN110148185B (en) | 2022-04-15 |
Family
ID=67592801
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910430855.0A Active CN110148185B (en) | 2019-05-22 | 2019-05-22 | Method and device for determining coordinate system conversion parameters of imaging equipment and electronic equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110148185B (en) |
Citations (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102982548A (en) * | 2012-12-11 | 2013-03-20 | 清华大学 | Multi-view stereoscopic video acquisition system and camera parameter calibrating method thereof |
CN103727930A (en) * | 2013-12-30 | 2014-04-16 | 浙江大学 | Edge-matching-based relative pose calibration method of laser range finder and camera |
CN103871071A (en) * | 2014-04-08 | 2014-06-18 | 北京经纬恒润科技有限公司 | Method for camera external reference calibration for panoramic parking system |
US20160267661A1 (en) * | 2015-03-10 | 2016-09-15 | Fujitsu Limited | Coordinate-conversion-parameter determination apparatus, coordinate-conversion-parameter determination method, and non-transitory computer readable recording medium having therein program for coordinate-conversion-parameter determination |
CN107025670A (en) * | 2017-03-23 | 2017-08-08 | 华中科技大学 | A kind of telecentricity camera calibration method |
CN107368790A (en) * | 2017-06-27 | 2017-11-21 | 上海汇纳信息科技股份有限公司 | Pedestrian detection method, system, computer-readable recording medium and electronic equipment |
CN107492123A (en) * | 2017-07-07 | 2017-12-19 | 长安大学 | A kind of road monitoring camera self-calibrating method using information of road surface |
CN107564069A (en) * | 2017-09-04 | 2018-01-09 | 北京京东尚科信息技术有限公司 | The determination method, apparatus and computer-readable recording medium of calibrating parameters |
CN108694882A (en) * | 2017-04-11 | 2018-10-23 | 百度在线网络技术(北京)有限公司 | Method, apparatus and equipment for marking map |
CN108732582A (en) * | 2017-04-20 | 2018-11-02 | 百度在线网络技术(北京)有限公司 | Vehicle positioning method and device |
US20180356831A1 (en) * | 2017-06-13 | 2018-12-13 | TuSimple | Sparse image point correspondences generation and correspondences refinement method for ground truth static scene sparse flow generation |
CN109215083A (en) * | 2017-07-06 | 2019-01-15 | 华为技术有限公司 | The method and apparatus of the calibrating external parameters of onboard sensor |
CN109410735A (en) * | 2017-08-15 | 2019-03-01 | 百度在线网络技术(北京)有限公司 | Reflected value map constructing method and device |
CN109523597A (en) * | 2017-09-18 | 2019-03-26 | 百度在线网络技术(北京)有限公司 | The scaling method and device of Camera extrinsic |
CN109754432A (en) * | 2018-12-27 | 2019-05-14 | 深圳市瑞立视多媒体科技有限公司 | A kind of automatic camera calibration method and optics motion capture system |
Non-Patent Citations (4)
Title |
---|
HUANG LIN et al.: "Research on multi-camera calibration and point cloud correction method based on three-dimensional calibration object", Optics and Lasers in Engineering * |
FU Shengpeng et al.: "Automatic calibration method of camera external parameters based on an annular mirror", Robot * |
XIA Pengfei et al.: "Registration of lidar and camera based on maximum mutual information", Chinese Journal of Scientific Instrument * |
CHENG Jinlong et al.: "Calibration method of external parameters of vehicle-mounted lidar", Opto-Electronic Engineering * |
Cited By (28)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110561428A (en) * | 2019-08-23 | 2019-12-13 | 大族激光科技产业集团股份有限公司 | method, device and system for determining pose of robot base coordinate system and readable medium |
CN110561428B (en) * | 2019-08-23 | 2023-01-24 | 大族激光科技产业集团股份有限公司 | Method, device and equipment for determining pose of robot base coordinate system and readable medium |
CN110673115B (en) * | 2019-09-25 | 2021-11-23 | 杭州飞步科技有限公司 | Combined calibration method, device, equipment and medium for radar and integrated navigation system |
CN110673115A (en) * | 2019-09-25 | 2020-01-10 | 杭州飞步科技有限公司 | Combined calibration method, device, equipment and medium for radar and integrated navigation system |
CN110766761B (en) * | 2019-10-21 | 2023-09-26 | 北京百度网讯科技有限公司 | Method, apparatus, device and storage medium for camera calibration |
CN110766761A (en) * | 2019-10-21 | 2020-02-07 | 北京百度网讯科技有限公司 | Method, device, equipment and storage medium for camera calibration |
CN110751693A (en) * | 2019-10-21 | 2020-02-04 | 北京百度网讯科技有限公司 | Method, device, equipment and storage medium for camera calibration |
CN110751693B (en) * | 2019-10-21 | 2023-10-13 | 北京百度网讯科技有限公司 | Method, apparatus, device and storage medium for camera calibration |
CN110728720A (en) * | 2019-10-21 | 2020-01-24 | 北京百度网讯科技有限公司 | Method, device, equipment and storage medium for camera calibration |
CN110728720B (en) * | 2019-10-21 | 2023-10-13 | 阿波罗智能技术(北京)有限公司 | Method, apparatus, device and storage medium for camera calibration |
CN110926453A (en) * | 2019-11-05 | 2020-03-27 | 杭州博信智联科技有限公司 | Obstacle positioning method and system |
CN113311422A (en) * | 2020-02-27 | 2021-08-27 | 富士通株式会社 | Coordinate conversion method and device and data processing equipment |
CN111388092A (en) * | 2020-03-17 | 2020-07-10 | 京东方科技集团股份有限公司 | Positioning tracking piece, registration method, storage medium and electronic equipment |
CN111680685A (en) * | 2020-04-14 | 2020-09-18 | 上海高仙自动化科技发展有限公司 | Image-based positioning method and device, electronic equipment and storage medium |
CN111680685B (en) * | 2020-04-14 | 2023-06-06 | 上海高仙自动化科技发展有限公司 | Positioning method and device based on image, electronic equipment and storage medium |
CN111667545A (en) * | 2020-05-07 | 2020-09-15 | 东软睿驰汽车技术(沈阳)有限公司 | High-precision map generation method and device, electronic equipment and storage medium |
CN111667545B (en) * | 2020-05-07 | 2024-02-27 | 东软睿驰汽车技术(沈阳)有限公司 | High-precision map generation method and device, electronic equipment and storage medium |
CN111553956A (en) * | 2020-05-20 | 2020-08-18 | 北京百度网讯科技有限公司 | Calibration method and device of shooting device, electronic equipment and storage medium |
CN111680596A (en) * | 2020-05-29 | 2020-09-18 | 北京百度网讯科技有限公司 | Positioning truth value verification method, device, equipment and medium based on deep learning |
CN111680596B (en) * | 2020-05-29 | 2023-10-13 | 北京百度网讯科技有限公司 | Positioning true value verification method, device, equipment and medium based on deep learning |
CN113091889A (en) * | 2021-02-20 | 2021-07-09 | 周春伟 | Method and device for measuring road brightness |
WO2022227553A1 (en) * | 2021-04-30 | 2022-11-03 | 成都完美时空网络技术有限公司 | Reflection effect generation method and apparatus, storage medium, and computer device |
CN113284194A (en) * | 2021-06-22 | 2021-08-20 | 智道网联科技(北京)有限公司 | Calibration method, device and equipment for multiple RS (remote sensing) equipment |
CN113722796A (en) * | 2021-08-29 | 2021-11-30 | 中国长江电力股份有限公司 | Poor texture tunnel modeling method based on vision-laser radar coupling |
CN113722796B (en) * | 2021-08-29 | 2023-07-18 | 中国长江电力股份有限公司 | Vision-laser radar coupling-based lean texture tunnel modeling method |
CN114266876A (en) * | 2021-11-30 | 2022-04-01 | 北京百度网讯科技有限公司 | Positioning method, visual map generation method and device |
CN117593385A (en) * | 2023-11-28 | 2024-02-23 | 广州赋安数字科技有限公司 | Method for generating camera calibration data in auxiliary mode through image spots |
CN117593385B (en) * | 2023-11-28 | 2024-04-19 | 广州赋安数字科技有限公司 | Method for generating camera calibration data in auxiliary mode through image spots |
Also Published As
Publication number | Publication date |
---|---|
CN110148185B (en) | 2022-04-15 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110148185A (en) | Determine method, apparatus, electronic equipment and the storage medium of coordinate system conversion parameter | |
CN110146869A (en) | Determine method, apparatus, electronic equipment and the storage medium of coordinate system conversion parameter | |
US11176701B2 (en) | Position estimation system and position estimation method | |
US20190370565A1 (en) | Method and apparatus for extracting lane line and computer readable storage medium | |
JP4232167B1 (en) | Object identification device, object identification method, and object identification program | |
JP2019179021A (en) | Method and apparatus for creating map and positioning moving entity | |
US11395100B2 (en) | Indoor positioning method, indoor positioning system, indoor positioning apparatus and computer readable medium | |
CN110378965A (en) | Determine the method, apparatus, equipment and storage medium of coordinate system conversion parameter | |
CN111028358B (en) | Indoor environment augmented reality display method and device and terminal equipment | |
CN110119698A (en) | For determining the method, apparatus, equipment and storage medium of Obj State | |
CN113989450B (en) | Image processing method, device, electronic equipment and medium | |
US10872246B2 (en) | Vehicle lane detection system | |
WO2019144876A1 (en) | Pickup Service Based on Recognition between Vehicle and Passenger | |
CN108028883A (en) | Image processing apparatus, image processing method and program | |
CN109883433B (en) | Vehicle positioning method in structured environment based on 360-degree panoramic view | |
CN111932627B (en) | Marker drawing method and system | |
CN108801225B (en) | Unmanned aerial vehicle oblique image positioning method, system, medium and equipment | |
Guo et al. | Urban Geospatial Information Acquisition Mobile Mapping System based on close-range photogrammetry and IGS site calibration | |
CN103557834A (en) | Dual-camera-based solid positioning method | |
KR100981588B1 (en) | A system for generating geographical information of city facilities based on vector transformation which uses magnitude and direction information of feature point | |
CN115345944A (en) | Method and device for determining external parameter calibration parameters, computer equipment and storage medium | |
Antigny et al. | Hybrid visual and inertial position and orientation estimation based on known urban 3D models | |
CN116007637B (en) | Positioning device, method, in-vehicle apparatus, vehicle, and computer program product | |
CN115790441B (en) | Municipal part data extraction method and system | |
CN111667531B (en) | Positioning method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||
GR01 | Patent grant ||