CN107133925B - Spectrum image correction method for automatically extracting control points - Google Patents
- Publication number
- CN107133925B CN107133925B CN201710235471.4A CN201710235471A CN107133925B CN 107133925 B CN107133925 B CN 107133925B CN 201710235471 A CN201710235471 A CN 201710235471A CN 107133925 B CN107133925 B CN 107133925B
- Authority
- CN
- China
- Prior art keywords
- image
- edge line
- actual
- alpha
- tan
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/80—Geometric correction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/60—Analysis of geometric attributes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30181—Earth observation
- G06T2207/30188—Vegetation; Agriculture
Landscapes
- Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Geometry (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Image Processing (AREA)
- Studio Devices (AREA)
Abstract
The invention relates to a spectral image correction method that automatically extracts control points, comprising the following operation steps: acquiring spectral images at different object distances, both with and without a shooting angle; establishing the relationship between actual imaging size and object distance at different object distances; calculating the distances from the lens to the image center point and to the midpoints of the upper and lower edge lines; calculating the actual heights from the midpoints of the upper and lower edge lines to the image center point; calculating the actual imaging lengths of the upper edge line, lower edge line and horizontal central axis of the image; converting the calculated actual heights and lengths into numbers of pixels; and acquiring control point pairs and correcting the image. With this technical scheme, when no actual control point coordinates exist, 9 groups of control point coordinates in the image are computed from the shooting object distance and the lens shooting angle, and the image is finally corrected with a polynomial, so that a distorted image can be recovered quickly and conveniently.
Description
Technical Field
The invention relates to the geometric processing of spectral images, and in particular to a spectral image correction method that automatically extracts control points.
Background
An imaging spectrometer is an instrument that integrates imaging and spectroscopy: a single scan yields both the image and the spectral information of an object. It is widely applied to research on crop growth monitoring, fruit detection, forest disaster monitoring and the like. Although many scholars have achieved great results with imaging spectrometers, research on extracting tree measurement factors (tree height, crown width, diameter at breast height) and on the spectral feature analysis of single standing trees has not yet been reported.
When shooting an actual single standing tree with an imaging spectrometer, the shooting object distance and the lens shooting angle must often be adjusted so that the whole tree fits in the frame. The obtained image may therefore have geometric distortion, which strongly affects the extraction of tree measurement factors; if the image scanned by the imaging spectrometer is not corrected, large errors are introduced into the actual results.
Many researchers have studied methods for correcting image distortion, chiefly polynomial models, rational function models, registration against a reference image, collinearity equation methods, and projection transformation methods. All of these require definite ground control points or reference images to complete the correction, but images shot with an imaging spectrometer have neither. How to rapidly acquire image control points from the shooting object distance and the lens shooting angle, so as to correct the image distortion, is the problem addressed by this research. The invention therefore proposes a spectral image correction method that automatically acquires control points based on two factors: the shooting object distance and the lens shooting angle.
Disclosure of Invention
The invention aims to provide a spectral image correction method that automatically extracts control points, solving the problem of correcting trapezoidal (keystone) distortion caused by a shooting elevation angle when control points cannot be acquired.
In order to achieve the purpose, the invention adopts the following technical scheme:
a spectral image correction method for automatically extracting control points, characterized by comprising the following operation steps:
s1: acquiring spectral images at different object distances, both with and without a shooting angle;
s2: establishing the relationship between actual imaging size and object distance at different object distances;
s3: calculating the distances from the lens to the image center point and to the midpoints of the upper and lower edge lines;
s4: calculating the actual heights from the midpoints of the upper and lower edge lines of the image to the image center point;
s5: calculating the actual imaging lengths of the upper edge line, lower edge line and horizontal central axis of the image;
s6: converting the actual heights and lengths calculated in S4 and S5 into numbers of pixels;
s7: acquiring control point pairs and correcting the image.
A further scheme is as follows:
the specific operation of step S2 is: establishing, from the shooting object distance, a linear relation between object distance and actual imaging size:
fun(d_i, dfactlen_i): dfactlen_i = d_i * 0.5164 − 0.7303, R² = 1
where fun() is the linear relationship between object distance and actual imaging size, d_i is the shooting object distance, and dfactlen_i is the actual length of the horizontal line passing through the image center.
The specific operation of step S3 is:
s31: selecting an image with a lens shooting angle and good quality;
s32: calculating the distances from the lens to the center point of the image, the middle points of the upper edge line and the lower edge line according to the shooting object distance and the shooting angle of the lens, wherein the formulas are as follows:
QO' = QO/cosα = d/cosα
QB' = QO/cos(α+13°) = d/cos(α+13°)
When α > 13°: QA' = QO/cos(α−13°) = d/cos(α−13°)
When α < 13°: QA' = QO/cos(13°−α) = d/cos(13°−α)
In the formulas, QO' is the distance from the lens to the image center point, QB' is the distance from the lens to the midpoint of the upper edge line, QA' is the distance from the lens to the midpoint of the lower edge line, d is the shooting object distance, and α is the lens shooting angle.
In step S4, the actual heights from the midpoints of the upper and lower edge lines of the image to the image center point are calculated as:
B'O' = B'O − OO' = QO*tan(α+13°) − QO*tanα = d*tan(α+13°) − d*tanα
When α > 13°: A'O' = OO' − A'O = QO*tanα − QO*tan(α−13°) = d*tanα − d*tan(α−13°)
When α < 13°: A'O' = OO' + A'O = QO*tanα + QO*tan(13°−α) = d*tanα + d*tan(13°−α)
In the formulas, B'O' is the actual height from the midpoint of the upper edge line to the image center point, A'O' is the actual height from the midpoint of the lower edge line to the image center point, d is the shooting object distance, and α is the lens shooting angle.
Step S5 specifically includes:
Using the relationship fun(d_i, dfactlen_i) between object distance and actual imaging size established in step S2, the actual lengths of the upper edge line, lower edge line and horizontal central axis of the image are calculated respectively as:
dMidfactX = QO'*0.5164 − 0.7303 = d/cosα*0.5164 − 0.7303
dUpfactX = QB'*0.5164 − 0.7303 = d/cos(α+13°)*0.5164 − 0.7303
When α > 13°: dDownfactX = QA'*0.5164 − 0.7303 = d/cos(α−13°)*0.5164 − 0.7303
When α < 13°: dDownfactX = QA'*0.5164 − 0.7303 = d/cos(13°−α)*0.5164 − 0.7303
In the formulas, dMidfactX is the actual length of the horizontal central axis, dUpfactX is the actual length of the upper edge line, dDownfactX is the actual length of the lower edge line, d is the shooting object distance, and α is the lens shooting angle.
In step S6, with the actual imaging length of the horizontal central axis and its corresponding number of image pixels as reference, the heights from the midpoints of the upper and lower edge lines to the image center point and the actual imaging lengths of the upper and lower edge lines calculated in steps S4 and S5 are converted into numbers of pixels:
dUpPixelY = B'O'*d_img/dMidfactX + d_img/2 = [d*tan(α+13°) − d*tanα]*d_img/dMidfactX + d_img/2
dDownPixelY = A'O'*d_img/dMidfactX − d_img/2 = [d*tanα − d*tan(α−13°)]*d_img/dMidfactX − d_img/2 (α > 13°), or dDownPixelY = [d*tanα + d*tan(13°−α)]*d_img/dMidfactX − d_img/2 (α < 13°)
dUpPixelX = dUpfactX*d_img/dMidfactX
dDownPixelX = dDownfactX*d_img/dMidfactX
In the formulas, d_img is the number of pixels along one side of the (square) image, dUpPixelY is the number of pixels corresponding to the height from the midpoint of the upper edge line to the image center point, dDownPixelY is the number of pixels corresponding to the height from the midpoint of the lower edge line to the image center point, dUpPixelX is the number of pixels corresponding to the actual imaging length of the upper edge line, and dDownPixelX is the number of pixels corresponding to the actual imaging length of the lower edge line.
The specific operation of step S7 is: automatically acquiring, from the pixel numbers obtained in step S6, the 9 groups of correction pixel coordinates corresponding to the four corner points of the image, the midpoints of the four edge lines, and the image center point; and then correcting the distorted image with a polynomial model using the acquired control points.
With this technical scheme, when no actual control point coordinates exist, 9 groups of control point coordinates in the image are computed from the shooting object distance and the lens shooting angle, and the image is finally corrected with a polynomial, so that a distorted image can be recovered quickly and conveniently.
Drawings
FIG. 1 is a flowchart illustrating an image correction method according to the present invention;
FIG. 2 is an imaging diagram showing the presence of a lens shooting angle (α ≧ 13 ° or α <13 °) in the embodiment;
FIG. 3 is a schematic diagram of the distribution of control points extracted in the example;
fig. 4 shows a distorted image (left image) and a corrected image (right image) taken in the example.
Detailed Description
In order that the objects and advantages of the invention will be more clearly understood, the following description is given in conjunction with the accompanying examples. It is to be understood that the following text is merely illustrative of one or more specific embodiments of the invention and does not strictly limit the scope of the invention as specifically claimed.
The invention provides a spectral image correction method that automatically acquires control points based on two factors, the shooting object distance and the lens shooting angle, aiming to solve the problem of correcting keystone distortion caused by a shooting elevation angle when control points cannot be acquired. An embodiment is shown in FIG. 1 and proceeds as follows:
step S1: acquiring spectral images at different object distances, both with and without a shooting angle;
making grid target paper with 1 mm × 1 mm cells, printing it on A1 paper, and pasting it flat on a wall;
installing the imaging spectrometer so that it is level with the ground;
mounting a total-station rangefinder on the imaging spectrometer to measure the shooting object distance d and the lens shooting angle α;
keeping the lens of the imaging spectrometer parallel to the ground, and scanning the calibration target paper at different object distances to obtain images shot at different object distances;
scanning the grid target paper horizontally at different object distances d_i to obtain spectral images t1, t2, t3, …, tn;
scanning at different object distances d_i and different lens shooting angles α_i to obtain spectral images t11, t22, t33, …, tnn;
Step S2: the relationship between the actual imaging size and the object distance under different object distances;
loading each image t1, t2, t3, …, tn in ArcGIS and locating its center point (the imaging area of the imaging spectrometer used in the invention is square), i.e. O(0.5*d_img, 0.5*d_img);
drawing a horizontal line l through the center point O, and measuring the actual length dfactlen_i of the horizontal line l_i of each spectral image;
pairing the dfactlen_i and d_i of the different spectral images in Excel and performing a linear trend fit to obtain the functional relation between shooting distance and actual imaging size:
fun(d_i, dfactlen_i): dfactlen_i = d_i * 0.5164 − 0.7303, R² = 1;
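Read as code, the fitted relation is a one-line helper (a sketch; the function name is my own, the coefficients are the ones reported above):

```python
def actual_length(d):
    """Actual length of the horizontal line through the image centre for
    shooting object distance d, from the linear fit reported above:
    dfactlen_i = d_i * 0.5164 - 0.7303 (R^2 = 1 on the fitted data)."""
    return d * 0.5164 - 0.7303

# Example: at an object distance of 10 (same units as the fitted data),
# the actual length is about 4.4337.
length_at_10 = actual_length(10.0)
```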
step S3: calculating the distances from the lens to the image center point and to the midpoints of the upper and lower edge lines;
in this embodiment, an image t_nn of good quality taken with a lens shooting angle is selected; FIG. 2 is a schematic view of imaging in the presence of a shooting angle.
When no shooting angle exists, AB is an imaging area, wherein the angle AQO is equal to the angle BQO is equal to 13 degrees, the shot object distance QO is equal to d, and QO, QA and QB are distances from the lens to an image center point, a lower edge and an upper edge center point respectively;
when a shooting angle exists, the lens shooting angle ═ OQO ' ═ α, angle a ' QO ' ═ B ' QO ' ═ 13 °, and the actual imaging region is a ' B ', so:
the distance QO' from the lens to the image center point, the distance QB' from the lens to the midpoint of the upper edge line, and the distance QA' from the lens to the midpoint of the lower edge line are calculated as:
QO' = QO/cosα = d/cosα;
QB' = QO/cos(α+13°) = d/cos(α+13°);
When α > 13°: QA' = QO/cos(α−13°) = d/cos(α−13°);
When α < 13°: QA' = QO/cos(13°−α) = d/cos(13°−α);
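The three distance formulas can be sketched as follows; since cosine is an even function, the single expression d/cos(α − 13°) covers both the α > 13° and α < 13° cases for QA' (a sketch; names are mine):

```python
import math

HALF_FOV_DEG = 13.0  # half field of view: angle A'QO' = angle B'QO' = 13 degrees

def slant_distances(d, alpha_deg):
    """Return (QO', QB', QA'): distances from the lens Q to the image centre
    point, the upper-edge midpoint and the lower-edge midpoint, for shooting
    object distance d and lens shooting angle alpha (degrees)."""
    a = math.radians(alpha_deg)
    h = math.radians(HALF_FOV_DEG)
    qo = d / math.cos(a)       # QO' = d / cos(alpha)
    qb = d / math.cos(a + h)   # QB' = d / cos(alpha + 13 deg)
    qa = d / math.cos(a - h)   # QA'; equals d / cos(13 deg - alpha) when alpha < 13 deg
    return qo, qb, qa
```

For example, d = 10 m and α = 20° give QO' ≈ 10.64 m, QB' ≈ 11.92 m, QA' ≈ 10.08 m.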
step S4: calculating the actual heights from the midpoints of the upper and lower edge lines of the image to the image center point;
(1) the actual height from the midpoint of the upper edge line of the image to the image center point is:
B'O' = B'O − OO' = QO*tan(α+13°) − QO*tanα = d*tan(α+13°) − d*tanα;
(2) the actual height from the midpoint of the lower edge line of the image to the image center point is:
When α > 13°: A'O' = OO' − A'O = QO*tanα − QO*tan(α−13°) = d*tanα − d*tan(α−13°);
When α < 13°: A'O' = OO' + A'O = QO*tanα + QO*tan(13°−α) = d*tanα + d*tan(13°−α);
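Because the tangent is an odd function, d·tanα − d·tan(α − 13°) automatically equals d·tanα + d·tan(13° − α) when α < 13°, so both cases collapse into one expression. A sketch (names are mine):

```python
import math

def edge_heights(d, alpha_deg, half_fov_deg=13.0):
    """Return (B'O', A'O'): actual heights from the midpoints of the upper
    and lower edge lines to the image centre point O'."""
    a = math.radians(alpha_deg)
    h = math.radians(half_fov_deg)
    b_o = d * (math.tan(a + h) - math.tan(a))  # B'O' = d*tan(a+13deg) - d*tan(a)
    a_o = d * (math.tan(a) - math.tan(a - h))  # A'O'; tan is odd, so this also
                                               # covers the alpha < 13 deg case
    return b_o, a_o
```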
step S5: calculating the actual imaging lengths of an upper edge line, a lower edge line and a horizontal central axis of the image;
using the relationship fun(d_i, dfactlen_i) between object distance and actual imaging size established in step S2, the actual lengths of the upper edge line, lower edge line and horizontal central axis of the image are calculated respectively; then:
(1) actual length of horizontal central axis of image:
dMidfactX=QO'*0.5164–0.7303=d/cosα*0.5164–0.7303;
(2) actual length of border line on image:
dUpfactX=QB'*0.5164–0.7303=d/cos(α+13°)*0.5164–0.7303;
(3) actual length of the lower edge of the image:
When α > 13°: dDownfactX = QA'*0.5164 − 0.7303 = d/cos(α−13°)*0.5164 − 0.7303;
When α < 13°: dDownfactX = QA'*0.5164 − 0.7303 = d/cos(13°−α)*0.5164 − 0.7303;
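Plugging the slant distances into the fitted size relation gives the three actual lengths in one short function (a sketch; the 0.5164 and −0.7303 coefficients are the step-S2 fit, names are mine):

```python
import math

def edge_lengths(d, alpha_deg, half_fov_deg=13.0):
    """Return (dMidfactX, dUpfactX, dDownfactX): actual lengths of the
    horizontal central axis, the upper edge line and the lower edge line."""
    a = math.radians(alpha_deg)
    h = math.radians(half_fov_deg)
    fit = lambda dist: dist * 0.5164 - 0.7303  # fun(d_i, dfactlen_i) from step S2
    mid = fit(d / math.cos(a))       # from QO'
    up = fit(d / math.cos(a + h))    # from QB'
    down = fit(d / math.cos(a - h))  # from QA'; cos is even, covers alpha < 13 deg
    return mid, up, down
```

The upper edge is imaged farther from the lens than the central axis, so dUpfactX exceeds dMidfactX; this is the geometric origin of the keystone shape.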
step S6: converting the actual height and length calculated in the steps S4 and S5 into the number of pixels;
the horizontal straight line passing through the optical center of the lens is not distorted, so, with the actual imaging length of the horizontal central axis and its corresponding number of image pixels as reference, the heights from the midpoints of the upper and lower edge lines to the image center point and the actual imaging lengths of the upper and lower edge lines calculated in steps S4 and S5 are converted into numbers of pixels;
(1) the height from the midpoint of the upper and lower lines of the image to the central point of the image is converted into the number of pixels, which is respectively:
dUpPixelY = B'O'*d_img/dMidfactX + d_img/2 = [d*tan(α+13°) − d*tanα]*d_img/dMidfactX + d_img/2;
dDownPixelY = A'O'*d_img/dMidfactX − d_img/2 = [d*tanα − d*tan(α−13°)]*d_img/dMidfactX − d_img/2 (α > 13°), or dDownPixelY = [d*tanα + d*tan(13°−α)]*d_img/dMidfactX − d_img/2 (α < 13°);
(2) The actual imaging length of the upper and lower edge lines of the image is converted into the number of pixels, which are respectively:
dUpPixelX = dUpfactX*d_img/dMidfactX;
dDownPixelX = dDownfactX*d_img/dMidfactX;
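The pixel conversion uses the undistorted horizontal central axis as the scale reference. A sketch combining the previous steps (d_img is the pixel size of the square image; signs follow the formulas above; names are mine):

```python
import math

def to_pixels(d, alpha_deg, d_img, half_fov_deg=13.0):
    """Return (dUpPixelY, dDownPixelY, dUpPixelX, dDownPixelX): the heights
    and edge lengths of steps S4/S5 converted into numbers of pixels."""
    a = math.radians(alpha_deg)
    h = math.radians(half_fov_deg)
    fit = lambda dist: dist * 0.5164 - 0.7303
    d_mid = fit(d / math.cos(a))               # dMidfactX, the reference length
    scale = d_img / d_mid                      # pixels per unit of actual length
    b_o = d * (math.tan(a + h) - math.tan(a))  # B'O'
    a_o = d * (math.tan(a) - math.tan(a - h))  # A'O'
    up_y = b_o * scale + d_img / 2             # dUpPixelY
    down_y = a_o * scale - d_img / 2           # dDownPixelY
    up_x = fit(d / math.cos(a + h)) * scale    # dUpPixelX
    down_x = fit(d / math.cos(a - h)) * scale  # dDownPixelX
    return up_y, down_y, up_x, down_x
```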
step S7: acquiring control point pairs and correcting images;
given the keystone distortion caused by the shooting angle (FIG. 4, left), the specific operation of step S7 is: automatically acquiring, from the pixel numbers obtained in step S6, the 9 groups of correction pixel coordinates corresponding to the four corner points of the image, the midpoints of the four edge lines, and the image center point; then correcting the distorted image with a polynomial model using the acquired control points. The 9 groups of control points obtained are listed in Table 1;
TABLE 1 control point pairs
In table 1, pxmax and pxmin are the maximum and minimum pixel numbers of the original image in the x direction, respectively; pymax and pymin are the maximum and minimum pixel numbers of the original image in the y direction, respectively.
The image is corrected with a polynomial model using the 9 acquired correction pixel coordinates; the corrected image is shown in the right part of FIG. 4.
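As a sketch of how the 9 control-point pairs might be assembled from the converted pixel numbers: Table 1 itself is not reproduced in this text, so the pairing below is an assumed layout (distorted trapezoid corners, edge-line midpoints and centre mapped to the corresponding points of the undistorted d_img × d_img square). A real implementation would then feed these pairs to a polynomial warping routine.

```python
def control_point_pairs(up_x, down_x, up_y, down_y, d_img):
    """Assemble 9 (source, target) control-point pairs.
    Assumed layout: sources are the corners, edge-line midpoints and centre
    of the distorted trapezoid (built from the step-S6 pixel numbers);
    targets are the corresponding points of the d_img x d_img square."""
    c = d_img / 2
    src = [
        (c - up_x / 2, up_y), (c, up_y), (c + up_x / 2, up_y),           # upper edge
        (0.0, c), (c, c), (float(d_img), c),                             # central axis (undistorted)
        (c - down_x / 2, down_y), (c, down_y), (c + down_x / 2, down_y), # lower edge
    ]
    dst = [
        (0.0, float(d_img)), (c, float(d_img)), (float(d_img), float(d_img)),
        (0.0, c), (c, c), (float(d_img), c),
        (0.0, 0.0), (c, 0.0), (float(d_img), 0.0),
    ]
    return list(zip(src, dst))
```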
When no actual control point coordinates exist, the invention computes 9 groups of control point coordinates in the image from the shooting object distance and the lens shooting angle, and finally corrects the image with a polynomial, so that a distorted image is recovered quickly and conveniently, producing a remarkable beneficial effect.
Devices, mechanisms, components, and operating methods not specifically described herein are conventional, and those of ordinary skill in the art may select and implement them according to common general knowledge to achieve the same functions as the present invention.
The foregoing is only a preferred embodiment of the present invention, and it should be noted that those skilled in the art, after learning the present disclosure, can make several equivalent changes and substitutions without departing from the principle of the present invention, and these equivalent changes and substitutions should also be considered as belonging to the protection scope of the present invention.
Claims (7)
1. A spectral image correction method for automatically extracting control points, characterized by comprising the following operation steps:
s1: acquiring spectral images at different object distances, both with and without a shooting angle;
s2: establishing the relationship between actual imaging size and object distance at different object distances;
s3: calculating the distances from the lens to the image center point and to the midpoints of the upper and lower edge lines;
s4: calculating the actual heights from the midpoints of the upper and lower edge lines of the image to the image center point;
s5: calculating the actual imaging lengths of the upper edge line, lower edge line and horizontal central axis of the image;
s6: converting the actual heights and lengths calculated in S4 and S5 into numbers of pixels;
s7: acquiring control point pairs and correcting the image.
2. The spectral image correction method of claim 1, wherein the specific operation of step S2 is: establishing, from the shooting object distance, a linear relation between object distance and actual imaging size:
fun(d_i, dfactlen_i): dfactlen_i = d_i * 0.5164 − 0.7303
where fun() is the linear relationship between object distance and actual imaging size, d_i is the object distance, and dfactlen_i is the actual length of the horizontal line passing through the image center.
3. The method for correcting spectral image of claim 1, wherein the step S3 comprises the following steps:
s31: selecting an image with a lens shooting angle and good quality;
s32: calculating the distances from the lens to the center point of the image, the middle points of the upper edge line and the lower edge line according to the shooting object distance and the shooting angle of the lens, wherein the formulas are as follows:
QO' = QO/cosα = d/cosα
QB' = QO/cos(α+13°) = d/cos(α+13°)
When α > 13°: QA' = QO/cos(α−13°) = d/cos(α−13°)
When α < 13°: QA' = QO/cos(13°−α) = d/cos(13°−α)
In the formulas, QO' is the distance from the lens to the image center point, QB' is the distance from the lens to the midpoint of the upper edge line, QA' is the distance from the lens to the midpoint of the lower edge line, d is the shooting object distance, and α is the lens shooting angle.
4. The spectral image correction method of claim 1, wherein in step S4 the actual heights from the midpoints of the upper and lower edge lines of the image to the image center point are calculated as:
B'O' = B'O − OO' = QO*tan(α+13°) − QO*tanα = d*tan(α+13°) − d*tanα
When α > 13°: A'O' = OO' − A'O = QO*tanα − QO*tan(α−13°) = d*tanα − d*tan(α−13°)
When α < 13°: A'O' = OO' + A'O = QO*tanα + QO*tan(13°−α) = d*tanα + d*tan(13°−α)
In the formulas, B'O' is the actual height from the midpoint of the upper edge line to the image center point, A'O' is the actual height from the midpoint of the lower edge line to the image center point, d is the shooting object distance, and α is the lens shooting angle.
5. The spectral image correction method of claim 1, wherein step S5 specifically comprises:
using the relationship fun(d_i, dfactlen_i) between object distance and actual imaging size established in step S2, the actual lengths of the upper edge line, lower edge line and horizontal central axis of the image are calculated respectively as:
dMidfactX = QO'*0.5164 − 0.7303 = d/cosα*0.5164 − 0.7303
dUpfactX = QB'*0.5164 − 0.7303 = d/cos(α+13°)*0.5164 − 0.7303
When α > 13°: dDownfactX = QA'*0.5164 − 0.7303 = d/cos(α−13°)*0.5164 − 0.7303
When α < 13°: dDownfactX = QA'*0.5164 − 0.7303 = d/cos(13°−α)*0.5164 − 0.7303
In the formulas, dMidfactX is the actual length of the horizontal central axis, dUpfactX is the actual length of the upper edge line, dDownfactX is the actual length of the lower edge line, d is the shooting object distance, and α is the lens shooting angle.
6. The spectral image correction method of claim 1, wherein in step S6, with the actual imaging length of the horizontal central axis and its corresponding number of image pixels as reference, the heights from the midpoints of the upper and lower edge lines to the image center point and the actual imaging lengths of the upper and lower edge lines calculated in steps S4 and S5 are converted into numbers of pixels:
dUpPixelY = B'O'*d_img/dMidfactX + d_img/2 = [d*tan(α+13°) − d*tanα]*d_img/dMidfactX + d_img/2
dDownPixelY = A'O'*d_img/dMidfactX − d_img/2 = [d*tanα − d*tan(α−13°)]*d_img/dMidfactX − d_img/2 (α > 13°), or dDownPixelY = [d*tanα + d*tan(13°−α)]*d_img/dMidfactX − d_img/2 (α < 13°)
dUpPixelX = dUpfactX*d_img/dMidfactX
dDownPixelX = dDownfactX*d_img/dMidfactX
In the formulas, dUpPixelY is the number of pixels corresponding to the height from the midpoint of the upper edge line to the image center point, dDownPixelY is the number of pixels corresponding to the height from the midpoint of the lower edge line to the image center point, dUpPixelX is the number of pixels corresponding to the actual imaging length of the upper edge line, and dDownPixelX is the number of pixels corresponding to the actual imaging length of the lower edge line.
7. The spectral image correction method of claim 1, wherein the specific operation of step S7 is: automatically acquiring, from the pixel numbers obtained in step S6, the 9 groups of correction pixel coordinates corresponding to the four corner points of the image, the midpoints of the four edge lines, and the image center point; and then correcting the distorted image with a polynomial model using the acquired control points.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710235471.4A CN107133925B (en) | 2017-04-12 | 2017-04-12 | Spectrum image correction method for automatically extracting control points |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710235471.4A CN107133925B (en) | 2017-04-12 | 2017-04-12 | Spectrum image correction method for automatically extracting control points |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107133925A CN107133925A (en) | 2017-09-05 |
CN107133925B true CN107133925B (en) | 2020-09-08 |
Family
ID=59715616
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710235471.4A Expired - Fee Related CN107133925B (en) | 2017-04-12 | 2017-04-12 | Spectrum image correction method for automatically extracting control points |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107133925B (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110544206A (en) * | 2019-08-29 | 2019-12-06 | 济南神博信息技术有限公司 | Image splicing system and image splicing method |
CN110533624A (en) * | 2019-09-11 | 2019-12-03 | 神博(山东)安防科技有限公司 | A kind of laser ranging auxiliary splicing system and image split-joint method |
CN115334245A (en) * | 2019-12-06 | 2022-11-11 | 达闼机器人股份有限公司 | Image correction method and device, electronic equipment and storage medium |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103900532A (en) * | 2012-12-27 | 2014-07-02 | 财团法人工业技术研究院 | Depth image capturing device, and calibration method and measurement method thereof |
TWI557393B (en) * | 2015-10-08 | 2016-11-11 | 微星科技股份有限公司 | Calibration method of laser ranging and device utilizing the method |
- 2017-04-12: application CN201710235471.4A filed; granted as CN107133925B (status: not active, Expired - Fee Related)
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103900532A (en) * | 2012-12-27 | 2014-07-02 | 财团法人工业技术研究院 | Depth image capturing device, and calibration method and measurement method thereof |
TWI557393B (en) * | 2015-10-08 | 2016-11-11 | 微星科技股份有限公司 | Calibration method of laser ranging and device utilizing the method |
Non-Patent Citations (2)
Title |
---|
Two-step projection method for correcting keystone distortion in image acquisition; Wen Xiaojun et al.; Journal of Huaihua University; 2007-08-31; Vol. 26, No. 8; pp. 40-44 *
POS-based geometric correction of airborne hyperspectral imagery; Liu Jun et al.; Proceedings of the 15th National Symposium on Remote Sensing Technology; 2005-08-23; pp. 61-62 *
Also Published As
Publication number | Publication date |
---|---|
CN107133925A (en) | 2017-09-05 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109146794B (en) | A kind of light field image rotation error bearing calibration | |
CN109598762A (en) | A kind of high-precision binocular camera scaling method | |
CN108288294A (en) | A kind of outer ginseng scaling method of a 3D phases group of planes | |
CN107133925B (en) | Spectrum image correction method for automatically extracting control points | |
CN107255521B (en) | A kind of Infrared Image Non-uniformity Correction method and system | |
CN105716542B (en) | A kind of three-dimensional data joining method based on flexible characteristic point | |
CN105957041B (en) | A kind of wide-angle lens infrared image distortion correction method | |
CN107886547B (en) | Fisheye camera calibration method and system | |
CN106600648A (en) | Stereo coding target for calibrating internal parameter and distortion coefficient of camera and calibration method thereof | |
CN110033407B (en) | Shield tunnel surface image calibration method, splicing method and splicing system | |
CN107462182B (en) | A kind of cross section profile deformation detecting method based on machine vision and red line laser | |
CN108010086A (en) | Camera marking method, device and medium based on tennis court markings intersection point | |
CN110099267A (en) | Trapezoidal correcting system, method and projector | |
CN106023193B (en) | A kind of array camera observation procedure detected for body structure surface in turbid media | |
CN104268853A (en) | Infrared image and visible image registering method | |
CN113610060B (en) | Structure crack sub-pixel detection method | |
CN108257187B (en) | Camera-projector system calibration method | |
CN111383194A (en) | Camera distortion image correction method based on polar coordinates | |
CN106778510B (en) | Method for matching high-rise building characteristic points in ultrahigh-resolution remote sensing image | |
CN104318583A (en) | Visible light broadband spectrum image registration method | |
CN111105466A (en) | Calibration method of camera in CT system | |
CN107576286B (en) | A kind of spatial position of target global optimization and posture solution seek method | |
CN113012234A (en) | High-precision camera calibration method based on plane transformation | |
CN116740187A (en) | Multi-camera combined calibration method without overlapping view fields | |
CN110070582A (en) | Take the photograph mould group parameter self-calibration system and calibration method and its electronic equipment more |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee | ||
Granted publication date: 20200908 |