CN107492069B - Image fusion method based on multi-lens sensor - Google Patents

Image fusion method based on multi-lens sensor

Info

Publication number
CN107492069B
CN107492069B CN201710531032.8A CN201710531032A
Authority
CN
China
Prior art keywords
image
images
coordinate system
points
xyz
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710531032.8A
Other languages
Chinese (zh)
Other versions
CN107492069A (en)
Inventor
顾天雄
黄晓明
曹炯
江炯
汪从敏
何玉涛
张平
程国开
张建
黎天翔
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
State Grid Corp of China SGCC
Ningbo Power Supply Co of State Grid Zhejiang Electric Power Co Ltd
Original Assignee
State Grid Corp of China SGCC
Ningbo Power Supply Co of State Grid Zhejiang Electric Power Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by State Grid Corp of China SGCC, Ningbo Power Supply Co of State Grid Zhejiang Electric Power Co Ltd filed Critical State Grid Corp of China SGCC
Priority to CN201710531032.8A priority Critical patent/CN107492069B/en
Publication of CN107492069A publication Critical patent/CN107492069A/en
Application granted granted Critical
Publication of CN107492069B publication Critical patent/CN107492069B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038 Image mosaicing, e.g. composing plane images from plane sub-images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20212 Image combination
    • G06T2207/20221 Image fusion; Image merging

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The invention provides an image fusion method based on a multi-lens sensor, which belongs to the field of image synthesis and comprises the following steps: acquiring a plurality of images with the multi-lens sensor; constructing coordinate parameters according to the relative position relationship of the multi-lens sensor, and converting the image points of the target object in the multiple images from plane coordinates to three-dimensional space coordinates according to those parameters to obtain converted images; extracting contour information of the target object according to the classical collinearity equations, the interior orientation elements of the multi-lens sensor and the attribute values of the feature points; and stitching the multiple images according to the contour information to obtain a stitched image. Through this processing the five images can be stitched into one image, and a rigorous spatial mathematical model of the multi-lens camera can be established from the relative spatial relationship of the different regions of the target object obtained from the images. Based on this technique, a user can perform rapid unmanned aerial vehicle line patrol of transmission lines, fusing the images collected by the multiple lenses to build a real geographic environment scene.

Description

Image fusion method based on multi-lens sensor
Technical Field
The invention belongs to the field of image synthesis, and particularly relates to an image fusion method based on a multi-lens sensor.
Background
At present, unmanned aerial vehicle (UAV) systems have the capacity for high-altitude, long-range, fast and automated operation and are widely used for rapid line patrol of power grid lines. However, most existing UAV systems carry only a single ordinary camera, so that only one picture at a single moment can be obtained when shooting along a tower of a power grid line.
In order to acquire image data of a tower at different angles and levels of detail, the UAV must be flown and shot multiple times at different heights. However, shooting the same target multiple times is difficult owing to terrain and weather conditions and the limited endurance of the UAV, which reduces the working efficiency of line patrol operations.
Disclosure of Invention
In order to overcome the defects of the prior art, the invention provides an image fusion method that synthesizes a plurality of images so as to improve the working efficiency of line patrol operations.
In order to achieve the above technical object, the present invention provides an image fusion method based on a multi-lens sensor. The multi-lens sensor includes at least four tilt sensors, whose central shooting axes are fixed obliquely to the shooting plane at the same angle, and a vertical sensor located at the center of the tilt sensors, whose central shooting axis is perpendicular to the shooting plane. The image fusion method includes:
shooting a target object with the multi-lens sensor to acquire a plurality of images equal in number to its lenses;
constructing, according to the relative position relationship of the multi-lens sensor, coordinate parameters for converting image information in the multiple images into spatial information, and converting the image points of the target object in the multiple images from plane coordinates to three-dimensional space coordinates according to those parameters to obtain converted images;
extracting homonymous feature points of the multiple images from the converted images, and extracting contour information of the target object according to the classical collinearity equations, the interior orientation elements of the multi-lens sensor and the attribute values of the feature points;
and stitching the multiple images according to the contour information to obtain a stitched image of the target object.
Optionally, the image fusion method includes:
constructing a reference coordinate system S-XYZ, an image space coordinate system S1-xyz and an image space auxiliary coordinate system S1-XYZ corresponding to the acquired images, wherein a scaling factor λ exists between the image space coordinate system S1-xyz and the image space auxiliary coordinate system S1-XYZ.
Optionally, the constructing, according to the relative position relationship of the multi-lens sensor, of coordinate parameters for converting image information in the multiple images into spatial information includes:
step one, selecting any point A on a target object, wherein the coordinate of the point A in a reference coordinate system S-XYZ is (X, Y, Z);
step two, let the coordinates of point A in the image space coordinate system S1-xyz be a10 = (x1, y1, −f), and let the coordinates of point A in the image space auxiliary coordinate system S1-XYZ be (x10, y10, −f);
step three, processing all pixel points on the target object according to steps one and two, and obtaining a converted image formed by the coordinate values in the image space auxiliary coordinate system S1-XYZ;
wherein the expressions for converting in turn from the reference coordinate system to the image space coordinate system S1-xyz and the image space auxiliary coordinate system S1-XYZ are given by formula (1):

(X − X1, Y − Y1, Z − Z1)ᵀ = λ·(x10, y10, −f)ᵀ = λ·R·(x1, y1, −f)ᵀ    (1)

where (X1, Y1, Z1) is the camera station center, R is the rotation matrix from S1-xyz to S1-XYZ, and λ is the scaling factor.
Optionally, the extracting of homonymous feature points of the multiple images from the converted images and the automatic extraction of contour information of the target object according to the classical collinearity equations, the interior orientation elements of the multi-lens sensor and the attribute values of the feature points include:
scanning the converted image, determining a color-contrast pixel threshold matched to the converted image according to the scanning result, and extracting from the converted image the feature points whose pixel values exceed the color-contrast pixel threshold;
extracting an attribute value of each feature point, determining the region of the feature point on the target object according to the attribute value, and giving weights to different regions according to a preset parameter set;
performing plane scanning segmentation on each region according to the weight, determining the side structure of each region, and completing reconstruction;
and planning a boundary between the target object and the external environment according to the reconstruction result, and extracting the contour information of the target object.
Optionally, the stitching the plurality of images according to the contour information to obtain a stitched image with respect to the target object includes:
selecting two adjacent images among the multiple images, selecting feature information from the two selected images, extracting homonymous image points from the feature information, and combining the accurate interior orientation information of the multi-lens sensor with absolute orientation through whole-regional-network adjustment with control points to obtain high-precision absolute attitude information and orientation elements of the multi-lens sensor;
extracting connection points between the equivalent horizontally stitched images and orientation points within a single flight strip;
acquiring the images within a single flight strip, eliminating gross errors through automatic matching of homonymous image points and free-network bundle adjustment, and constructing a single-strip regional network based on relative orientation connection;
extracting fully automatic connection points in all image overlap areas between flight strips, performing free-network bundle adjustment between the strips and, after gross errors are eliminated, performing relative orientation connection to form a whole-strip regional network;
and obtaining, by measuring the image point coordinates of the control points on the corresponding image pairs, the accurate exterior orientation elements of the horizontally stitched images in absolute orientation after whole-regional-network bundle adjustment and gross error elimination, thereby completing the image stitching.
The technical scheme provided by the invention has the beneficial effects that:
the five images can be spliced into one image through the processing, and a strict spatial mathematical model of the multi-lens camera can be established based on the relative spatial relationship of different regions in the target object obtained from the obtained image. The user can carry out the unmanned aerial vehicle of transmission line and patrol the line fast based on this technique, fuses the image that many camera lenses were gathered to establish real geographical environment scene.
Drawings
In order to illustrate the technical solution of the present invention more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention; those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a schematic flow chart of an image fusion method based on a multi-lens sensor according to the present invention;
FIG. 2 is a schematic structural diagram of the multi-lens sensor provided by the present invention.
Detailed Description
To make the structure and advantages of the present invention clearer, the structure of the present invention will be further described with reference to the accompanying drawings.
Example one
The invention provides an image fusion method based on a multi-lens sensor. The multi-lens sensor comprises at least four tilt sensors and a vertical sensor; the central shooting axes of the tilt sensors are fixed obliquely to the shooting plane at the same angle, and the vertical sensor is located at the center of the tilt sensors with its central shooting axis perpendicular to the shooting plane. As shown in FIG. 1, the image fusion method comprises the following steps:
11. shooting a target object with the multi-lens sensor to acquire a plurality of images equal in number to its lenses;
12. constructing, according to the relative position relationship of the multi-lens sensor, coordinate parameters for converting image information in the multiple images into spatial information, and converting the image points of the target object in the multiple images from plane coordinates to three-dimensional space coordinates according to those parameters to obtain converted images;
13. extracting homonymous feature points of the multiple images from the converted images, and extracting contour information of the target object according to the classical collinearity equations, the interior orientation elements of the multi-lens sensor and the attribute values of the feature points;
14. stitching the multiple images according to the contour information to obtain a stitched image of the target object.
In practice, the multi-lens sensor used in the embodiment of the present invention, as shown in FIG. 2, includes a vertical camera 1 and tilt cameras 2, 3, 4 and 5, the tilt cameras each being tilted inward by the same fixed angle. All five cameras are 20.1-megapixel APS-C-format digital cameras fitted with Voigtländer manual-focus lenses: the vertical camera 1 carries a 25mm F2.5 fixed-focus lens, and the tilt cameras 2, 3, 4 and 5 carry 35mm F2.5 fixed-focus lenses.
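As an illustrative aid (not taken from the patent text), the embodiment's camera rig can be written down as a small configuration sketch in Python; the inward tilt angle is an assumed placeholder value, since the patent fixes the angle but does not state it:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Camera:
    cam_id: int
    focal_mm: float
    tilt_deg: float  # inward tilt of the central shooting axis; 0 = vertical (nadir)

# Hypothetical rig matching the embodiment: one vertical 25mm camera surrounded
# by four 35mm cameras tilted inward by the same angle.
TILT_DEG = 45.0  # assumed value, not given in the patent
RIG = [Camera(1, 25.0, 0.0)] + [Camera(i, 35.0, TILT_DEG) for i in (2, 3, 4, 5)]

for cam in RIG:
    print(cam)
```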
The target object is photographed by all lenses of the multi-lens sensor at the same moment to obtain five images. Through the processing of steps 11 to 14, these five images can be stitched into one image, the relative spatial relationship of the different regions of the target object can be obtained from the images, fusion of the multi-baseline images is realized, the spatial elements of the multi-baseline lenses are accurately calculated, and a rigorous spatial mathematical model of the multi-lens camera is established. Based on this technique, a user can perform rapid unmanned aerial vehicle line patrol of transmission lines, fusing the images collected by the multiple lenses to build a real geographic environment scene.
Optionally, the image fusion method includes:
constructing a reference coordinate system S-XYZ, an image space coordinate system S1-xyz and an image space auxiliary coordinate system S1-XYZ corresponding to the acquired images, wherein a scaling factor λ exists between the image space coordinate system S1-xyz and the image space auxiliary coordinate system S1-XYZ.
In implementation, in order to stitch the images obtained by the multi-lens sensor, the elements in each image must be mapped into space, and the images are then stitched based on the relative position relationship of the elements on the target object to obtain the stitched image. Realizing this process requires several projections of coordinates between different spaces; for convenience of calculation, several coordinate systems are constructed: a reference coordinate system S-XYZ, an image space coordinate system S1-xyz corresponding to the acquired images, and an image space auxiliary coordinate system S1-XYZ, with a scaling factor λ between the image space coordinate system S1-xyz and the image space auxiliary coordinate system S1-XYZ.
Optionally, the constructing, according to the relative position relationship of the multi-lens sensor, of coordinate parameters for converting image information in the multiple images into spatial information includes:
step one, selecting any point A on a target object, wherein the coordinate of the point A in a reference coordinate system S-XYZ is (X, Y, Z);
step two, let the coordinates of point A in the image space coordinate system S1-xyz be a10 = (x1, y1, −f), and let the coordinates of point A in the image space auxiliary coordinate system S1-XYZ be (x10, y10, −f);
step three, processing all pixel points on the target object according to steps one and two, and obtaining a converted image formed by the coordinate values in the image space auxiliary coordinate system S1-XYZ;
wherein the expressions for converting in turn from the reference coordinate system to the image space coordinate system S1-xyz and the image space auxiliary coordinate system S1-XYZ are given by formula (1):

(X − X1, Y − Y1, Z − Z1)ᵀ = λ·(x10, y10, −f)ᵀ = λ·R·(x1, y1, −f)ᵀ    (1)

where (X1, Y1, Z1) is the camera station center, R is the rotation matrix from S1-xyz to S1-XYZ, and λ is the scaling factor.
In the implementation, based on a plurality of coordinate systems constructed in the previous step, the conversion of elements in the image is performed, and the specific steps are as follows:
Exterior orientation elements are recorded for each sensor (three linear elements X, Y, Z and three angular elements φ, ω, κ). The exterior orientation elements determine the spatial position and attitude parameters of the sensor at the moment the photo is acquired. The coordinates of the centers of the four tilt camera stations are (Xi, Yi, Zi), and their coordinates relative to the center of the horizontally stitched image are (X′i, Y′i, Z′i), with X′i = Xi, Y′i = Yi and Z′i = Zi. The initial exterior orientation elements of the equivalent horizontal mosaic image are (X, Y, Z, φ, ω, κ) = (0, 0, 0, 0, 0, 0), where i ranges over the natural numbers.
Taking the first image as an example, the camera station center is S1(X1, Y1, Z1), the reference coordinate system is S-XYZ, the image space coordinate system is S1-xyz, and the image space auxiliary coordinate system is S1-XYZ; the rotation matrix from S1-xyz to S1-XYZ is R. The image point of an arbitrary point A(X, Y, Z) on the oblique image is a10, and its image point on the equivalent horizontal image is x10.
Then the coordinates of a10 in the image space coordinate system are (x1, y1, −f), the coordinates of a10 in the image space auxiliary coordinate system are (x10, y10, −f), and λ is the scaling factor from the image space auxiliary coordinate system S1-XYZ to the reference coordinate system S-XYZ:

(X − X1, Y − Y1, Z − Z1)ᵀ = λ·(x10, y10, −f)ᵀ = λ·R·(x1, y1, −f)ᵀ
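To make this coordinate chain concrete, the following minimal sketch (not part of the patent) builds a rotation matrix R from the three angular elements and rotates an oblique-image point into the auxiliary system S1-XYZ; the phi-omega-kappa axis order, the units and the example value of λ are assumptions for illustration only:

```python
import numpy as np

def rotation_matrix(phi: float, omega: float, kappa: float) -> np.ndarray:
    """Rotation from the image space system S1-xyz to the auxiliary system
    S1-XYZ, composed from the three angular exterior orientation elements.
    The phi-omega-kappa axis order below is one common photogrammetric
    convention, assumed here rather than taken from the patent."""
    c, s = np.cos, np.sin
    r_phi = np.array([[ c(phi), 0, s(phi)],
                      [ 0,      1, 0     ],
                      [-s(phi), 0, c(phi)]])         # rotation about Y
    r_omega = np.array([[1, 0,        0        ],
                        [0, c(omega), -s(omega)],
                        [0, s(omega),  c(omega)]])   # rotation about X
    r_kappa = np.array([[c(kappa), -s(kappa), 0],
                        [s(kappa),  c(kappa), 0],
                        [0,         0,        1]])   # rotation about Z
    return r_phi @ r_omega @ r_kappa

f = 35.0                                 # principal distance, assumed in mm
a10 = np.array([2.0, 3.0, -f])           # image point (x1, y1, -f) on the oblique image
R = rotation_matrix(np.deg2rad(45.0), 0.0, 0.0)

lam = 10000.0 / f                        # example scale factor lambda (assumed)
offset = lam * (R @ a10)                 # (X - X1, Y - Y1, Z - Z1) per the formula above
print(offset)
```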
After the coordinate mapping process, the precision of the obtained result needs to be adjusted, as follows:
The homonymous points are used for relative orientation through regional-network bundle adjustment, after which high-precision attitude determination and orientation elements are obtained through absolute orientation by whole-regional-network adjustment with control points.
The multi-view image joint adjustment must fully consider the geometric deformation and occlusion relationships among the images.
Combining the exterior orientation elements of the multi-view images provided by the ground control point system, a coarse-to-fine pyramid matching strategy is adopted to perform automatic homonymous point matching and free-network bundle adjustment on each level of images, yielding reliable homonymous point matching results.
An error equation of the multi-image self-calibrating regional network adjustment is established simultaneously from the connection points, connection lines, control point coordinates and GPS/IMU auxiliary data, and the precision of the adjustment result is ensured through joint calculation.
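A minimal sketch of such a coarse-to-fine pass is given below; ORB features and a RANSAC homography check are assumed stand-ins for the patent's matcher and adjustment, used only to illustrate pyramid matching with gross-error elimination:

```python
import cv2
import numpy as np

def pyramid_tie_points(img_a, img_b, levels=3):
    """Coarse-to-fine sketch of automatic homonymous (tie) point matching.
    A production pipeline would propagate each coarse result to constrain
    the finer search; here each level is matched independently."""
    pyr_a, pyr_b = [img_a], [img_b]
    for _ in range(levels - 1):                      # build image pyramids
        pyr_a.append(cv2.pyrDown(pyr_a[-1]))
        pyr_b.append(cv2.pyrDown(pyr_b[-1]))

    orb = cv2.ORB_create(2000)
    bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    inliers = None
    for a, b in zip(reversed(pyr_a), reversed(pyr_b)):   # coarsest level first
        ka, da = orb.detectAndCompute(a, None)
        kb, db = orb.detectAndCompute(b, None)
        if da is None or db is None:
            continue
        matches = bf.match(da, db)
        if len(matches) < 8:
            continue
        src = np.float32([ka[m.queryIdx].pt for m in matches])
        dst = np.float32([kb[m.trainIdx].pt for m in matches])
        # RANSAC plays the role of gross-error elimination on the tie points
        _, mask = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
        if mask is not None:
            keep = mask.ravel() == 1
            inliers = (src[keep], dst[keep])
    return inliers   # tie points surviving at the finest level
```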
Optionally, the extracting of homonymous feature points of the multiple images from the converted images and the automatic extraction of contour information of the target object according to the classical collinearity equations, the interior orientation elements of the multi-lens sensor and the attribute values of the feature points include:
scanning the converted image, determining a color-contrast pixel threshold matched to the converted image according to the scanning result, and extracting from the converted image the feature points whose pixel values exceed the color-contrast pixel threshold;
extracting an attribute value of each feature point, determining the region of the feature point on the target object according to the attribute value, and giving weights to different regions according to a preset parameter set;
performing plane scanning segmentation on each region according to the weight, determining the side structure of each region, and completing reconstruction;
and planning a boundary between the target object and the external environment according to the reconstruction result, and extracting the contour information of the target object.
In implementation, the matching of the multi-view images is realized by utilizing the computer vision technology.
Two-dimensional features at different viewing angles are determined on the two-dimensional vector dataset image of a building by searching the multi-view images for features such as building edges, wall edges and textures, and these two-dimensional features are then converted into three-dimensional features. When determining a wall surface, several influence factors are set and given certain weights, and the wall surfaces are divided into different classes; each wall surface of the building is then plane-scanned and segmented to obtain the side structure of the building, the sides are reconstructed, and the height and roof contour of the building are extracted.
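The contrast-threshold step of this extraction can be sketched as follows; the gradient-magnitude measure, the percentile threshold and the region weights are assumptions, since the patent does not specify how the "color contrast pixel threshold" or the parameter set are chosen:

```python
import cv2
import numpy as np

def extract_candidate_points(img_bgr, percentile=95.0):
    """Scan the converted image, derive a contrast threshold from the scan,
    and keep pixels above it as candidate feature points. Gradient magnitude
    stands in for the unspecified 'color contrast' measure."""
    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32)
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    contrast = cv2.magnitude(gx, gy)
    thresh = np.percentile(contrast, percentile)     # threshold matched to this image
    ys, xs = np.nonzero(contrast > thresh)
    return np.stack([xs, ys], axis=1)                # (N, 2) candidate points

# Assumed per-region weights for the plane-scanning step; the names and values
# are illustrative, the patent only says they come from a preset parameter set.
REGION_WEIGHTS = {"wall_edge": 1.0, "wall_texture": 0.8, "roof": 0.6}
```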
Optionally, the stitching the plurality of images according to the contour information to obtain a stitched image with respect to the target object includes:
selecting two adjacent images among the multiple images, selecting feature information from the two selected images, extracting homonymous image points from the feature information, and combining the accurate interior orientation information of the multi-lens sensor with absolute orientation through whole-regional-network adjustment with control points to obtain high-precision absolute attitude information and orientation elements of the multi-lens sensor;
extracting connection points between the equivalent horizontally stitched images and orientation points within a single flight strip;
acquiring the images within a single flight strip, eliminating gross errors through automatic matching of homonymous image points and free-network bundle adjustment, and constructing a single-strip regional network based on relative orientation connection;
extracting fully automatic connection points in all image overlap areas between flight strips, performing free-network bundle adjustment between the strips and, after gross errors are eliminated, performing relative orientation connection to form a whole-strip regional network;
and obtaining, by measuring the image point coordinates of the control points on the corresponding image pairs, the accurate exterior orientation elements of the horizontally stitched images in absolute orientation after whole-regional-network bundle adjustment and gross error elimination, thereby completing the image stitching.
In implementation, the color imbalance among the multi-baseline images is mainly caused by the shooting angle, the camera parameter settings, and the cloud cover and shadows present when the images are acquired.
Generation of the stitched image of the combined cameras comprises mosaic-line generation, dodging (light-balancing) processing and image stitching, as follows:
a) semi-automatically extracting connection points between the equivalent horizontally stitched images and orientation points within a single flight strip;
b) performing free-network bundle adjustment within the single flight strip, eliminating gross errors, and connecting the initial connection points into a network by relative orientation;
c) extracting fully automatic connection points between the flight strips;
d) after free-network bundle adjustment between the flight strips and elimination of gross errors, connecting the strips by relative orientation to form a regional network;
e) obtaining, by measuring the image point coordinates of the control points on the corresponding image pairs, the accurate exterior orientation elements of the horizontally stitched images in absolute orientation after whole-regional-network bundle adjustment and gross error elimination.
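As a rough illustration of the final stage, the toy sketch below warps one image into its neighbor's frame, applies a simple gain "dodging" correction over the overlap and feather-blends across it; the homography-based warp and the blending scheme are assumptions standing in for the adjusted orientations and true mosaic-line generation:

```python
import cv2
import numpy as np

def mosaic_pair(base, neighbor, H):
    """Warp `neighbor` into the frame of `base` with homography H, equalize
    mean brightness over the overlap (a crude dodging step), and blend with
    a feathered alpha mask approximating a soft mosaic line."""
    h, w = base.shape[:2]
    warp = cv2.warpPerspective(neighbor, H, (w, h))
    overlap = cv2.cvtColor(warp, cv2.COLOR_BGR2GRAY) > 0
    if overlap.any():
        gain = base[overlap].mean() / max(warp[overlap].mean(), 1e-6)
        warp = np.clip(warp.astype(np.float32) * gain, 0, 255).astype(np.uint8)
    alpha = cv2.GaussianBlur(overlap.astype(np.float32), (0, 0), 15)[..., None]
    return (base * (1 - alpha) + warp * alpha).astype(np.uint8)
```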
The invention provides an image fusion method based on a multi-lens sensor, comprising the following steps: acquiring a plurality of images with the multi-lens sensor; constructing coordinate parameters according to the relative position relationship of the multi-lens sensor, and converting the image points of the target object in the multiple images from plane coordinates to three-dimensional space coordinates according to those parameters to obtain converted images; extracting contour information of the target object according to the classical collinearity equations, the interior orientation elements of the multi-lens sensor and the attribute values of the feature points; and stitching the multiple images according to the contour information to obtain a stitched image. Through this processing the five images can be stitched into one image, and a rigorous spatial mathematical model of the multi-lens camera can be established from the relative spatial relationship of the different regions of the target object obtained from the images. Based on this technique, a user can perform rapid unmanned aerial vehicle line patrol of transmission lines, fusing the images collected by the multiple lenses to build a real geographic environment scene.
In this embodiment, a medium-sized fixed-wing UAV carries a dedicated self-stabilizing gimbal of all-carbon-fiber construction; the flight uses 70% side overlap and 80% forward overlap, a flight height of 370 m and an image resolution of 5 cm. During image acquisition, the cameras are controlled automatically so that all 5 cameras trigger at the same instant, and the acquired data are read and output simultaneously.
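As a rough consistency check (not stated in the patent), the ground sampling distance implied by these parameters can be estimated; the sensor width and pixel count below are assumed values typical of a 20.1-megapixel APS-C camera:

```python
# Ground sampling distance (GSD) ~ flight_height * pixel_pitch / focal_length
sensor_width_m = 23.5e-3   # assumed APS-C sensor width
pixels_across = 5472       # assumed pixel count (5472 x 3648 ~ 20.0 MP)
pixel_pitch_m = sensor_width_m / pixels_across   # ~4.3 micrometres

flight_height_m = 370.0
for name, focal_m in [("vertical 25mm", 25e-3), ("tilt 35mm", 35e-3)]:
    gsd_cm = flight_height_m * pixel_pitch_m / focal_m * 100
    print(f"{name}: GSD ~ {gsd_cm:.1f} cm")  # ~6.4 cm and ~4.5 cm, near the stated 5 cm
```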
The multiple sensors carried on the UAV acquire images from five different angles (one vertical and four oblique), realizing the fusion of multi-baseline images, accurately calculating the spatial elements of the multi-baseline lenses, and establishing a rigorous spatial mathematical model of the multi-lens camera. Based on this technique, a user can perform rapid unmanned aerial vehicle line patrol of transmission lines, fusing the images collected by the multiple lenses to build a real geographic environment scene.
The sequence numbers in the above embodiments are merely for description and do not represent any order of assembly or use of the components.
The above description is only exemplary of the present invention and should not be taken as limiting the invention, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (5)

1. An image fusion method based on multiple sensors, characterized in that the image fusion method comprises the following steps:
shooting a target object based on multiple sensors to obtain multiple images with the same number as the multiple sensors;
constructing, according to the relative position relationship of the multiple sensors, coordinate parameters for converting image information in the multiple images into spatial information, converting the image points of the target object in the multiple images from plane coordinates to three-dimensional space coordinates according to those parameters to obtain converted images, and adjusting the precision of the obtained result;
extracting homonymous feature points of the multiple images from the converted images, and extracting contour information of the target object according to the classical collinearity equations, the interior orientation elements of the multiple sensors and the attribute values of the feature points;
stitching the multiple images according to the contour information to obtain stitched images of the target object;
wherein adjusting the precision of the obtained result includes:
using homonymous points for relative orientation through regional network adjustment, and then obtaining high-precision attitude determination and orientation elements through absolute orientation by whole-regional-network adjustment with control points;
the multi-view image joint adjustment needs to fully consider the geometric deformation and the occlusion relationships among the images;
combining the exterior orientation elements of the multi-view images provided by a ground control point system, and adopting a coarse-to-fine pyramid matching strategy to perform automatic homonymous point matching and free-network bundle adjustment on each level of images to obtain reliable homonymous point matching results;
and simultaneously establishing an error equation of the multi-image self-calibrating regional network adjustment from the connection points, connection lines, control point coordinates and GPS/IMU auxiliary data, and ensuring the precision of the adjustment result through joint calculation.
2. The multi-sensor based image fusion method of claim 1, comprising:
constructing a reference coordinate system S-XYZ, an image space coordinate system S1-xyz and an image space auxiliary coordinate system S1-XYZ corresponding to the acquired images, wherein a scaling factor λ exists between the image space coordinate system S1-xyz and the image space auxiliary coordinate system S1-XYZ.
3. The multi-sensor-based image fusion method according to claim 2, wherein constructing, according to the relative position relationship of the multiple sensors, coordinate parameters for converting image information in the multiple images into spatial information comprises:
step one, selecting any point A on a target object, wherein the coordinate of the point A in a reference coordinate system S-XYZ is (X, Y, Z);
step two, let the coordinates of point A in the image space coordinate system S1-xyz be a10 = (x1, y1, −f1), and let the coordinates of point A in the image space auxiliary coordinate system S1-XYZ be (x10, y10, −f10);
Step three, processing all pixel points on the target object according to the steps from the step one to the step two, and acquiring a converted image formed by coordinate values in an image space auxiliary coordinate system S1-XYZ after the processing;
wherein the expressions for converting in turn from the reference coordinate system to the image space coordinate system S1-xyz and the image space auxiliary coordinate system S1-XYZ are given by formula (1):

(X − XS1, Y − YS1, Z − ZS1)ᵀ = λ1·R·(x1, y1, −f1)ᵀ = λ2·(x10, y10, −f10)ᵀ    (1)

where (XS1, YS1, ZS1) is the camera station center and R is the rotation matrix from S1-xyz to S1-XYZ;
wherein λ1 is the scaling factor for the transformation from the reference coordinate system to the image space coordinate system S1-xyz, and λ2 is the scaling factor for the transformation from the reference coordinate system to the image space auxiliary coordinate system S1-XYZ.
4. The multi-sensor-based image fusion method according to claim 1, wherein the extracting of homonymous feature points of the multiple images from the converted images and the automatic extraction of contour information of the target object according to the classical collinearity equations, the interior orientation elements of the multiple sensors and the attribute values of the feature points comprises:
scanning the converted image, determining a color-contrast pixel threshold matched to the converted image according to the scanning result, and extracting from the converted image the feature points whose pixel values exceed the color-contrast pixel threshold;
extracting an attribute value of each feature point, determining the region of the feature point on the target object according to the attribute value, and giving weights to different regions according to a preset parameter set;
performing plane scanning segmentation on each region according to the weight, determining a side structure of each region, and completing reconstruction;
and planning a boundary between the target object and the external environment according to the reconstruction result, and extracting the contour information of the target object.
5. The multi-sensor-based image fusion method according to claim 1, wherein the stitching of the multiple images according to the contour information to obtain a stitched image of the target object comprises:
selecting two adjacent images among the multiple images, selecting feature information from the two selected images, extracting homonymous image points from the feature information, and combining the accurate interior orientation information of the multiple sensors with absolute orientation through whole-regional-network adjustment with control points to obtain high-precision absolute attitude information and orientation elements of the multiple sensors;
extracting connection points between the equivalent horizontally stitched images and orientation points within a single flight strip;
acquiring the images within a single flight strip, eliminating gross errors through automatic matching of homonymous image points and free-network bundle adjustment, and constructing a single-strip regional network based on relative orientation connection;
extracting fully automatic connection points in all image overlap areas between flight strips, performing free-network bundle adjustment between the strips and, after gross errors are eliminated, performing relative orientation connection to form a whole-strip regional network;
and obtaining, by measuring the image point coordinates of the control points on the corresponding image pairs, the accurate exterior orientation elements of the horizontally stitched images in absolute orientation after whole-regional-network bundle adjustment and gross error elimination, thereby completing the image stitching.
CN201710531032.8A 2017-07-01 2017-07-01 Image fusion method based on multi-lens sensor Active CN107492069B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710531032.8A CN107492069B (en) 2017-07-01 2017-07-01 Image fusion method based on multi-lens sensor

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710531032.8A CN107492069B (en) 2017-07-01 2017-07-01 Image fusion method based on multi-lens sensor

Publications (2)

Publication Number Publication Date
CN107492069A CN107492069A (en) 2017-12-19
CN107492069B true CN107492069B (en) 2021-01-26

Family

ID=60644183

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710531032.8A Active CN107492069B (en) 2017-07-01 2017-07-01 Image fusion method based on multi-lens sensor

Country Status (1)

Country Link
CN (1) CN107492069B (en)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
AT520781A2 (en) * 2017-12-22 2019-07-15 Avl List Gmbh Behavior model of an environmental sensor
CN108761271A (en) * 2018-03-30 2018-11-06 广州中科云图智能科技有限公司 A kind of power grid screen of trees detection method and system
CN108731686B (en) * 2018-05-30 2019-06-14 淮阴工学院 A kind of Navigation of Pilotless Aircraft control method and system based on big data analysis
CN109671109B (en) * 2018-12-25 2021-05-07 中国人民解放军61540部队 Dense point cloud generation method and system
CN109827548A (en) * 2019-02-28 2019-05-31 华南机械制造有限公司 The processing method of aerial survey of unmanned aerial vehicle data
CN110595442A (en) * 2019-08-13 2019-12-20 中国南方电网有限责任公司超高压输电公司昆明局 Transmission line channel tree obstacle detection method, storage medium and computer equipment
CN111105488B (en) * 2019-12-20 2023-09-08 成都纵横自动化技术股份有限公司 Imaging simulation method, imaging simulation device, electronic equipment and storage medium
CN111707668B (en) * 2020-05-28 2023-11-17 武汉光谷卓越科技股份有限公司 Tunnel detection and image processing method based on sequence images
CN112597574B (en) * 2020-12-21 2024-04-16 福建汇川物联网技术科技股份有限公司 Construction method and device of building information model
CN113112407B (en) * 2021-06-11 2021-09-03 上海英立视电子有限公司 Method, system, device and medium for generating field of view of television-based mirror
CN113781373B (en) * 2021-08-26 2024-08-23 云从科技集团股份有限公司 Image fusion method, device and computer storage medium
CN114264660A (en) * 2021-12-03 2022-04-01 国网黑龙江省电力有限公司电力科学研究院 Transmission line tower surface defect detection method and device based on green laser imaging

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101859433A (en) * 2009-04-10 2010-10-13 日电(中国)有限公司 Image mosaic device and method
CN104766339A (en) * 2015-04-29 2015-07-08 上海电气集团股份有限公司 Cloud cluster automatic detection method of ground-based sky image
CN106504286A (en) * 2016-08-20 2017-03-15 航天恒星科技有限公司 Satellite image localization method and device

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5657073A (en) * 1995-06-01 1997-08-12 Panoramic Viewing Systems, Inc. Seamless multi-camera panoramic imaging with distortion correction and selectable field of view
CN101509784B (en) * 2009-03-20 2012-03-28 西安煤航信息产业有限公司 GPS//INS data direct directing precision assessment method
CN102778224B (en) * 2012-08-08 2014-07-02 北京大学 Method for aerophotogrammetric bundle adjustment based on parameterization of polar coordinates
CN104729532B (en) * 2015-03-02 2018-05-01 山东科技大学 A kind of tight scaling method of panorama camera
CN106643669B (en) * 2016-11-22 2018-10-19 北京空间机电研究所 A kind of more camera lens multi-detector aerial camera single centre projection transform methods

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101859433A (en) * 2009-04-10 2010-10-13 日电(中国)有限公司 Image mosaic device and method
CN104766339A (en) * 2015-04-29 2015-07-08 上海电气集团股份有限公司 Cloud cluster automatic detection method of ground-based sky image
CN106504286A (en) * 2016-08-20 2017-03-15 航天恒星科技有限公司 Satellite image localization method and device

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Research on image fusion algorithms in panorama stitching; Huang Liqin et al.; Journal of Electronics & Information Technology; 2014-06-30; full text *
Semi-automatic building extraction method based on least-squares matching with object-space geometric constraints; Zhang Zuxun et al.; Journal of Wuhan University; 2001-08-31; full text *
Research on matching methods for aerial oblique multi-view images; Li Yingjie; China Master's Theses Electronic Journal; 2015-03-15 (No. 3); sections 2-4 *

Also Published As

Publication number Publication date
CN107492069A (en) 2017-12-19

Similar Documents

Publication Publication Date Title
CN107492069B (en) Image fusion method based on multi-lens sensor
CN110211043B (en) Registration method based on grid optimization for panoramic image stitching
CN109993696B (en) Multi-viewpoint image-based correction and splicing method for structural object surface panoramic image
CN109903227B (en) Panoramic image splicing method based on camera geometric position relation
US5259037A (en) Automated video imagery database generation using photogrammetry
JP4970296B2 (en) Orthophoto image generation method and photographing apparatus
CN105243637B (en) One kind carrying out full-view image joining method based on three-dimensional laser point cloud
CN104732577B (en) A kind of building texture blending method based on UAV low-altitude aerial surveying systems
CN108168521A (en) One kind realizes landscape three-dimensional visualization method based on unmanned plane
CN107514993A (en) The collecting method and system towards single building modeling based on unmanned plane
CN106204443A (en) A kind of panorama UAS based on the multiplexing of many mesh
KR101150510B1 (en) Method for Generating 3-D High Resolution NDVI Urban Model
CN109900274B (en) Image matching method and system
CN116740288B (en) Three-dimensional reconstruction method integrating laser radar and oblique photography
CN104732557A (en) Color point cloud generating method of ground laser scanner
CN117274499B (en) Unmanned aerial vehicle oblique photography-based steel structure processing and mounting method
CN110322541A (en) A method of selecting optimal metope texture from five inclined cameras
CN101545775A (en) Method for calculating orientation elements of photo and the height of building by utilizing digital map
CN108269234A (en) A kind of lens of panoramic camera Attitude estimation method and panorama camera
CN116129064A (en) Electronic map generation method, device, equipment and storage medium
Nasrullah Systematic analysis of unmanned aerial vehicle (UAV) derived product quality
TWI655409B (en) Route planning method for aerial photography using multi-axis unmanned aerial vehicles
CN107941241B (en) Resolution board for aerial photogrammetry quality evaluation and use method thereof
CN113362265B (en) Low-cost rapid geographical splicing method for orthographic images of unmanned aerial vehicle
Reich et al. Filling the Holes: potential of UAV-based photogrammetric façade modelling

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant