CN112419402A - Positioning method and system based on multispectral image and laser point cloud - Google Patents
- Publication number
- CN112419402A (application CN202011359489.3A / CN202011359489A)
- Authority
- CN
- China
- Prior art keywords
- image
- laser point
- point cloud
- measured object
- neural network
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01B—MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
- G01B11/00—Measuring arrangements characterised by the use of optical techniques
- G01B11/002—Measuring arrangements characterised by the use of optical techniques for measuring two or more coordinates
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01B—MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
- G01B11/00—Measuring arrangements characterised by the use of optical techniques
- G01B11/24—Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10028—Range image; Depth image; 3D point clouds
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10032—Satellite or aerial image; Remote sensing
- G06T2207/10036—Multispectral image; Hyperspectral image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
Abstract
The invention relates to the technical field of image positioning, and in particular to a positioning method based on multispectral images and laser point clouds, comprising the following steps: acquiring an image and position and posture data of a measured object in the power corridor by a multispectral image and laser point cloud method; constructing a convolutional neural network from the image and the position and posture data; and training the neural network to generate a positioning model of the measured object and positioning the measured object. Because the method collects the image and position information with multispectral image and laser point cloud technology, the acquired appearance and position information of the measured object are more accurate, which facilitates subsequent positioning calculation of the object. Locating the center position of the measured object with the convolutional neural network and the positioning model greatly improves positioning accuracy; as an automated measuring tool, the method requires little manpower and effectively saves labor.
Description
Technical Field
The invention relates to the technical field of image positioning, in particular to a positioning method and a system based on multispectral images and laser point clouds.
Background
At the current stage, object measurement in the power corridor depends only on manual inspection, which consumes a large amount of manpower and yields insufficiently accurate results. Chinese patent CN105043339A discloses a method and system for measuring the distance between a power transmission line and an interfering object.
Disclosure of Invention
In order to solve the problems in the prior art, the invention provides a positioning method and system based on multispectral images and laser point clouds that offer a high degree of automation, reduce the waste of manpower and material resources, and achieve high positioning accuracy.
In order to solve the technical problems, the invention provides the following technical scheme:
a positioning method based on multispectral images and laser point clouds comprises the following steps:
acquiring an image and position posture data of a measured object in the electric power gallery by using a multispectral image and laser point cloud method;
constructing a convolutional neural network by using the image and the position posture data;
and training the neural network to generate a positioning model of the measured object, and positioning the measured object.
Further, obtaining the image and the position and posture data of the measured object in the power corridor comprises the following steps:
measuring the laser point coordinates of the measured object by a laser point cloud method and converting the laser point coordinates into space rectangular coordinates; recording the hue, saturation and brightness of the measured object with a multispectral image; and fusing the multispectral data into an image of the measured object according to the space rectangular coordinates.
Further, the convolutional neural network comprises a convolutional layer, a linear rectification layer, a pooling layer and a fully connected layer. The image of the measured object is input into the neural network; the convolutional layer extracts the three-dimensional coordinate data of the measured image; the linear rectification layer applies an activation function to the three-dimensional coordinate data; the pooling layer performs region segmentation on the coordinate data and takes the average value of each region; global calculation is then performed by the fully connected layer; and image information is continuously input to train the neural network model. The neural network can be expressed as:

γ_a = Σ_{u,v,l} W_{u,l,a} · D_{u,v,l}

wherein: γ_a is the neural network output value, W_{u,l,a} are the filter elements, and D_{u,v,l} are the image elements of the measured object recorded by the multispectral image.
Further, the activation function is defined as:
z = { 0, k ≤ 0; k, k > 0 }

wherein: z is the calculated three-dimensional coordinate data and k is the three-dimensional coordinate of the measured image.
Further, a three-dimensional coordinate system is established with the laser point cloud device as the origin of coordinates, and (x_i, y_i, z_i) is set as the center coordinate of the measured object; the positioning model of the measured object can be expressed as follows:
wherein: g and j are the bottom diagonal and the side diagonal of the measured object, respectively.
A positioning system based on multispectral images and laser point clouds comprises an image data acquisition module, an image data processing module, an image data transmission module and a three-dimensional data calculation module;
the image data processing module and the image data acquisition module and the image data processing module and the three-dimensional data calculation module are connected through the image data transmission module.
Further, the image data acquisition module comprises a multispectral image unit and a laser point cloud unit.
Further, the image data processing module comprises a convolution neural network unit and a coordinate transformation unit.
Further, the image data transmission module comprises a data storage unit and a transmission unit.
Furthermore, the system also comprises a remote monitoring module, and the remote monitoring module is connected with the three-dimensional data calculation module.
Further, the image data acquisition module is used for acquiring the image and position information of the measured object in the power corridor. The image data processing module is connected with the image data acquisition module and is used for constructing a convolutional neural network model from the acquired image and position information of the measured object and training the neural network model. The image data transmission module is used for information transmission among the modules. The three-dimensional data calculation module receives the data information of the image data processing module and performs positioning calculation on the measured object. The remote monitoring module is connected with the three-dimensional data calculation module and is used for displaying the image information and the positioning result of the measured object.
Further, the multispectral image unit is used for recording image data of the measured object, including hue, saturation and brightness; the laser point cloud unit is used for collecting the position and the outline of a measured object to generate a point cloud image.
Further, the convolutional neural network unit establishes a convolutional neural network model by using the acquired image and position data to perform data training; the coordinate conversion unit is used for converting the output of the neural network into three-dimensional coordinate data, so that subsequent positioning calculation of the measured object is facilitated.
Further, the data storage unit is used for storing information; the transmission unit is used for information transmission between the modules and the units.
Compared with the prior art, the invention has the beneficial effects that:
the multispectral image and laser point cloud technology is used for collecting the image and the position information, so that the acquired appearance and the position information of the measured object are more accurate, and the subsequent positioning calculation of the object is facilitated; the central position of the measured object is positioned by utilizing the convolutional neural network and the positioning model, so that the positioning accuracy is greatly improved, and the method is an automatic measuring tool, has low manpower requirement and effectively saves labor force.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without inventive exercise. Wherein:
fig. 1 is a schematic flow chart of a positioning method based on multispectral images and laser point clouds according to the present invention.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, specific embodiments accompanied with figures are described in detail below, and it is apparent that the described embodiments are a part of the embodiments of the present invention, not all of the embodiments. All other embodiments, which can be obtained by a person skilled in the art without making creative efforts based on the embodiments of the present invention, shall fall within the protection scope of the present invention.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention, but the present invention may be practiced in other ways than those specifically described and will be readily apparent to those of ordinary skill in the art without departing from the spirit of the present invention, and therefore the present invention is not limited to the specific embodiments disclosed below.
Furthermore, reference herein to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one implementation of the invention. The appearances of the phrase "in one embodiment" in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments.
The present invention will be described in detail with reference to the drawings, wherein the cross-sectional views illustrating the structure of the device are not enlarged partially in general scale for convenience of illustration, and the drawings are only exemplary and should not be construed as limiting the scope of the present invention. In addition, the three-dimensional dimensions of length, width and depth should be included in the actual fabrication.
Meanwhile, in the description of the present invention, it should be noted that the terms "upper, lower, inner and outer" and the like indicate orientations or positional relationships based on the orientations or positional relationships shown in the drawings, and are only for convenience of describing the present invention and simplifying the description, but do not indicate or imply that the referred device or element must have a specific orientation, be constructed in a specific orientation and operate, and thus, cannot be construed as limiting the present invention. Furthermore, the terms first, second, or third are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
The terms "mounted, connected and connected" in the present invention are to be understood broadly, unless otherwise explicitly specified or limited, for example: can be fixedly connected, detachably connected or integrally connected; they may be mechanically, electrically, or directly connected, or indirectly connected through intervening media, or may be interconnected between two elements. The specific meanings of the above terms in the present invention can be understood in specific cases to those skilled in the art.
The embodiment of the invention comprises the following steps:
example 1:
as shown in fig. 1, the present invention provides a positioning method based on multispectral image and laser point cloud, comprising the following steps:
s1, acquiring images and position posture data of the object to be measured in the electric power gallery by using a multispectral image and laser point cloud method;
in this embodiment, acquiring the image and the position and posture data of the object to be measured in the power corridor includes the following steps: the method comprises the steps of measuring the laser point coordinates of a measured object by using a laser point cloud method, converting the laser point coordinates into space rectangular coordinates, recording the hue, saturation and brightness of the measured object by using a multispectral image, and fusing the multispectral image into an image of the measured object according to the space rectangular coordinates.
S2, constructing a convolutional neural network from the image and the position and posture data;
In this embodiment, the convolutional neural network includes a convolutional layer, a linear rectification layer, a pooling layer and a fully connected layer. The image of the measured object is input into the neural network; the convolutional layer extracts the three-dimensional coordinate data of the measured image; the linear rectification layer applies an activation function to the three-dimensional coordinate data; the pooling layer performs region segmentation on the coordinate data and takes the average value of each region; global calculation is then performed by the fully connected layer; and image information is continuously input to train the neural network model. The neural network can be expressed as:

γ_a = Σ_{u,v,l} W_{u,l,a} · D_{u,v,l}

wherein: γ_a is the neural network output value, W_{u,l,a} are the filter elements, and D_{u,v,l} are the image elements of the measured object recorded by the multispectral image;
in this embodiment, the activation function is defined as:
z = { 0, k ≤ 0; k, k > 0 }

wherein: z is the calculated three-dimensional coordinate data and k is the three-dimensional coordinate of the measured image.
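The piecewise activation above is the standard rectified linear unit; a direct transcription (a sketch, with the variable name taken from the text) is:

```python
def relu(k):
    """Activation function from the text: z = 0 for k <= 0, z = k for k > 0."""
    return k if k > 0 else 0.0
```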
S3: training the neural network to generate a measured object positioning model, and positioning the measured object;
In this embodiment, the three-dimensional coordinate system is established with the laser point cloud device as the origin of coordinates, and (x_i, y_i, z_i) is set as the center coordinate of the measured object; the positioning model of the measured object is expressed as follows:
wherein: g and j are the bottom diagonal and the side diagonal of the measured object, respectively.
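The positioning-model equation itself is not reproduced in this text, so the following is only one hypothetical reading for illustration: the center coordinate (x_i, y_i, z_i) is estimated from the endpoints of the bottom diagonal g and the side diagonal j by averaging their midpoints. The function names and the averaging rule are assumptions, not the patent's formula:

```python
def midpoint(p, q):
    """Midpoint of a diagonal given its two endpoint coordinates."""
    return tuple((a + b) / 2 for a, b in zip(p, q))

def center_from_diagonals(bottom_a, bottom_b, side_a, side_b):
    """Hypothetical model: take the center (x_i, y_i, z_i) as the mean of
    the midpoints of the bottom diagonal g and the side diagonal j."""
    m_g = midpoint(bottom_a, bottom_b)  # midpoint of bottom diagonal g
    m_j = midpoint(side_a, side_b)      # midpoint of side diagonal j
    return tuple((a + b) / 2 for a, b in zip(m_g, m_j))
```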
To better verify and explain the technical effects of the method of the present invention, a section of power corridor is selected for testing in this embodiment, and the real effects of the method are verified by way of scientific demonstration.
A power corridor 100 meters long is selected and a positioning test is carried out on the vegetation within it. Five multispectral image and laser point cloud sensors are placed uniformly along the 100-meter corridor; the images and position information acquired by the sensors are transmitted by optical fiber and wireless transmission, input into the convolutional neural network model for training, and used to confirm the position information of the measured objects. The positions of 13 plants are measured; the center position coordinates of 5 of them are (8, 3, 5), (19, 1, 4), (31, -2, 5), (57, 3, 3) and (82, 1, 4), and the result is obtained in 20 s. The position of the vegetation in the corridor is also measured by manual inspection: 13 plants are found, and the straight-line distances of the same 5 plants are 8.45, 20.76, 29.07, 58.34 and 82.91 meters, respectively. The difference between the x parameter of each center position obtained by the method and the measured actual straight-line distance is within 3 meters, a small deviation; the height and position deviation of the vegetation can further be calculated from the y and z values, so the method provides more parameter information while remaining accurate, and has practicality. Furthermore, the manual method takes about 3.5 hours to measure the positions over 100 meters, so the method also greatly reduces measurement time and manpower.
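The consistency check reported above can be reproduced from the quoted figures: comparing the x component of each of the five center coordinates with the manually measured straight-line distances shows every deviation below 3 meters.

```python
centers_x = [8, 19, 31, 57, 82]              # x of the five measured center coordinates
manual = [8.45, 20.76, 29.07, 58.34, 82.91]  # manual straight-line distances (meters)

# Deviation between the method's x coordinate and the manual measurement
deviations = [abs(m - x) for x, m in zip(centers_x, manual)]
print(max(deviations))  # largest deviation, well under the 3-meter bound
```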
Example 2:
the second embodiment of the invention is different from the first embodiment, and provides a positioning system based on multispectral image and laser point cloud, which comprises an image data acquisition module, an image data processing module, an image data transmission module and a three-dimensional data calculation module;
the image data processing module and the image data acquisition module and the image data processing module and the three-dimensional data calculation module are connected through the image data transmission module.
In this embodiment, the image data acquisition module includes a multispectral imaging unit and a laser point cloud unit.
In this embodiment, the image data processing module includes a convolutional neural network unit and a coordinate conversion unit.
In this embodiment, the image data transmission module includes a data storage unit and a transmission unit.
In this embodiment, the system further comprises a remote monitoring module, and the remote monitoring module is connected with the three-dimensional data calculation module.
In this embodiment, the image data acquisition module is configured to acquire the image and position information of the measured object in the power corridor. The image data processing module is connected with the image data acquisition module and is used for constructing a convolutional neural network model from the acquired image and position information of the measured object and training the neural network model. The image data transmission module is used for information transmission among the modules. The three-dimensional data calculation module receives the data information of the image data processing module and performs positioning calculation on the measured object. The remote monitoring module is connected with the three-dimensional data calculation module and is used for displaying the image information and the positioning result of the measured object.
In this embodiment, the multispectral imaging unit is used for recording image data of the object to be measured, including hue, saturation and brightness; the laser point cloud unit is used for collecting the position and the outline of a measured object to generate a point cloud image.
In this embodiment, the convolutional neural network unit establishes a convolutional neural network model by using the acquired image and position data, and performs data training; the coordinate conversion unit is used for converting the output of the neural network into three-dimensional coordinate data, so that subsequent positioning calculation of the measured object is facilitated.
In the present embodiment, the data storage unit is used to store information; the transmission unit is used for information transmission between the modules and the units.
It should be understood that the system provided in this embodiment involves connection relationships for operating the image data acquisition module, the image data processing module, the image data transmission module, the three-dimensional data calculation module and the remote monitoring module, which may be implemented, for example, by running a computer-readable program and adapting the program data interfaces of the modules.
It should be recognized that embodiments of the present invention can be realized and implemented by computer hardware, a combination of hardware and software, or by computer instructions stored in a non-transitory computer readable memory. The methods may be implemented in a computer program using standard programming techniques, including a non-transitory computer-readable storage medium configured with the computer program, where the storage medium so configured causes a computer to operate in a specific and predefined manner, according to the methods and figures described in the detailed description. Each program may be implemented in a high level procedural or object oriented programming language to communicate with a computer system. However, the program(s) can be implemented in assembly or machine language, if desired. In any case, the language may be a compiled or interpreted language. Furthermore, the program can be run on a programmed application specific integrated circuit for this purpose.
Further, the operations of processes described herein can be performed in any suitable order unless otherwise indicated herein or otherwise clearly contradicted by context. The processes described herein (or variations and/or combinations thereof) may be performed under the control of one or more computer systems configured with executable instructions, and may be implemented as code (e.g., executable instructions, one or more computer programs, or one or more applications) collectively executed on one or more processors, by hardware, or combinations thereof. The computer program includes a plurality of instructions executable by one or more processors.
Further, the method may be implemented in any type of computing platform operatively connected to a suitable interface, including but not limited to a personal computer, mini computer, mainframe, workstation, networked or distributed computing environment, separate or integrated computer platform, or in communication with a charged particle tool or other imaging device, and the like. Aspects of the invention may be embodied in machine-readable code stored on a non-transitory storage medium or device, whether removable or integrated into a computing platform, such as a hard disk, optically read and/or write storage medium, RAM, ROM, or the like, such that it may be read by a programmable computer, which when read by the storage medium or device, is operative to configure and operate the computer to perform the procedures described herein. Further, the machine-readable code, or portions thereof, may be transmitted over a wired or wireless network. The invention described herein includes these and other different types of non-transitory computer-readable storage media when such media include instructions or programs that implement the steps described above in conjunction with a microprocessor or other data processor. The invention also includes the computer itself when programmed according to the methods and techniques described herein. A computer program can be applied to input data to perform the functions described herein to transform the input data to generate output data that is stored to non-volatile memory. The output information may also be applied to one or more output devices, such as a display. In a preferred embodiment of the invention, the transformed data represents physical and tangible objects, including particular visual depictions of physical and tangible objects produced on a display.
As used in this application, the terms "component," "module," "system," and the like are intended to refer to a computer-related entity, either hardware, firmware, a combination of hardware and software, or software in execution. For example, a component may be, but is not limited to being: a process running on a processor, an object, an executable, a thread of execution, a program, and/or a computer. By way of example, both an application running on a computing device and the computing device can be a component. One or more components can reside within a process and/or thread of execution and a component can be localized on one computer and/or distributed between two or more computers. In addition, these components can execute from various computer readable media having various data structures thereon. The components may communicate by way of local and/or remote processes such as in accordance with a signal having one or more data packets (e.g., data from one component interacting with another component in a local system, distributed system, and/or across a network such as the internet with other systems by way of the signal).
It should be noted that the above-mentioned embodiments are only for illustrating the technical solutions of the present invention and not for limiting, and although the present invention has been described in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that modifications or equivalent substitutions may be made on the technical solutions of the present invention without departing from the spirit and scope of the technical solutions of the present invention, which should be covered by the claims of the present invention.
Claims (10)
1. A positioning method based on multispectral images and laser point clouds, characterized by comprising the following steps:
acquiring an image and position and posture data of a measured object in the power corridor by a multispectral image and laser point cloud method;
constructing a convolutional neural network from the image and the position and posture data;
and training the neural network to generate a positioning model of the measured object, and positioning the measured object.
2. The positioning method based on multispectral images and laser point clouds according to claim 1, characterized in that obtaining the image and the position and posture data of the measured object in the power corridor comprises the following steps:
measuring the laser point coordinates of the measured object by a laser point cloud method and converting the laser point coordinates into space rectangular coordinates; recording the hue, saturation and brightness of the measured object with a multispectral image; and fusing the multispectral data into an image of the measured object according to the space rectangular coordinates.
3. The multispectral image and laser point cloud-based positioning method of claim 2, wherein the convolutional neural network comprises a convolutional layer, a linear rectifying layer, a pooling layer and a full-link layer, the image of the object to be measured is input into the neural network, the convolutional layer extracts three-dimensional coordinate data of the image to be measured, the linear rectifying layer calculates the three-dimensional coordinate data by using an activation function, the pooling layer performs region segmentation on the coordinate data and averages the coordinate data, then performs global calculation through the full-link layer, and performs training of a neural network model without continuously inputting image information, and the neural network can be expressed as follows:
wherein: γ_a is the neural network output value, W_{u,l,a} is a filter element, and D_{u,v,l} is a multispectral image element recorded for the measured object.
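The layer sequence of claim 3 (convolution of the fused image elements D_{u,v,l} with filter elements W_{u,l,a}, linear rectification, region-averaged pooling, then a global fully connected calculation) can be sketched as below. The array sizes, the valid-convolution form and all names are assumptions, since the claim's formula is not reproduced in the text:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(image, filters):
    """Valid 2-D convolution: image (H, W, C) with filters (kh, kw, C, A)
    gives feature maps (H-kh+1, W-kw+1, A)."""
    kh, kw, c, a = filters.shape
    h, w, _ = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1, a))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = image[i:i + kh, j:j + kw, :]
            out[i, j, :] = np.tensordot(patch, filters, axes=([0, 1, 2], [0, 1, 2]))
    return out

def relu(x):
    # The linear rectification of claim 4: 0 for k <= 0, k for k > 0
    return np.maximum(x, 0.0)

def avg_pool(x, size=2):
    # Region segmentation followed by averaging, as in claim 3
    h, w, c = x.shape
    x = x[:h - h % size, :w - w % size, :]
    return x.reshape(h // size, size, w // size, size, c).mean(axis=(1, 3))

# Forward pass over a fused multispectral image D (height, width, channels)
D = rng.random((8, 8, 3))
W = rng.random((3, 3, 3, 4))            # filter elements W_{u,l,a}
features = avg_pool(relu(conv2d(D, W)))
fc = rng.random((features.size, 3))
gamma = features.reshape(-1) @ fc       # fully connected global calculation
print(gamma.shape)                      # three output values, e.g. a coordinate
```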
4. The method of multispectral image and laser point cloud based localization according to claim 3, wherein the activation function is defined as:
z = 0 (k ≤ 0), z = k (k > 0)
wherein: z is the calculated three-dimensional coordinate data, and k is the three-dimensional coordinate of the measured image.
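The piecewise activation of claim 4 is the standard rectified linear unit; a one-line sketch:

```python
def activation(k):
    """Piecewise activation of claim 4: z = 0 for k <= 0, z = k for k > 0."""
    return k if k > 0 else 0

print([activation(k) for k in (-2, 0, 3)])  # [0, 0, 3]
```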
5. The multi-spectral image and laser point cloud based localization method according to claim 4, wherein a three-dimensional coordinate system is established with the laser point cloud device as the coordinate origin, and (x_i, y_i, z_i) is set as the center coordinate of the measured object; the positioning model of the measured object can be expressed as follows:
wherein: g and j are the bottom diagonal and the side diagonal of the measured object, respectively.
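Since the positioning-model formula itself is not reproduced in the text, the following is only a plausible sketch of the quantities claim 5 names: the center (x_i, y_i, z_i) and the bottom and side diagonals g and j, computed here from an axis-aligned bounding box with the laser point cloud device at the origin (the bounding-box construction and the face chosen for j are assumptions):

```python
import math

def bounding_box_model(points):
    """Center (x_i, y_i, z_i) and the bottom-face / side-face diagonal
    lengths g, j of the axis-aligned bounding box of a point cloud,
    with the scanner at the coordinate origin."""
    xs, ys, zs = zip(*points)
    mins = (min(xs), min(ys), min(zs))
    maxs = (max(xs), max(ys), max(zs))
    center = tuple((lo + hi) / 2 for lo, hi in zip(mins, maxs))
    dx, dy, dz = (hi - lo for lo, hi in zip(mins, maxs))
    g = math.hypot(dx, dy)   # diagonal of the bottom face (x-y plane)
    j = math.hypot(dy, dz)   # diagonal of a side face (y-z plane)
    return center, g, j

center, g, j = bounding_box_model([(1, 1, 0), (4, 5, 0), (1, 1, 2), (4, 5, 2)])
print(center, g, j)  # (2.5, 3.0, 1.0) 5.0 ...
```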
6. A positioning system based on multispectral images and laser point clouds is characterized by comprising an image data acquisition module, an image data processing module, an image data transmission module and a three-dimensional data calculation module;
the image data transmission module connects the image data acquisition module to the image data processing module, and connects the image data processing module to the three-dimensional data calculation module.
7. The multi-spectral image and laser point cloud based localization system of claim 6 wherein said image data acquisition module comprises a multi-spectral image unit and a laser point cloud unit.
8. The multi-spectral image and laser point cloud based localization system of claim 7 wherein said image data processing module comprises a convolutional neural network unit and a coordinate transformation unit.
9. The multispectral image and laser point cloud-based localization system of claim 8, wherein the image data transmission module comprises a data storage unit and a transmission unit.
10. The multi-spectral image and laser point cloud based localization system of claim 9 further comprising a remote monitoring module connected to said three-dimensional data calculation module.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011359489.3A CN112419402A (en) | 2020-11-27 | 2020-11-27 | Positioning method and system based on multispectral image and laser point cloud |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112419402A true CN112419402A (en) | 2021-02-26 |
Family
ID=74842217
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011359489.3A Pending CN112419402A (en) | 2020-11-27 | 2020-11-27 | Positioning method and system based on multispectral image and laser point cloud |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112419402A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113139730A (en) * | 2021-04-27 | 2021-07-20 | 浙江悦芯科技有限公司 | Power equipment state evaluation method and system based on digital twin model |
CN114820800A (en) * | 2022-06-29 | 2022-07-29 | 山东信通电子股份有限公司 | Real-time inspection method and equipment for power transmission line |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106056591A (en) * | 2016-05-25 | 2016-10-26 | 哈尔滨工业大学 | Method for estimating urban density through fusion of optical spectrum image and laser radar data |
CN106767777A (en) * | 2016-11-16 | 2017-05-31 | 江苏科技大学 | A kind of underwater robot redundancy inertial navigation device |
CN107204037A (en) * | 2016-03-17 | 2017-09-26 | 中国科学院光电研究院 | 3-dimensional image generation method based on main passive 3-D imaging system |
CN208187418U (en) * | 2018-06-08 | 2018-12-04 | 广东电网有限责任公司 | A kind of power-line patrolling equipment being mounted in unmanned plane |
CN108981569A (en) * | 2018-07-09 | 2018-12-11 | 南京农业大学 | A kind of high-throughput hothouse plants phenotype measuring system based on the fusion of multispectral cloud |
CN109920007A (en) * | 2019-01-26 | 2019-06-21 | 中国海洋大学 | Three-dimensional image forming apparatus and method based on multispectral photometric stereo and laser scanning |
CN110687548A (en) * | 2019-10-25 | 2020-01-14 | 威海海洋职业学院 | Radar data processing system based on unmanned ship |
CN110796700A (en) * | 2019-10-21 | 2020-02-14 | 上海大学 | Multi-object grabbing area positioning method based on convolutional neural network |
CN111340797A (en) * | 2020-03-10 | 2020-06-26 | 山东大学 | Laser radar and binocular camera data fusion detection method and system |
Non-Patent Citations (1)
Title |
---|
YAN, Zhengliang; ZHANG, Jie; WANG, Xiaolong; XU, Rui; XU, Jie: "Research and Practice on Multispectral LiDAR Systems", 测绘技术装备 (Geomatics Technology and Equipment), no. 01 *
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||