CN112053425B - Multispectral image processing method and device and electronic equipment - Google Patents


Info

Publication number
CN112053425B
CN112053425B
Authority
CN
China
Prior art keywords
image
target image
channels
target
dimensional
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011051731.0A
Other languages
Chinese (zh)
Other versions
CN112053425A (en)
Inventor
池鹏可
邓杭
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Xaircraft Technology Co Ltd
Original Assignee
Guangzhou Xaircraft Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Xaircraft Technology Co Ltd filed Critical Guangzhou Xaircraft Technology Co Ltd
Priority to CN202011051731.0A priority Critical patent/CN112053425B/en
Publication of CN112053425A publication Critical patent/CN112053425A/en
Application granted granted Critical
Publication of CN112053425B publication Critical patent/CN112053425B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/04 Texture mapping
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10028 Range image; Depth image; 3D point clouds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10032 Satellite or aerial image; Remote sensing
    • G06T2207/10036 Multispectral image; Hyperspectral image

Abstract

The embodiment of the invention provides a multispectral image processing method and device and electronic equipment, relating to the technical field of images. The multispectral image processing method comprises: constructing a multi-dimensional grid model based on a plurality of frames of first images, the multi-dimensional grid model comprising a plurality of grid surfaces; selecting, from the first images, a first target image matched with each grid surface, and recording a first corresponding relation between the grid surface and the first target image; determining, according to the first target image, a second target image from multiple frames of second images acquired by other channels among the multiple wave band channels; and projecting, according to the first corresponding relation and the image pose information of the second target image, each second target image of the other channels onto the corresponding grid surface in the multi-dimensional grid model to obtain texture models corresponding to the other channels. The amount of computation needed for the texture models of a plurality of band channels is thereby reduced, calculation time is saved, and performance is improved.

Description

Multispectral image processing method and device and electronic equipment
Technical Field
The invention relates to the technical field of images, in particular to a multispectral image processing method and device and electronic equipment.
Background
With the development of image technology, remote sensing technology has gradually matured and is widely applied in many industries. At present, multiband spectral images have the advantage of carrying abundant remote sensing information and are widely used in the field of remote sensing.
However, creating a corresponding digital orthoimage for each band of spectral images requires a significant amount of computation. Thus, when corresponding digital orthoimages are constructed for spectral images of a plurality of wavelength bands, the amount of computation and the processing time are multiplied.
Disclosure of Invention
In view of the above, the present invention provides a multispectral image processing method, a multispectral image processing device, and an electronic apparatus, which reduce the computation amount for processing multiband spectral data and reduce the processing time consumption.
In order to achieve the above purpose, the embodiment of the present invention adopts the following technical solutions:
in a first aspect, an embodiment of the present invention provides a multispectral image processing method, where the multispectral image processing method includes: constructing a multi-dimensional grid model based on a plurality of frames of first images; the first image is image data collected by a preselected channel in a plurality of wave band channels; the multi-dimensional mesh model comprises a plurality of mesh surfaces; respectively evaluating a first target image matched with each grid surface from the first image, and recording a first corresponding relation between the grid surface and the first target image; determining a second target image from a plurality of frames of second images acquired from other channels in the plurality of wave band channels according to the first target image, wherein the second target image and the first target image have the same acquisition time; and projecting each second target image of the other channels to a corresponding grid surface in a multi-dimensional grid model according to the first corresponding relation and the image posture information of the second target image so as to obtain texture models corresponding to the other channels.
In a second aspect, an embodiment of the present invention provides a multispectral image processing device, including: the construction module is used for constructing a multi-dimensional grid model based on the multi-frame first image; the first image is image data collected by a preselected channel in a plurality of wave band channels; the multi-dimensional mesh model comprises a plurality of mesh surfaces; the evaluation module is used for respectively evaluating a first target image matched with each grid surface from the first image and recording a first corresponding relation between the grid surface and the first target image; the determining module is used for determining a second target image from a plurality of frames of second images acquired by other channels in the plurality of wave band channels according to the first target image, wherein the second target image has the same acquisition time as the first target image; and the mapping module is used for projecting each second target image of the other channels to a corresponding grid surface in a multi-dimensional grid model according to the first corresponding relation and the image posture information of the second target image so as to obtain texture models corresponding to the other channels.
In a third aspect, an embodiment of the present invention provides an electronic device, including: the system comprises a processor, a storage medium and a bus, wherein the storage medium stores machine-readable instructions executable by the processor, when the electronic device runs, the processor and the storage medium communicate through the bus, and the processor executes the machine-readable instructions to execute the steps of the method of the embodiment.
In a fourth aspect, the present invention provides a computer-readable storage medium, on which a computer program is stored, where the computer program is executed by a processor to perform the steps of the method described in the above embodiments.
Compared with the prior art, the multispectral image processing method provided by the embodiment of the invention first constructs, based on a plurality of frames of first images collected by a preselected channel among a plurality of wave band channels, a multi-dimensional grid model that is visible to all of the wave band channels; it then selects, from the first images, the first target image matched with each grid surface of the multi-dimensional grid model, and records the first corresponding relation between the grid surface and the first target image. A second target image is then determined, according to the first target image, from the plurality of frames of second images acquired by the other channels among the plurality of wave band channels. In this way, no large amount of computation needs to be performed on the data acquired by the other channels: using the first corresponding relation and the image pose information of the second target image directly, each second target image of the other channels is projected onto the corresponding grid surface in the multi-dimensional grid model, so that the texture models corresponding to the other channels are obtained. The computation required to process the multispectral image is greatly reduced, and the processing time is effectively shortened.
In order to make the aforementioned and other objects, features and advantages of the present invention comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed in the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present invention and therefore should not be considered as limiting its scope; for those skilled in the art, other related drawings can be obtained from these drawings without inventive effort.
Fig. 1 shows a schematic diagram of an electronic device provided by an embodiment of the present invention.
Fig. 2 is a first flowchart of the steps of the multispectral image processing method provided by the embodiment of the present invention.
Fig. 3 shows an example of the first correspondence relationship mentioned in the embodiment of the present invention.
Fig. 4 shows another example of the first correspondence relationship mentioned in the embodiment of the present invention.
Fig. 5 is a flowchart illustrating sub-steps of step S101 in fig. 2.
Fig. 6 is a flowchart illustrating sub-steps of step S102 in fig. 2.
Fig. 7 is a flowchart illustrating sub-steps of step S104 in fig. 2.
Fig. 8 is a second flowchart illustrating the sub-steps of step S104 in fig. 2.
Fig. 9 is a diagram illustrating an example of determining the second correspondence in the embodiment of the present invention.
Fig. 10 is a flowchart illustrating a second step of the multispectral image processing method according to the embodiment of the present invention.
Fig. 11 shows a third step of the multispectral image processing method according to the embodiment of the present invention.
Fig. 12 is a schematic diagram of a multispectral image processing apparatus according to an embodiment of the present invention.
Icon: 100-an electronic device; 110-a memory; 120-a processor; 130-a communication module; 300-a multispectral image processing device; 301-a building block; 302-an evaluation module; 303-a determination module; 304-mapping module.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. The components of embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the present invention, presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present invention without making any creative effort, shall fall within the protection scope of the present invention.
It is noted that relational terms such as "first" and "second," and the like, may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
Creating digital orthoimages is an important step in remote sensing technology. However, generating digital orthoimages is complex and requires substantial computational resources. Specifically, a dense three-dimensional point cloud corresponding to a given wave band is calculated from the spectral images collected in that wave band, the dense three-dimensional point cloud is processed with an interpolation algorithm to obtain an elevation model, and finally the digital orthoimage corresponding to the wave band is calculated based on the elevation model.
In the above process, calculating the dense three-dimensional point cloud is computationally expensive and time-consuming. Moreover, when digital orthoimages corresponding to a plurality of wave bands are calculated, the above process is repeated multiple times, multiplying both the amount of computation and the computation time.
In order to solve the above problem, embodiments of the present invention provide a multispectral image processing method and apparatus, and an electronic device.
Fig. 1 is a block diagram of an electronic device 100. The electronic device 100 may be, but is not limited to, a work device, an intelligent terminal (e.g., a ground station, a mobile phone), and a server.
In some embodiments, the working device may be an aerial device (such as a drone) carrying the multispectral camera, or may be the multispectral camera itself.
The multispectral camera is used to acquire multiband spectral images. It is provided with a plurality of independent imagers, each fitted with a dedicated optical filter, so that different imagers sense spectra in different wavelength ranges and can capture images in spectral bands such as Red, Green, Blue, Red Edge (RedEdge), and Near Infrared (NIR). Since each imager senses and outputs a spectral image of one wavelength band, for convenience of description the combination of an imager, its corresponding filter, and an image output interface is referred to as a wave band channel. Each time the multispectral camera shoots, every wave band channel acquires an image synchronously, yielding a multispectral image. In other words, the multispectral image may include image data acquired synchronously by the plurality of wave band channels of the multispectral camera over a number of shots.
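To illustrate how synchronized captures from several wave band channels form one multispectral image, the following sketch groups frames by shot time. The record layout (band name, capture time, file name) is hypothetical, not from the patent:

```python
from collections import defaultdict

# Hypothetical frame records: (band name, capture time, file name).
frames = [
    ("green", 0, "g_0.tif"), ("red", 0, "r_0.tif"), ("nir", 0, "n_0.tif"),
    ("green", 1, "g_1.tif"), ("red", 1, "r_1.tif"), ("nir", 1, "n_1.tif"),
]

def group_by_shot(frames):
    """Group the frames captured at the same instant into one multispectral shot."""
    shots = defaultdict(dict)
    for band, t, name in frames:
        shots[t][band] = name
    return dict(shots)

shots = group_by_shot(frames)
print(len(shots))        # number of shots
print(sorted(shots[0]))  # bands captured in the first shot
```

Each shot then contains one image per wave band channel, all taken at the same instant.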
In some embodiments, when the electronic device 100 is an aerial device equipped with a multispectral camera, the electronic device 100 may execute the multispectral image processing method provided by the embodiments of the present invention after receiving the image data collected by the multispectral camera.
In another embodiment, it is also possible that the electronic device 100 is a multi-spectral camera. After the multispectral image is acquired, the multispectral image processing method provided by the embodiment of the invention is executed based on the acquired multispectral image.
In other embodiments, the electronic device 100 may also be an intelligent terminal or a server in communication connection with an aerial device equipped with a multispectral camera, so that when the intelligent terminal or the server acquires multispectral images collected by the same multispectral camera from the outside, the multispectral image processing method provided in the embodiments of the present invention is executed.
Optionally, as shown in fig. 1, the electronic device 100 includes a memory 110, a processor 120, and a communication module 130. The memory 110, the processor 120 and the communication module 130 are electrically connected to each other directly or indirectly to realize data transmission or interaction. For example, the components may be electrically connected to each other via one or more communication buses or signal lines.
The memory 110 is used to store programs or data. The memory 110 may be, but is not limited to, a Random Access Memory (RAM), a Read-Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), and the like.
The processor 120 is used to read/write data or programs stored in the memory 110 and perform corresponding functions.
The communication module 130 is configured to establish a communication connection between the electronic device 100 and another communication terminal through the network, and to transmit and receive data through the network.
Referring to fig. 2, an embodiment of the invention provides a multispectral image processing method. As shown in fig. 2, the multispectral image processing method includes the following steps:
step S101, constructing a multi-dimensional grid model based on a plurality of frames of first images.
The first image is image data acquired by a preselected channel of a plurality of band channels. For example, the activated multispectral camera includes five wavelength band channels, such as red, green, blue, red edge, near infrared, and the like, and the green wavelength band channel may be used as the preselected channel, so that the first image collected by the preselected channel is the image data collected by the green wavelength band channel of the multispectral camera.
The multi-dimensional mesh model may be a mesh model used to characterize the appearance of the target object as presented by the first images. The multi-dimensional mesh model may be a 2.5D mesh or a 3D mesh, and is composed of a plurality of mesh surfaces. That is, the multi-dimensional mesh model may be obtained by stitching a plurality of mesh surfaces together to represent the appearance of the target object. In some embodiments, the multi-dimensional mesh model may be a multi-dimensional triangular mesh model, in which case it is composed of a plurality of triangular surfaces.
The target object may be something present in a multispectral camera field of view, for example, the target object may be a farmland, a city area, or the like.
It should be noted that the target objects represented by the image data collected by the multiple band channels of the same multispectral camera are the same. Therefore, the multi-dimensional grid model constructed based on the first images acquired by the preselected channel is visible to all the band channels. Here, "visible" may be understood as meaning universal: the multi-dimensional grid model that would be constructed using image data acquired by the other channels is the same as the multi-dimensional grid model constructed based on the first images acquired by the preselected channel.
The other channel may be a band channel other than the pre-selected channel among the plurality of band channels.
In some embodiments, a three-dimensional reconstruction may be performed based on the first images, and the multi-dimensional mesh model may be created from the results of the three-dimensional reconstruction (e.g., the image pose information and a sparse three-dimensional point cloud of the entire scene).
Step S102, first target images matched with each grid surface are respectively evaluated from the first images, and first corresponding relations between the grid surfaces and the first target images are recorded.
The first target image is a concept defined with respect to a mesh plane. For example, the first image a may be a first target image for the mesh plane a but not for the mesh plane b. That is, the first target images corresponding to different mesh surfaces may differ. Conversely, the same first image may be determined as the first target image of a plurality of mesh planes: for example, the first image c may be a first target image for both the mesh plane a and the mesh plane b. Further, each mesh plane may also correspond to a plurality of first target images. In short, whether a frame of the first image can serve as the first target image of a mesh plane depends on the degree of matching between the mesh plane and that first image.
In some embodiments, the first target image corresponding to each mesh plane is determined by multi-view texture mapping. After each mesh plane and its corresponding first target image are determined, a first correspondence characterizing the association between the two is established. In some embodiments, the first correspondence of a single mesh plane may be a one-to-many correspondence, such as the correspondence between the mesh plane a and a plurality of first target images shown in fig. 3. In other embodiments, the first correspondence of a single mesh plane may be represented by one or more one-to-one correspondences, such as the correspondence between the mesh plane a and a plurality of first target images shown in fig. 4.
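The two representations of the first correspondence are interchangeable; a minimal sketch (face and image names hypothetical):

```python
# One-to-many form: each mesh plane maps to all of its first target images.
first_correspondence = {
    "face_a": ["img_c", "img_d"],
    "face_b": ["img_c"],
}

# Equivalent set of one-to-one (mesh plane, first target image) pairs.
pairs = [(face, img)
         for face, imgs in first_correspondence.items()
         for img in imgs]

print(len(pairs))  # 3
```

Either form records the same association; the one-to-many form is more compact, while the one-to-one pairs are easier to iterate over.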
And step S103, determining a second target image from a plurality of frames of second images acquired from other channels in the plurality of band channels according to the first target image.
The second target image is image data acquired by other channels (i.e., second images acquired by other channels), and the second target image and the first target image have the same acquisition time.
In some embodiments, the second target image is determined from the image data acquired by other channels by taking the acquisition time as a link, based on the characteristic that the image contents of the image data acquired by different wave band channels at the same time point are nearly the same.
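A minimal sketch of this time-keyed lookup, assuming each image record carries its band and capture time (field names hypothetical):

```python
def find_second_target(first_target, second_images, tol=0):
    """Return the second image whose capture time matches the first target's,
    or None when no second image was taken at (approximately) the same time."""
    for img in second_images:
        if abs(img["time"] - first_target["time"]) <= tol:
            return img
    return None

first_target = {"band": "green", "time": 3}
second_images = [{"band": "red", "time": t} for t in (1, 2, 3, 4)]
match = find_second_target(first_target, second_images)
print(match["time"])  # 3
```

Because every wave band channel fires at the same instant, the acquisition time acts as the link between the channels' image streams.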
And step S104, according to the first corresponding relation and the image posture information of the second target image, projecting each second target image of other channels to a corresponding grid surface in the multi-dimensional grid model to obtain texture models corresponding to other channels.
In some embodiments, the second target image is mapped to the corresponding mesh surface based on the image pose information (POSE) of the second target image; after texture mapping of all the mesh surfaces has been completed using the second target images, the texture models corresponding to the other channels are obtained. The image pose information of the second target image can be obtained through three-dimensional reconstruction. For example, it is determined according to the second correspondence that the second target image corresponding to the mesh surface a is the image a or the image b; texture information to be mapped to the mesh surface a is then extracted from the image a or the image b according to its image pose information and attached to the mesh surface a.
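Projecting a point of a mesh surface into a second target image using its pose can be sketched with a plain pinhole camera model. The intrinsics and pose below are illustrative values, not the patent's:

```python
def project_point(fx, fy, cx, cy, R, t, X):
    """Project world point X into pixel coordinates (u, v).

    R (3x3 nested lists) and t (length-3 list) give the world-to-camera
    transform, i.e. the image pose; fx, fy, cx, cy are camera intrinsics.
    """
    # Transform into camera coordinates: Xc = R @ X + t.
    Xc = [sum(R[i][j] * X[j] for j in range(3)) + t[i] for i in range(3)]
    # Perspective division and intrinsic scaling.
    u = fx * Xc[0] / Xc[2] + cx
    v = fy * Xc[1] / Xc[2] + cy
    return u, v

identity = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
# A point straight ahead of the camera lands on the principal point.
u, v = project_point(1000.0, 1000.0, 320.0, 240.0, identity, [0, 0, 0], [0, 0, 2.0])
print(u, v)  # 320.0 240.0
```

Sampling the pixel at (u, v) for each projected mesh vertex is what attaches the second target image's texture to the mesh surface.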
Therefore, to meet the demand of obtaining texture models corresponding to all the wave band channels, whereas the related art must sequentially perform a series of operations (three-dimensional reconstruction, dense point cloud generation, elevation model calculation, orthorectification, and the like) on the image data of every wave band channel, the embodiment of the present invention performs this series of operations on the image data of only one wave band channel; the texture models corresponding to the other channels are obtained simply by performing texture mapping using the corresponding relation.
Obviously, the computation of the related art is several times that of the method provided by the embodiment of the present invention. In other words, the embodiment of the present invention effectively reduces the computation needed to construct texture models corresponding to a plurality of band channels, and this advantage grows as the number of band channels and the amount of acquired image data increase. In addition, the time consumed in generating texture models corresponding to a plurality of wave band channels is effectively reduced, improving efficiency.
It is understood that in some scenarios, one of the other channels may exist, and in other scenarios, a plurality of the other channels may exist. However, the processing principle (for example, establishing the corresponding second correspondence and constructing the corresponding texture model) of the image data acquired by any one of the other channels is the same, and therefore, for convenience of description, the embodiment of the present invention mainly takes one of the other channels as an example for description. If a scene with a plurality of other channels exists, the image data acquired by each other channel can be sequentially processed according to the method described in the embodiment of the present invention.
The details of embodiments of the invention are described below:
in some embodiments, in step S101, a multi-dimensional grid model visible to all the band channels is created by exploiting the fact that the image pose information and the dense point clouds of the image data acquired by the plurality of band channels are expressed in the same world coordinate system. As shown in fig. 5, step S101 may include the following sub-steps:
and a substep S101-1, performing three-dimensional reconstruction processing on the first image to obtain a sparse point cloud.
The first image is acquired by the preselected channel. In some embodiments, each wave band channel labels the image data it acquires; therefore, the first images may be obtained by screening the image data acquired by the multispectral camera according to the label corresponding to the preselected channel. For example, the wave band channel from which an image was acquired may be recorded in a band_idx (band index) field of the acquired image.
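The screening by channel label might look like the following; band_idx is the field the text mentions, while the record layout is an assumption:

```python
def select_channel(images, band_idx):
    """Keep only the images whose band_idx label matches the given channel."""
    return [img for img in images if img["band_idx"] == band_idx]

images = [
    {"band_idx": 0, "name": "blue_0"},
    {"band_idx": 1, "name": "green_0"},
    {"band_idx": 1, "name": "green_1"},
]
first_images = select_channel(images, band_idx=1)
print([img["name"] for img in first_images])  # ['green_0', 'green_1']
```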
In some embodiments, a structure-from-motion (SfM) three-dimensional reconstruction may be performed on each frame of the first image, so as to obtain the image pose information corresponding to the first image and a sparse point cloud of the target object shown in the first image.
And a substep S101-2 of generating dense three-dimensional point cloud corresponding to the scene according to the obtained sparse point cloud.
In some embodiments, a dense three-dimensional point cloud may be constructed from the sparse point cloud and the image pose information of each first image. For example, the dense three-dimensional point cloud may be calculated using a multi-view stereo (MVS) matching algorithm from the sparse point cloud and the image pose information of each first image.
It will be appreciated that the above sub-steps S101-1 and S101-2 are intended to create a dense three-dimensional point cloud of the target object represented by the first image, and therefore, other ways of constructing a dense three-dimensional point cloud may be used instead.
And a substep S101-3, performing interpolation processing on the dense three-dimensional point cloud to obtain an elevation model.
The elevation model may also be referred to as a digital surface model, which is a ground elevation model including the heights of ground surface buildings, bridges, trees, and the like.
In some embodiments, the interpolation process may be, but is not limited to, any one of an inverse distance weighting (IDW) algorithm, a nearest-neighbor interpolation algorithm, and a Delaunay triangulation interpolation algorithm.
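A minimal inverse distance weighting sketch over scattered elevation samples (the power parameter and sample layout are assumptions for illustration):

```python
def idw_elevation(samples, x, y, power=2.0):
    """Interpolate elevation at (x, y) from (sx, sy, sz) samples by IDW."""
    num = den = 0.0
    for sx, sy, sz in samples:
        d2 = (sx - x) ** 2 + (sy - y) ** 2
        if d2 == 0.0:
            return sz  # query coincides with a sample point
        w = d2 ** (-power / 2.0)  # weight = 1 / distance**power
        num += w * sz
        den += w
    return num / den

samples = [(0, 0, 10.0), (2, 0, 20.0), (0, 2, 20.0), (2, 2, 10.0)]
print(idw_elevation(samples, 1.0, 1.0))  # symmetric layout -> 15.0
```

Evaluating this interpolant on a regular grid over the dense point cloud's footprint is one way to produce an elevation raster.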
And a substep S101-4, calculating a multi-dimensional grid model according to the elevation model.
In some embodiments, the elevation Model may be a Digital Surface Model (DSM), i.e., a two-dimensional elevation image. Three-dimensional coordinates representing each point in the real space can be obtained from the DSM, and the Delaunay triangular mesh can be constructed by using the obtained three-dimensional points.
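On a regular DSM grid, splitting each cell into two triangles yields a simple triangle mesh; the sketch below is a stand-in for the Delaunay construction the text describes (cell size and grid layout assumed):

```python
def dsm_to_mesh(dsm, cell=1.0):
    """Triangulate a 2-D elevation grid: one vertex per cell corner and
    two triangles per grid cell, with vertex indices in row-major order."""
    rows, cols = len(dsm), len(dsm[0])
    vertices = [(c * cell, r * cell, dsm[r][c])
                for r in range(rows) for c in range(cols)]
    faces = []
    for r in range(rows - 1):
        for c in range(cols - 1):
            i = r * cols + c
            faces.append((i, i + 1, i + cols))           # upper-left triangle
            faces.append((i + 1, i + cols + 1, i + cols))  # lower-right triangle
    return vertices, faces

verts, faces = dsm_to_mesh([[1.0, 2.0], [3.0, 4.0]])
print(len(verts), len(faces))  # 4 2
```

For a true Delaunay triangulation of irregular points, a library routine such as scipy.spatial.Delaunay could be substituted.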
In some embodiments, the purpose of step S102 is to obtain, for each mesh plane, the first image that best provides its texture. As shown in fig. 6, step S102 may include the following sub-steps:
and a substep S102-1, calculating the cost value generated by projecting the first image of each frame to each grid surface according to the image posture information corresponding to the first image.
In some embodiments, the cost value corresponding to mapping each frame of the first image to each mesh plane may be calculated separately. For example, there are mesh surfaces a, b, c, and respectively calculate a cost value corresponding to the projection of each first image onto the mesh surface a; calculating a cost value corresponding to the projection of each first image to the grid surface b; and calculating a cost value corresponding to the projection of each first image to the grid surface c. Of course, the process of calculating the projection of the first image of each frame onto a different mesh plane may also be a process performed in parallel.
In some embodiments, the first image that can be projected onto each mesh plane may be evaluated first, and then the cost value for projecting the evaluated first image onto the mesh plane may be calculated in turn. For example, if there are mesh surfaces a, b, and c, a first image that can be mapped to the mesh surface a is estimated, and then a cost value for mapping the estimated first image to the mesh surface a is calculated, which is the same for the mesh surfaces b and c.
In the embodiment of the present invention, the cost value of mapping a first image onto a mesh surface is calculated as follows. First, evaluate whether the mesh surface is visible in the first image; if it is not visible, the cost value of the first image with respect to that mesh surface is 0. If it is visible, obtain the area, the gradient value and the color average of the mesh surface as projected into the first image. Based on the projected area (area), the in-mesh gradient value (gradient) and the color average (color), the corresponding cost value is calculated using the formula:
cost = weight1 * area + weight2 * gradient + weight3 * color
where cost represents the cost value, and weight1, weight2 and weight3 represent preset weight values.
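As a minimal sketch of this weighted-sum cost, the helper below uses illustrative weight values (the patent only states that weight1, weight2 and weight3 are preset), and the function and parameter names are our own:

```python
def projection_cost(area, gradient, color, weights=(0.4, 0.3, 0.3)):
    """cost = weight1*area + weight2*gradient + weight3*color.

    area, gradient and color are the projected mesh area, the in-mesh
    gradient value and the color average; the weight values here are
    illustrative placeholders, not values from the patent.
    """
    w1, w2, w3 = weights
    return w1 * area + w2 * gradient + w3 * color


def cost_for_image(visible, area=0.0, gradient=0.0, color=0.0):
    # Per the text above: an image in which the mesh surface is not
    # visible is assigned a cost value of 0.
    if not visible:
        return 0.0
    return projection_cost(area, gradient, color)
```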
And a substep S102-2 of screening out a first target image matched with the grid surface based on the cost value corresponding to the first image of each frame.
In some embodiments, a first image projected best to each mesh plane may be screened out using markov random field optimization as a first target image matching the mesh plane.
For example, the Markov random field optimization is carried out using the energy function:
E(label) = Σ_i E_data(Face_i, label_i) + Σ_(i,j) E_smooth(Face_i, Face_j, label_i, label_j)
where Face_i represents the i-th mesh surface and label_i represents the frame of the first image assigned to it. E(label) is the total energy to be minimized, i.e. the assignment is sought for which the cost value generated by projecting the chosen first images onto the mesh surfaces is smallest. E_data is the data term, characterizing the cost value of projecting each frame of the first image onto a mesh surface, and E_smooth(Face_i, Face_j, label_i, label_j) is a smoothness constraint term which, when the textures of two adjacent mesh surfaces come from different frames of the first image, keeps the colors of the two mesh textures as consistent as possible.
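An energy of this form is normally minimized with a dedicated MRF solver (e.g. graph cuts). As a hedged illustration of what the optimization does, here is a toy iterated-conditional-modes pass over per-face data costs and a pairwise smoothness penalty; all names are hypothetical and this stands in for, rather than reproduces, the patent's solver:

```python
def icm_label_faces(data_cost, adjacency, smooth_cost, n_iters=10):
    """Toy iterated-conditional-modes pass over the energy above.

    data_cost[f][l]   -- cost value of texturing mesh face f with first
                         image l (the E_data term)
    adjacency[f]      -- indices of faces adjacent to face f
    smooth_cost(a, b) -- pairwise penalty for neighbouring labels a, b
                         (standing in for the E_smooth term)
    Returns one label (first-image index) per face.
    """
    # Initialise each face with the label that minimises its data term.
    labels = [min(range(len(costs)), key=costs.__getitem__)
              for costs in data_cost]
    for _ in range(n_iters):
        changed = False
        for f, costs in enumerate(data_cost):
            def total(l):
                return costs[l] + sum(smooth_cost(l, labels[g])
                                      for g in adjacency[f])
            best = min(range(len(costs)), key=total)
            if best != labels[f]:
                labels[f], changed = best, True
        if not changed:  # converged to a local minimum of E(label)
            break
    return labels
```

With a strong smoothness penalty, adjacent faces are pushed toward labels the penalty treats as compatible; a production implementation would use a proper MRF solver.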
In some embodiments, the purpose of step S104 is to reduce the amount of computation and time consumption required for projecting the image information acquired by other channels onto the mesh surface. In some embodiments, as shown in fig. 7, the step S104 may further include the following sub-steps:
and a substep S104-1, obtaining a target grid surface corresponding to the first target image according to each first corresponding relation.
The target mesh surface is a mesh surface having a first correspondence with the first target image. It is understood that each set of first correspondences is composed of a first target image and a mesh surface. Thus, the target mesh surface corresponding to the first target image can be obtained using the first correspondence associated with that first target image.
And a substep S104-2 of projecting the second target image corresponding to the first target image to the target mesh surface based on the image posture information of the second target image.
In some embodiments, each second target image is projected onto the target mesh surface of the first target image corresponding to that second target image. Thus, after the texture mapping of all the mesh surfaces in the multi-dimensional mesh model is completed using the second target images, the texture models corresponding to the other channels are obtained. In other embodiments, as shown in fig. 8, the step S104 may further include the following sub-steps:
and a substep S104-3 of establishing a second corresponding relationship between the grid surface and the second target image according to the first corresponding relationship and the acquisition time corresponding to the first target image.
For example, in fig. 9, the grid surface a corresponds to the first target image with the acquisition time of (11:20:10), so that the corresponding relationship between the grid surface a and the second target image with the acquisition time of (11:20:10), that is, the second corresponding relationship, can be established.
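The time-based pairing in the example above can be sketched as follows; the dictionary layout and names are an assumption for illustration only:

```python
def second_correspondences(first_corr, first_times, second_by_time):
    """Derive the second correspondence (mesh surface -> second target
    image) from the first correspondence and acquisition times.

    first_corr     : {mesh_surface: first_target_image_id}
    first_times    : {first_target_image_id: acquisition_time}
    second_by_time : {acquisition_time: second_target_image_id}
    """
    return {face: second_by_time[first_times[img]]
            for face, img in first_corr.items()}
```

For the fig. 9 example, mesh surface a paired with the first target image taken at 11:20:10 maps to the second target image taken at 11:20:10.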
And a substep S104-4, projecting the second target image to a corresponding grid surface in the multi-dimensional grid model according to the second corresponding relation and the image posture information of the second target image.
Firstly, determining a grid surface corresponding to each frame of the second target image according to the second corresponding relation. Second, the second target image is texture mapped to the corresponding mesh surface based on the image pose information of the second target image.
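Texture-mapping a second target image onto a mesh surface from its image posture information implies projecting mesh vertices into the image. A standard pinhole-camera projection is sketched below; the patent does not spell out the camera model, so the intrinsics K and pose (R, t) are assumptions for illustration:

```python
import numpy as np

def project_vertex(K, R, t, X):
    """Project a 3-D mesh vertex X (world coordinates) into pixel
    coordinates using camera intrinsics K and the image pose (R, t)."""
    x_cam = R @ X + t                # world frame -> camera frame
    u, v, w = K @ x_cam              # camera frame -> homogeneous pixels
    return np.array([u / w, v / w])  # dehomogenise
```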
In some embodiments, the purpose of the above step S104 is to simplify the process of generating the texture models corresponding to the other bands. It can be understood that, with the above step S104, there is no need to preprocess the data acquired by the other channels (constructing a dense point cloud, creating an elevation model, etc.), calculate cost values, or perform Markov Random Field (MRF) optimization in order to select the optimal image for each mesh surface. Generally speaking, constructing a dense point cloud is a time-consuming and computation-intensive process, and step S104 avoids repeating it, thereby effectively reducing computation and time consumption.
Besides obtaining the texture models corresponding to the other channels, the texture model corresponding to the preselected channel also needs to be obtained. Therefore, as shown in fig. 10, the multispectral image processing method may further include the following steps:
step S201, performing texture mapping processing on each mesh surface of the multi-dimensional mesh model by using the first corresponding relationship and the first target image to obtain a texture model corresponding to the preselected channel.
In some embodiments, the first corresponding relationship is used to determine a mesh surface corresponding to the first target image, and the texture of the first target image is mapped to the mesh surface corresponding to the first target image based on the image pose information of the first target image.
It should be noted that constructing the texture model of the preselected channel and constructing the texture models of the other channels both project the acquired image data onto the same multi-dimensional mesh model. This ensures that the texture models obtained for the channels of each band are of consistent size, that the orthoimages (i.e. the texture models) corresponding to channels of different bands have the same size and resolution with each pixel point at the same coordinates, and therefore that the orthoimages corresponding to channels of different bands are completely aligned.
In addition, since texture models are generated for the channels of different bands, they serve multiple purposes in practical applications; for example, when applied to agricultural remote sensing, the normalized vegetation index corresponding to farmland can be calculated using the texture models. In some embodiments, as shown in fig. 11, the multispectral image processing method may further include:
step S202, calculating corresponding normalized vegetation indexes based on texture models corresponding to different wave band channels.
In some embodiments, the orthoimage corresponding to each band channel may be generated based on the texture model corresponding to that band channel. Then, the orthoimages corresponding to two different band channels are acquired, and the normalized vegetation index is calculated using the formula:
NDVI = (NIR - RED) / (NIR + RED)
where NDVI represents the normalized vegetation index, NIR represents the orthoimage corresponding to one band channel, and RED represents the orthoimage corresponding to another band channel.
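The per-pixel NDVI computation over two aligned orthoimages can be sketched with NumPy; the eps guard against a zero denominator is our addition, not part of the patent's formula:

```python
import numpy as np

def ndvi(nir, red, eps=1e-6):
    """NDVI = (NIR - RED) / (NIR + RED), computed per pixel over two
    aligned orthoimages of the same size."""
    nir = np.asarray(nir, dtype=np.float64)
    red = np.asarray(red, dtype=np.float64)
    return (nir - red) / (nir + red + eps)
```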
It can be understood that, in the embodiment of the present invention, the texture models behind the orthoimages used to calculate the normalized vegetation index are generated from the same multi-dimensional mesh model, and the image data corresponding to the same mesh surface in different texture models share the same acquisition time. In other words, the image data representing the same position of the target object in different texture models are mapped from image data acquired at the same moment, which avoids large errors between the computed normalized vegetation index and its true value.
In order to perform the corresponding steps in the above embodiments and the various possible manners, an implementation of the multispectral image processing apparatus 300 is given below. Optionally, the multispectral image processing apparatus 300 may adopt the device structure of the electronic device 100 shown in fig. 1. Referring to fig. 12, fig. 12 is a functional block diagram of a multispectral image processing apparatus 300 according to an embodiment of the present invention. It should be noted that the multispectral image processing apparatus 300 provided in the present embodiment has the same basic principle and technical effects as the above embodiments; for the sake of brevity, for parts not mentioned in this embodiment, reference may be made to the corresponding contents in the above embodiments. The multispectral image processing apparatus 300 includes: a construction module 301, an evaluation module 302, a determination module 303, and a mapping module 304.
A building module 301, configured to build a multi-dimensional grid model based on multiple frames of first images; the first image is image data collected by a preselected channel in a plurality of wave band channels; the multi-dimensional mesh model includes a plurality of mesh surfaces.
In some embodiments, step S101 described above may be performed by the building module 301.
An evaluation module 302, configured to evaluate a first target image matching each of the grid surfaces from the first images, respectively, and record a first corresponding relationship between the grid surface and the first target image.
In some embodiments, step S102 described above may be performed by the evaluation module 302.
A determining module 303, configured to determine, according to the first target image, a second target image from multiple frames of second images acquired by other channels in the multiple band channels, where the second target image has the same acquisition time as the first target image.
In some embodiments, step S103 described above may be performed by the determination module 303.
A mapping module 304, configured to project each second target image of the other channels to a corresponding grid surface in the multi-dimensional grid model according to the first corresponding relationship and the image posture information of the second target image, so as to obtain texture models corresponding to the other channels.
In some embodiments, step S104 described above may be performed by the mapping module 304.
In some embodiments, the multispectral image processing device 300 further comprises:
the mapping module 304 is further configured to perform texture mapping processing on each grid surface of the multi-dimensional grid model by using the first corresponding relationship and the first target image to obtain a texture model corresponding to the preselected channel;
and the calculation module is used for calculating the corresponding normalized vegetation index based on the texture models corresponding to the channels of different wave bands.
In some embodiments, the mapping module 304 further comprises:
the obtaining submodule is used for obtaining a target grid surface corresponding to the first target image according to each first corresponding relation;
and the projection submodule is used for projecting the second target image corresponding to the first target image to the target grid surface based on the image posture information of the second target image.
In some embodiments, the mapping module 304 is specifically configured to:
establishing a second corresponding relation between the grid surface and the second target image according to the first corresponding relation and the acquisition time corresponding to the first target image;
and projecting the second target image to a corresponding grid surface in the multi-dimensional grid model according to the second corresponding relation and the image posture information of the second target image.
In some embodiments, the building module 301 is further configured to:
performing three-dimensional reconstruction processing on the first image to obtain a sparse point cloud;
generating dense three-dimensional point cloud corresponding to a scene according to the acquired sparse point cloud;
performing interpolation processing on the dense three-dimensional point cloud to obtain an elevation model;
and calculating the multi-dimensional grid model according to the elevation model.
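As an illustration of the interpolation step above (dense point cloud to elevation model), the sketch below uses inverse-distance weighting; the patent does not name the interpolation scheme, so this is only one plausible choice, and all names are ours:

```python
import numpy as np

def elevation_model(points_xyz, grid_x, grid_y, power=2.0, eps=1e-12):
    """Interpolate a dense 3-D point cloud (N x 3 array with columns
    x, y, z) onto a regular grid to obtain an elevation model, using
    inverse-distance weighting."""
    xy, z = points_xyz[:, :2], points_xyz[:, 2]
    gx = grid_x.ravel()[:, None]          # (M, 1) grid x coordinates
    gy = grid_y.ravel()[:, None]          # (M, 1) grid y coordinates
    d2 = (gx - xy[:, 0]) ** 2 + (gy - xy[:, 1]) ** 2   # (M, N) sq. dists
    w = 1.0 / (d2 ** (power / 2.0) + eps)              # IDW weights
    zi = (w * z).sum(axis=1) / w.sum(axis=1)
    return zi.reshape(grid_x.shape)
```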
In some embodiments, evaluation module 302 is further configured to:
calculating the cost value generated by projecting the first image of each frame to each grid surface according to the image posture information corresponding to the first image;
and screening out the first target image matched with the grid surface based on the cost value corresponding to the first image of each frame.
Alternatively, the modules may be stored in the memory 110 shown in fig. 1 in the form of software or Firmware (Firmware) or be fixed in an Operating System (OS) of the electronic device 100, and may be executed by the processor 120 in fig. 1. Meanwhile, data, codes of programs, and the like required to execute the above-described modules may be stored in the memory 110.
In summary, the embodiments of the present invention provide a multispectral image processing method and apparatus, and an electronic device. The multispectral image processing method comprises the steps of constructing a multidimensional grid model based on a plurality of frames of first images; the first image is image data collected by a preselected channel in a plurality of wave band channels; the multi-dimensional mesh model comprises a plurality of mesh surfaces; respectively evaluating a first target image matched with each grid surface from the first image, and recording a first corresponding relation between the grid surface and the first target image; determining a second target image from a plurality of frames of second images acquired from other channels in the plurality of wave band channels according to the first target image, wherein the second target image and the first target image have the same acquisition time; and projecting each second target image of the other channels to a corresponding grid surface in a multi-dimensional grid model according to the first corresponding relation and the image posture information of the second target image so as to obtain texture models corresponding to the other channels. Therefore, the calculation amount of texture models corresponding to a plurality of band channels is reduced, the calculation time consumption is saved, and the performance is improved.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method can be implemented in other ways. The apparatus embodiments described above are merely illustrative, and for example, the flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, the functional modules in the embodiments of the present invention may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (12)

1. A multispectral image processing method, the multispectral image processing method comprising:
constructing a multi-dimensional grid model based on a plurality of frames of first images; the first image is image data collected by a preselected channel in a plurality of wave band channels; the multi-dimensional mesh model comprises a plurality of mesh surfaces;
calculating the cost value generated by projecting each frame of the first image to each grid surface according to the image posture information corresponding to the first image;
screening out a first target image matched with the grid surface based on the cost value corresponding to each frame of the first image, and recording a first corresponding relation between the grid surface and the first target image;
determining a second target image from a plurality of frames of second images acquired from other channels in the plurality of band channels according to the first target image, wherein the second target image and the first target image have the same acquisition time;
and projecting each second target image of the other channels to a corresponding grid surface in a multi-dimensional grid model according to the first corresponding relation and the image posture information of the second target image so as to obtain texture models corresponding to the other channels.
2. The multispectral image processing method according to claim 1, further comprising:
performing texture mapping processing on each grid surface of the multi-dimensional grid model by using the first corresponding relation and the first target image to obtain a texture model corresponding to the preselected channel;
and calculating corresponding normalized vegetation indexes based on texture models corresponding to different wave band channels.
3. The method for multispectral image processing as claimed in claim 1, wherein the step of projecting each second target image of the other channels to a corresponding mesh plane in the multidimensional mesh model according to the first corresponding relationship and the image pose information of the second target image comprises:
acquiring a target grid surface corresponding to the first target image according to each first corresponding relation;
and projecting the second target image corresponding to the first target image to the target grid surface based on the image posture information of the second target image.
4. The method for multispectral image processing as claimed in claim 1, wherein the step of projecting each second target image of the other channels to a corresponding mesh plane in the multidimensional mesh model according to the first corresponding relationship and the image pose information of the second target image comprises:
establishing a second corresponding relation between the grid surface and the second target image according to the first corresponding relation and the acquisition time corresponding to the first target image;
and projecting the second target image to a corresponding grid surface in the multi-dimensional grid model according to the second corresponding relation and the image posture information of the second target image.
5. The multispectral image processing method according to claim 1, wherein the step of constructing the multidimensional mesh model based on the plurality of frames of the first image comprises:
performing three-dimensional reconstruction processing on the first image to obtain a sparse point cloud;
generating dense three-dimensional point cloud corresponding to a scene according to the acquired sparse point cloud;
performing interpolation processing on the dense three-dimensional point cloud to obtain an elevation model;
and calculating the multi-dimensional grid model according to the elevation model.
6. A multispectral image processing device, comprising:
the construction module is used for constructing a multi-dimensional grid model based on the multi-frame first image; the first image is image data collected by a preselected channel in a plurality of wave band channels; the multi-dimensional mesh model comprises a plurality of mesh surfaces;
the evaluation module is used for calculating the cost value generated by projecting each frame of the first image to each grid surface according to the image posture information corresponding to the first image;
the evaluation module is further configured to screen out a first target image matched with the grid surface based on the cost value corresponding to each frame of the first image, and record a first corresponding relationship between the grid surface and the first target image;
the determining module is used for determining a second target image from a plurality of frames of second images acquired by other channels in the plurality of waveband channels according to the first target image, wherein the second target image and the first target image have the same acquisition time;
and the mapping module is used for projecting each second target image of the other channels to a corresponding grid surface in a multi-dimensional grid model according to the first corresponding relation and the image posture information of the second target image so as to obtain texture models corresponding to the other channels.
7. The multispectral image processing device according to claim 6, wherein the multispectral image processing device further comprises:
the mapping module is further configured to perform texture mapping processing on each grid surface of the multi-dimensional grid model by using the first corresponding relationship and the first target image to obtain a texture model corresponding to the preselected channel;
and the calculation module is used for calculating the corresponding normalized vegetation index based on the texture models corresponding to the channels of different wave bands.
8. The multispectral image processing device of claim 6, wherein the mapping module further comprises:
the obtaining submodule is used for obtaining a target grid surface corresponding to the first target image according to each first corresponding relation;
and the projection submodule is used for projecting the second target image corresponding to the first target image to the target grid surface based on the image posture information of the second target image.
9. The multispectral image processing device according to claim 6, wherein the mapping module is specifically configured to:
establishing a second corresponding relation between the grid surface and the second target image according to the first corresponding relation and the acquisition time corresponding to the first target image;
and projecting the second target image to a corresponding grid surface in the multi-dimensional grid model according to the second corresponding relation and the image posture information of the second target image.
10. The multispectral image processing device of claim 6, wherein the construction module is further configured to:
performing three-dimensional reconstruction processing on the first image to obtain a sparse point cloud;
generating dense three-dimensional point cloud corresponding to a scene according to the acquired sparse point cloud;
performing interpolation processing on the dense three-dimensional point cloud to obtain an elevation model;
and calculating the multi-dimensional grid model according to the elevation model.
11. An electronic device, comprising: a processor, a storage medium and a bus, the storage medium storing machine-readable instructions executable by the processor, the processor and the storage medium communicating via the bus when the electronic device is operating, the processor executing the machine-readable instructions to perform the method according to any one of claims 1 to 5.
12. A computer-readable storage medium, having stored thereon a computer program which, when executed by a processor, performs the method of any one of claims 1 to 5.
CN202011051731.0A 2020-09-29 2020-09-29 Multispectral image processing method and device and electronic equipment Active CN112053425B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011051731.0A CN112053425B (en) 2020-09-29 2020-09-29 Multispectral image processing method and device and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011051731.0A CN112053425B (en) 2020-09-29 2020-09-29 Multispectral image processing method and device and electronic equipment

Publications (2)

Publication Number Publication Date
CN112053425A CN112053425A (en) 2020-12-08
CN112053425B true CN112053425B (en) 2022-05-10

Family

ID=73605629

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011051731.0A Active CN112053425B (en) 2020-09-29 2020-09-29 Multispectral image processing method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN112053425B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103364781A (en) * 2012-04-11 2013-10-23 南京财经大学 Remote sensing data and geographical information system-based grainfield ground reference point screening method
CN104952075A (en) * 2015-06-16 2015-09-30 浙江大学 Laser scanning three-dimensional model-oriented multi-image automatic texture mapping method
CN111629193A (en) * 2020-07-28 2020-09-04 江苏康云视觉科技有限公司 Live-action three-dimensional reconstruction method and system

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE10354752B4 (en) * 2002-11-25 2006-10-26 Deutsches Zentrum für Luft- und Raumfahrt e.V. Method and device for the automatic equalization of single-channel or multi-channel images
US8619082B1 (en) * 2012-08-21 2013-12-31 Pelican Imaging Corporation Systems and methods for parallax detection and correction in images captured using array cameras that contain occlusions using subsets of images to perform depth estimation
US10410365B2 (en) * 2016-06-02 2019-09-10 Verily Life Sciences Llc System and method for 3D scene reconstruction with dual complementary pattern illumination
CN111366147A (en) * 2018-12-26 2020-07-03 阿里巴巴集团控股有限公司 Map generation method, indoor navigation method, device and equipment

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103364781A (en) * 2012-04-11 2013-10-23 南京财经大学 Remote sensing data and geographical information system-based grainfield ground reference point screening method
CN104952075A (en) * 2015-06-16 2015-09-30 浙江大学 Laser scanning three-dimensional model-oriented multi-image automatic texture mapping method
CN111629193A (en) * 2020-07-28 2020-09-04 江苏康云视觉科技有限公司 Live-action three-dimensional reconstruction method and system

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Coastal Dune Vegetation Mapping Using a Multispectral Sensor Mounted on an UAS; Chen Suo, et al.; Remote Sensing; 2019-08-02; vol. 11, no. 15; pp. 1-19 *
Remote Sensing Image Interpretation and 3D Visualization System Implementation for the Ulan Ul Lake Area; Zhang Lingjuan; China Master's Theses Full-text Database, Information Science and Technology; 2014-09-15; I140-560 *
Development and Application of a Multispectral Imaging Remote Sensing System Based on a Multi-rotor UAV; Yin Wenxin; China Master's Theses Full-text Database, Agricultural Science and Technology; 2018-09-15; D044-22 *

Also Published As

Publication number Publication date
CN112053425A (en) 2020-12-08

Similar Documents

Publication Publication Date Title
US11810272B2 (en) Image dehazing and restoration
CN108876926B (en) Navigation method and system in panoramic scene and AR/VR client equipment
US9454796B2 (en) Aligning ground based images and aerial imagery
Krig Computer vision metrics: Survey, taxonomy, and analysis
Barazzetti et al. True-orthophoto generation from UAV images: Implementation of a combined photogrammetric and computer vision approach
Li et al. Feature-preserving 3D mesh simplification for urban buildings
US20190197693A1 (en) Automated detection and trimming of an ambiguous contour of a document in an image
WO2019133922A1 (en) Point cloud denoising systems and methods
US20170278293A1 (en) Processing a Texture Atlas Using Manifold Neighbors
US8761506B1 (en) Pan sharpening digital imagery
CN107316286B (en) Method and device for synchronously synthesizing and removing rain and fog in image
CN114143528A (en) Multi-video stream fusion method, electronic device and storage medium
CN115641401A (en) Construction method and related device of three-dimensional live-action model
CN111640180A (en) Three-dimensional reconstruction method and device and terminal equipment
CN113436559B (en) Sand table dynamic landscape real-time display system and display method
CN115311434B (en) Tree three-dimensional reconstruction method and device based on oblique photography and laser data fusion
CN112270736A (en) Augmented reality processing method and device, storage medium and electronic equipment
WO2023093085A1 (en) Method and apparatus for reconstructing surface of object, and computer storage medium and computer program product
JP6943251B2 (en) Image processing equipment, image processing methods and computer-readable recording media
KR20200071565A (en) Apparatus and method for generating point cloud
JPWO2019026619A1 (en) Image processing apparatus, image processing method, and program
KR101021013B1 (en) A system for generating 3-dimensional geographical information using intensive filtering an edge of building object and digital elevation value
CN112053425B (en) Multispectral image processing method and device and electronic equipment
KR20180093727A (en) Automatically conversion system of GIS data
KR101809656B1 (en) System and method for detecting aquaculture farm facility based satellite image

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information
CB02 Change of applicant information

Address after: 510000 Block C, 115 Gaopu Road, Tianhe District, Guangzhou City, Guangdong Province

Applicant after: Guangzhou Jifei Technology Co.,Ltd.

Address before: 510000 Block C, 115 Gaopu Road, Tianhe District, Guangzhou City, Guangdong Province

Applicant before: Guangzhou Xaircraft Technology Co.,Ltd.

GR01 Patent grant
GR01 Patent grant