CN112529781B - Image processing method, device and readable storage medium

Info

Publication number
CN112529781B
Authority
CN
China
Prior art keywords
image
coordinate
determining
function
target
Prior art date
Legal status
Active
Application number
CN202110180839.8A
Other languages
Chinese (zh)
Other versions
CN112529781A (en)
Inventor
张恒
刘阳
孟铁军
Current Assignee
Quantaeye Beijing Technology Co ltd
Original Assignee
Quantaeye Beijing Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Quantaeye Beijing Technology Co ltd filed Critical Quantaeye Beijing Technology Co ltd
Priority to CN202110180839.8A
Publication of CN112529781A
Application granted
Publication of CN112529781B
Status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformation in the plane of the image
    • G06T3/40 Scaling the whole image or part thereof
    • G06T3/4038 Scaling the whole image or part thereof for image mosaicing, i.e. plane images composed of plane sub-images
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G06T5/80
    • G06T7/00 Image analysis
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10028 Range image; Depth image; 3D point clouds
    • G06T2207/10032 Satellite or aerial image; Remote sensing
    • G06T2207/10036 Multispectral image; Hyperspectral image
    • G06T2207/20 Special algorithmic details
    • G06T2207/20212 Image combination
    • G06T2207/20221 Image fusion; Image merging

Abstract

The disclosure relates to an image processing method, an image processing device, and a readable storage medium. The method includes: determining a reference image corresponding to a preset spectral channel from a first spectral image having a plurality of spectral channels, where the first spectral image contains a target to be adjusted; adjusting the target in each first image to be adjusted according to a first position of the target in the reference image to obtain an adjusted second image, where the first images are the images in the first spectral image corresponding to the spectral channels other than the preset spectral channel; and determining a processed second spectral image according to the plurality of second images and the reference image. The image processing method of the embodiments of the disclosure can effectively improve the positional accuracy of three-dimensional objects in spectral images.

Description

Image processing method, device and readable storage medium
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to an image processing method and apparatus, and a readable storage medium.
Background
An imaging spectrometer is a sensor that combines spectroscopy with imaging and can simultaneously acquire spatial and spectral information. Imaging spectrometers commonly operate in two modes: push-broom and whisk-broom (swing-broom).
After an imaging spectrometer finishes data acquisition, image stitching and spectral alignment are required to form a hyperspectral image. When a three-dimensional object is present in the captured scene, it forms a different image at each field angle of the camera. In a push-broom or whisk-broom imaging spectrometer, different field angles of the camera correspond to different spectral channels; because the shape and relative position of a three-dimensional object differ between the images of different spectral channels, pixel points on the object may fail to correspond accurately when those images are superimposed.
Disclosure of Invention
In view of the above, the present disclosure provides an image processing method, an image processing apparatus, and a readable storage medium to improve the accuracy of a stereo object in a spectral image.
According to an aspect of the present disclosure, there is provided an image processing method including: determining a reference image corresponding to a preset spectral channel from a first spectral image with a plurality of spectral channels, wherein the first spectral image comprises a target to be adjusted; adjusting the target in the first image to be adjusted according to the first position of the target in the reference image to obtain an adjusted second image, wherein the first image comprises images corresponding to other spectral channels except the preset spectral channel in the first spectral image; and determining a processed second spectrum image according to the plurality of second images and the reference image.
In a possible implementation manner, the first position includes first coordinates of feature points of the target in the reference image, and the adjusting the target in the first image to be adjusted according to the first position of the target in the reference image to obtain an adjusted second image includes: determining a first mapping relation between a first coordinate of a feature point and a second coordinate of the feature point in the first image; determining a second mapping relation between the reference image and the first image according to the first mapping relations of the plurality of feature points; and adjusting the pixel points corresponding to the target in the first image according to the second mapping relation to obtain the second image.
In one possible implementation, the method further includes: determining a first area to be adjusted in the first image according to coordinates of a plurality of feature points of the target in the first image; and determining the pixel points in the first region as the pixel points corresponding to the target.
In a possible implementation manner, the determining, according to coordinates of a plurality of feature points of the target in the first image, a first region to be adjusted in the first image includes: determining a second area defined by a plurality of characteristic points of the target according to the coordinates of the characteristic points in the first image, wherein the characteristic points are positioned on the boundary line of the second area; and according to the ratio of the second area to the first image, carrying out area expansion on the second area to obtain an expanded first area.
In a possible implementation manner, the determining a first mapping relationship between a first coordinate of the feature point and a second coordinate of the feature point in the first image includes: according to the first coordinate and the second coordinate, determining a first function of mapping the first coordinate to the abscissa of the second coordinate, and determining a second function of mapping the first coordinate to the ordinate of the second coordinate, wherein the first mapping relation comprises the first function and the second function.
In one possible implementation manner, determining a second mapping relationship between the reference image and the first image according to a first mapping relationship of a plurality of feature points includes: according to the first function and the second function, a third function between the coordinates of the pixel points of the first image and the abscissa of the pixel points of the reference image is determined, a fourth function between the coordinates of the pixel points of the first image and the ordinate of the pixel points of the reference image is determined, and the second mapping relation comprises the third function and the fourth function.
In one possible implementation manner, the first spectral image includes an image obtained by stitching and fusing a plurality of sequential images; the plurality of sequential images are acquired by a movable image acquisition device during movement.
In one possible implementation, the method further includes: acquiring a plurality of sequence images acquired by movable image acquisition equipment; the image acquisition equipment has camera parameters calibrated in advance; according to the camera parameters calibrated in advance, distortion elimination processing is carried out on the plurality of sequence images; determining the camera pose corresponding to each image subjected to distortion elimination according to the pre-calibrated camera parameters; determining world coordinates corresponding to the feature points in each image subjected to distortion removal according to the camera pose, the camera parameters and the pixel coordinates of the feature points in each image subjected to distortion removal; determining a splicing plane according to the world coordinates corresponding to the feature points in the image subjected to distortion elimination; determining the homography relation between each image subjected to distortion elimination processing and the splicing plane according to the camera pose and the camera parameters; mapping the images after the distortion elimination processing to the splicing plane according to the homography relation; and carrying out image fusion on the overlapped area of the images after the distortion elimination processing mapped to the splicing plane so as to obtain the first spectrum image.
According to another aspect of the present disclosure, there is provided an image processing apparatus including: the reference image determining module is used for determining a reference image corresponding to a preset spectral channel from a first spectral image with a plurality of spectral channels, wherein the first spectral image comprises a target to be adjusted; the adjusting module is used for adjusting a target in a first image to be adjusted according to a first position of the target in the reference image to obtain an adjusted second image, wherein the first image comprises images corresponding to other spectral channels except the preset spectral channel in the first spectral image; and the processing module is used for determining the processed second spectrum image according to the plurality of second images and the reference image.
In one possible implementation, the first position includes a first coordinate of a feature point of the target in the reference image, and the adjusting module includes: a first mapping relation determining unit, configured to determine a first mapping relation between a first coordinate of the feature point and a second coordinate of the feature point in the first image; a second mapping relation determining unit configured to determine a second mapping relation between the reference image and the first image according to a first mapping relation of a plurality of feature points; and the adjusting unit is used for adjusting the pixel points corresponding to the target in the first image according to the second mapping relation to obtain the second image.
In one possible implementation, the apparatus further includes: a first area determining module, configured to determine a first area to be adjusted in the first image according to coordinates of a plurality of feature points of the target in the first image; and the pixel point determining module is used for determining the pixel points in the first area as the pixel points corresponding to the target.
In one possible implementation manner, the first region determining module includes: a second area determining unit, configured to determine, according to coordinates of a plurality of feature points of the target in the first image, a second area defined by the plurality of feature points, where the plurality of feature points are located on a boundary line of the second area; and the expansion unit is used for performing region expansion on the second region according to the ratio of the second region to the first image to obtain an expanded first region.
In a possible implementation manner, the first mapping relationship determining unit includes: a first function determining subunit, configured to determine, according to the first coordinate and the second coordinate, a first function in which the first coordinate is mapped to an abscissa of the second coordinate; a second function determining subunit, configured to determine a second function in which the first coordinate is mapped to an ordinate of the second coordinate, where the first mapping relationship includes the first function and the second function.
In a possible implementation manner, the second mapping relationship determining unit includes: a third function determining subunit, configured to determine, according to the first function and the second function, a third function between coordinates of pixel points of the first image and abscissa of pixel points of the reference image; a fourth function determining subunit, configured to determine a fourth function between coordinates of pixel points of the first image and ordinate of pixel points of the reference image, where the second mapping relationship includes the third function and the fourth function.
In one possible implementation manner, the first spectral image includes an image obtained by stitching and fusing a plurality of sequential images; the plurality of sequence images comprise images acquired by a movable image acquisition device in a moving process according to a preset track.
In one possible implementation manner, the apparatus further includes: the acquisition module is used for acquiring a plurality of sequence images acquired by the movable image acquisition equipment; the image acquisition equipment has camera parameters calibrated in advance; the distortion elimination module is used for eliminating distortion of the plurality of sequence images according to the camera parameters calibrated in advance; the pose determining module is used for determining camera poses corresponding to the images after distortion elimination according to the camera parameters calibrated in advance; the world coordinate determination module is used for determining world coordinates corresponding to the feature points in the image subjected to distortion elimination according to the camera pose, the camera parameters and the pixel coordinates of the feature points in the image subjected to distortion elimination; the splicing plane determining module is used for determining a splicing plane according to the world coordinates; the homography relation determining module is used for determining the homography relation from each image subjected to distortion elimination processing to the splicing plane according to the camera pose and the camera parameters; the mapping module is used for mapping the images after the distortion elimination processing to the splicing plane according to the homography relation; and the splicing module is used for carrying out image fusion on the overlapped area of the images after the distortion elimination processing mapped to the splicing plane so as to obtain the first spectrum image.
According to another aspect of the present disclosure, there is provided an image processing apparatus including: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to perform the above method.
According to another aspect of the present disclosure, there is provided a non-transitory computer readable storage medium having computer program instructions stored thereon, wherein the computer program instructions, when executed by a processor, implement the above-described method.
In the embodiment of the disclosure, a reference image corresponding to a preset spectral channel is determined from a first spectral image having a plurality of spectral channels; the target in each first image to be adjusted is adjusted according to the first position of the target in the reference image to obtain an adjusted second image; and the processed second spectral image is determined according to the plurality of second images and the reference image. The position of the target in the first images corresponding to the other spectral channels can thus be adjusted with the reference image as the benchmark, so that the targets in the images corresponding to different spectral channels are aligned and the positional accuracy of the target in the spectral image is improved.
Other features and aspects of the present disclosure will become apparent from the following detailed description of exemplary embodiments, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate exemplary embodiments, features, and aspects of the disclosure and, together with the description, serve to explain the principles of the disclosure.
FIG. 1 shows a flow diagram of an image processing method according to an embodiment of the present disclosure;
FIG. 2 illustrates a flow chart for adjusting a target according to an embodiment of the present disclosure;
FIG. 3 illustrates a schematic view of a region defined by a plurality of feature points in accordance with an embodiment of the present disclosure;
FIG. 4 shows a schematic view of an expanded first region in accordance with an embodiment of the present disclosure;
FIG. 5 shows a flow chart of a method of generating a first spectral image according to an embodiment of the present disclosure;
FIG. 6 shows a block diagram of an image processing apparatus according to an embodiment of the present disclosure;
FIG. 7 shows a block diagram of an electronic device in accordance with an embodiment of the disclosure;
fig. 8 shows a block diagram of an image processing apparatus according to an embodiment of the present disclosure.
Detailed Description
Various exemplary embodiments, features and aspects of the present disclosure will be described in detail below with reference to the accompanying drawings. In the drawings, like reference numbers can indicate functionally identical or similar elements. While the various aspects of the embodiments are presented in drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
The word "exemplary" is used exclusively herein to mean "serving as an example, embodiment, or illustration. Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
Furthermore, in the following detailed description, numerous specific details are set forth in order to provide a better understanding of the present disclosure. It will be understood by those skilled in the art that the present disclosure may be practiced without some of these specific details. In some instances, methods, means, elements and circuits that are well known to those skilled in the art have not been described in detail so as not to obscure the present disclosure.
Fig. 1 shows a flowchart of an image processing method according to an embodiment of the present disclosure. As shown in fig. 1, the image processing method includes:
step 11, determining a reference image corresponding to a preset spectral channel from a first spectral image with a plurality of spectral channels, wherein the first spectral image comprises a target to be adjusted;
step 12, adjusting the target in the first image to be adjusted according to the first position of the target in the reference image to obtain an adjusted second image, wherein the first image comprises images corresponding to other spectral channels except the preset spectral channel in the first spectral image;
and step 13, determining the processed second spectrum image according to the plurality of second images and the reference image.
In one possible implementation, the image processing method may be performed by an electronic device such as a terminal device or a server. The terminal device may be User Equipment (UE), a mobile device, a user terminal, a cellular phone, a cordless phone, a Personal Digital Assistant (PDA), a handheld device, a computing device, a vehicle-mounted device, a wearable device, or the like, and the method may be implemented by a processor calling computer-readable instructions stored in a memory. Alternatively, the image processing method may be performed by a server.
In one possible implementation manner, in step 11, the first spectral image with multiple spectral channels may be an image acquired by an image acquisition device with multiple spectral channels, for example, the image acquisition device may be a multispectral camera, an imaging spectrometer, a multispectral scanner, or the like, which is not limited by the embodiment of the present disclosure.
In one possible implementation, the first spectral image having a plurality of spectral channels may be an image obtained by fusing a plurality of sequential images acquired by the image acquisition device. The image processing method may further include: fusing a plurality of sequence images acquired by image acquisition equipment to obtain a first spectral image with a plurality of spectral channels; wherein the plurality of sequential images have a plurality of spectral channels.
In a possible implementation manner, in step 11, the target may be a stereoscopic (three-dimensional) object in the spectral image, and the target to be adjusted may be a stereoscopic object marked in advance in the first spectral image: it may be selected manually, or it may be recognized from the first spectral image by an artificial intelligence technique such as image recognition. The disclosed embodiments do not limit how the target to be adjusted is selected.
In a possible implementation manner, in step 11, the preset spectral channel may be one designated channel among the plurality of spectral channels, for example the channel whose band lies in the middle of the spectral range, or the channel of the highest band; this disclosure does not limit the choice. According to the preset spectral channel, the image corresponding to it can be determined from the first spectral image having the plurality of spectral channels in step 11, and this image can be used as the reference image.
In one possible implementation, in step 12, the first image may include images corresponding to other spectral channels than the preset spectral channel in the first spectral image, for example, the reference image may be an image corresponding to a spectral channel in a middle wavelength band, and the first image to be adjusted may be an image corresponding to a spectral channel in another wavelength band than the spectral channel in the middle wavelength band.
In one possible implementation, in step 12, the first position of the target may include pixel coordinates of a pixel point of the stereoscopic object in the reference image. By taking the pixel coordinates of the target in the reference image as a reference, the pixel coordinates of the pixel points of the target in the first image can be adjusted in step 12, so that the alignment of the three-dimensional objects in the images corresponding to different spectral channels can be realized, and the adjusted second image can be obtained.
In one possible implementation, in step 13, the processed second spectral image may be obtained by superimposing the plurality of second images and the reference image in spectral-channel order. Because step 12 aligns the three-dimensional object across the images of the different spectral channels, the three-dimensional object in the second spectral image obtained by superimposing the second images and the reference image is accurately positioned without offset, yielding an accurate second spectral image.
In one possible implementation, for the superposition between the images corresponding to the multiple spectral channels, a person skilled in the art may implement the superposition by using an existing computer vision technology (for example, OpenCV technology), and the embodiment of the present disclosure is not limited thereto.
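As an illustrative sketch of this superposition (not part of the patent), the following Python snippet stacks the reference image and the adjusted second images into a spectral cube, assuming each is a single-channel numpy array of equal size and that the band order is known; all names here are assumptions:

```python
import numpy as np

def superimpose_channels(reference_image, second_images, reference_index):
    """Stack the reference image and the adjusted per-channel images
    back into an (H, W, C) spectral cube in spectral-channel order.

    reference_image : (H, W) array for the preset spectral channel
    second_images   : list of (H, W) arrays for the other channels,
                      already adjusted against the reference image
    reference_index : position of the preset channel in the band order
    """
    channels = list(second_images)
    channels.insert(reference_index, reference_image)
    return np.stack(channels, axis=-1)  # (H, W, C) second spectral image
```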
In the embodiment of the disclosure, a reference image corresponding to a preset spectral channel is determined from a first spectral image having a plurality of spectral channels; the target in each first image to be adjusted is adjusted according to the first position of the target in the reference image to obtain an adjusted second image; and the processed second spectral image is determined according to the plurality of second images and the reference image. The position of the target in the first images corresponding to the other spectral channels can thus be adjusted with the reference image as the benchmark, so that the targets in the images corresponding to different spectral channels are aligned and the positional accuracy of the target in the spectral image is improved.
In one possible implementation, in step 12, to facilitate the adjustment of the stereoscopic object in the first image, the first position may include a first coordinate of the feature point of the target in the reference image. Through the first coordinates of the feature points, the mapping relation between the reference image and the image to be adjusted can be determined, and then the adjustment of the three-dimensional object in the first image can be realized according to the mapping relation.
FIG. 2 illustrates a flow chart for adjusting a target according to an embodiment of the disclosure. In a possible implementation manner, as shown in fig. 2, in step 12, adjusting the target in the first image to be adjusted according to the first position of the target in the reference image, to obtain an adjusted second image, may include:
step 121, determining a first mapping relation between a first coordinate of the feature point and a second coordinate of the feature point in the first image;
step 122, determining a second mapping relation between the reference image and the first image according to the first mapping relation of the plurality of feature points;
and step 123, adjusting the pixel points corresponding to the target in the first image according to the second mapping relation to obtain a second image.
In a possible implementation manner, the feature points of the target may be extracted by an existing feature point extraction algorithm, for example the Features from Accelerated Segment Test (FAST) algorithm or the Oriented FAST and Rotated BRIEF (ORB) algorithm; the embodiments of the present disclosure are not limited thereto.
In a possible implementation manner, as described above, the target to be adjusted in the first spectral image may be a pre-calibrated three-dimensional object, and after the three-dimensional object is calibrated, feature points of the three-dimensional object in images corresponding to different spectral channels may be extracted based on a feature point extraction algorithm. It is to be understood that the feature points of the solid object in the images corresponding to different spectral channels may correspond to each other, and the feature points in the image corresponding to each spectral channel may be multiple, for example, the feature points of the cube object in the image may include 8 vertices of the cube object, and then the feature points of the cube object in the image corresponding to each spectral channel are all the 8 vertices.
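A minimal sketch of extracting and matching the target's feature points between the reference image and a first image with ORB (one of the algorithms named above), using OpenCV; the optional target mask and all names are assumptions, not the patent's own code:

```python
import cv2

def match_target_features(reference_image, first_image, target_mask=None):
    """Extract ORB features of the target and match them between the
    reference image and a first image to be adjusted."""
    orb = cv2.ORB_create(nfeatures=500)
    kp_ref, des_ref = orb.detectAndCompute(reference_image, target_mask)
    kp_img, des_img = orb.detectAndCompute(first_image, target_mask)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_ref, des_img), key=lambda m: m.distance)
    # first coordinates (reference image) and second coordinates (first image)
    first_coords = [kp_ref[m.queryIdx].pt for m in matches]
    second_coords = [kp_img[m.trainIdx].pt for m in matches]
    return first_coords, second_coords
```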
In one possible implementation manner, in step 121, determining a first mapping relationship between the first coordinate of the feature point and the second coordinate of the feature point in the first image may include:
according to the first coordinate and the second coordinate, a first function of mapping the first coordinate to the abscissa of the second coordinate is determined, a second function of mapping the first coordinate to the ordinate of the second coordinate is determined, and the first mapping relation comprises the first function and the second function.
In one possible implementation, the first function mapping the first coordinate to the abscissa of the second coordinate may be determined by, for example, fitting a B-spline surface by interpolation from the first coordinates (x, y) to the abscissas x' of the second coordinates (x', y'), and then deriving the first function from that surface. The first function may be a binary function taking the first coordinate (x, y) as the independent variable and the abscissa x' of the second coordinate as the dependent variable:

x' = f(x, y)

Similarly, the second function mapping the first coordinate to the ordinate of the second coordinate may be determined by fitting a B-spline surface by interpolation from the first coordinates (x, y) to the ordinates y' of the second coordinates (x', y'), and deriving the second function from that surface. The second function may be a binary function taking the first coordinate (x, y) as the independent variable and the ordinate y' of the second coordinate as the dependent variable:

y' = g(x, y)

The first mapping relation can then be expressed as:

(x', y') = (f(x, y), g(x, y))
it should be noted that, although the way of determining the first function and the second function as above is described by taking a B-spline surface as an example, those skilled in the art will understand that the present disclosure should not be limited thereto. In fact, any type of curved surface can be fitted by interpolation as long as the mapping relationship between the horizontal and vertical coordinates of the first coordinate and the second coordinate can be reflected.
In the embodiment of the present disclosure, by determining the first function and the second function according to the feature points, it is convenient to derive the mapping relationship between the reference image and the first image to be adjusted according to the first function and the second function.
In one possible implementation manner, in step 122, determining a second mapping relationship between the reference image and the first image according to the first mapping relationship of the plurality of feature points includes:
according to the first function and the second function, a third function between the coordinates of the pixel points of the first image and the abscissa of the pixel points of the reference image is determined, a fourth function between the coordinates of the pixel points of the first image and the ordinate of the pixel points of the reference image is determined, and the second mapping relation comprises the third function and the fourth function.
In a possible implementation manner, after the first function and the second function are obtained, the third function and the fourth function may be derived through function transformation. The third function may be a binary function taking the coordinates of a pixel point of the first image as the independent variable and the abscissa of the corresponding pixel point of the reference image as the dependent variable, and the fourth function may be a binary function taking the coordinates of a pixel point of the first image as the independent variable and the ordinate of the corresponding pixel point of the reference image as the dependent variable.
For example, continuing the example above, let the first mapping relation be expressed as

(x', y') = (f(x, y), g(x, y))

By function transformation (inverting the first mapping relation), a third function

x = u(x', y')

and a fourth function

y = v(x', y')

can be obtained, and the second mapping relation can then be expressed as:

(x, y) = (u(x', y'), v(x', y'))
In a possible implementation manner, in step 123, adjusting the pixel points corresponding to the target in the first image according to the second mapping relation to obtain the second image may be done by substituting the coordinates (x', y') of each pixel point corresponding to the target in the first image into the third function and the fourth function; the results (the x and y values) are the adjusted coordinates of that pixel point. It can be understood that after all the pixel points corresponding to the target in the first image are adjusted, the adjusted second image is obtained.
In the embodiment of the disclosure, the second mapping relationship between the reference image and the first image is further determined according to the first mapping relationship between the reference image and the feature points in the first image, so that the second mapping relationship between the images can be conveniently determined according to the first mapping relationship between the feature points, and then the first image is adjusted according to the second mapping relationship. The second mapping relation reflects the mapping relation from the first image to the reference image, so that when the pixel point of the target in the first image is adjusted, the coordinate of the pixel point corresponding to the target in the first image can be accurately adjusted by taking the reference image as the reference, and the adjusted second image is obtained.
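A sketch of step 123 under the same assumptions. Note the direction: the patent pushes each source pixel forward through the third and fourth functions, while cv2.remap gathers, asking for each destination pixel which source coordinate to sample; the gather formulation therefore evaluates the first mapping relation (the f and g fitted above) on the destination grid, which is equivalent for this purpose:

```python
import cv2
import numpy as np

def adjust_target(first_image, f, g):
    """Resample the first image so that the target aligns with the
    reference image. For each destination pixel (x, y), cv2.remap
    samples the source at (f(x, y), g(x, y)); this is the gather
    equivalent of pushing source pixels through the third and fourth
    functions as described in step 123."""
    h, w = first_image.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    map_x = f.ev(xs.ravel(), ys.ravel()).reshape(h, w).astype(np.float32)
    map_y = g.ev(xs.ravel(), ys.ravel()).reshape(h, w).astype(np.float32)
    return cv2.remap(first_image, map_x, map_y,
                     interpolation=cv2.INTER_LINEAR)
```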
In practical applications, adjusting the target actually means adjusting the pixel points of the target within a region of the first image. Although the region containing the target in the first image may be delimited manually, a manually delimited region may not contain the target accurately; for example, it may be too large or too small. In this case, the region where the target is located may be determined automatically.
In one possible implementation, the image processing method may further include:
determining a first area to be adjusted in the first image according to coordinates of a plurality of feature points of the target in the first image; and determining the pixel points in the first area as pixel points corresponding to the target.
In a possible implementation manner, the first area to be adjusted in the first image is determined according to coordinates of a plurality of feature points of the target in the first image, which may be an area defined by the plurality of feature points according to coordinates of the plurality of feature points in the first image, for example, a polygonal area defined by the plurality of feature points. Fig. 3 shows a schematic diagram of a region defined by a plurality of feature points according to an embodiment of the present disclosure, and a polygonal region as shown in fig. 3 may be a region defined by a plurality of feature points.
In the embodiment of the present disclosure, the first region is determined according to the feature point, so that the determined first region can accurately contain the target, and then the pixel point in the first region is determined as the pixel point corresponding to the target, so that the adjustment of the pixel point in the first region can be realized.
In practical applications, because not every edge of a solid object yields enough extracted feature points, the region defined by the feature points may not completely enclose the solid object; that is, the region may contain only part of the object's pixel points. In this case, the region defined by the plurality of feature points may be expanded.
In a possible implementation manner, the determining, according to coordinates of a plurality of feature points of the target in the first image, a first region to be adjusted in the first image includes:
determining a second area defined by a plurality of characteristic points according to the coordinates of the characteristic points of the target in the first image, wherein the characteristic points are positioned on the boundary line of the second area;
and according to the ratio of the second area to the first image, carrying out area expansion on the second area to obtain an expanded first area.
In one possible implementation, according to the coordinates of the plurality of feature points in the first image, a region (referred to as a second region) defined by the plurality of feature points may be determined, and each feature point is located on a boundary line of the second region, as shown in fig. 3.
In one possible implementation, the second region may be expanded according to a ratio between the second region and the first image. The ratio between the second region and the first image may be a ratio between the number of pixels in the second region and the number of pixels in the first image, a ratio between the maximum width of the second region and the width of the first image, a ratio between the maximum length of the second region and the length of the first image, or the like.
It should be noted that, although the manner of the ratio between the second region and the first image is described above by taking the number of pixels, the width and the length as examples, it can be understood by those skilled in the art that the present disclosure should not be limited thereto. In fact, the user can flexibly set the ratio between the second region and the first image according to the actual application scene, as long as the ratio of the second region to the first image can be reflected.
In a possible implementation manner, the second region is expanded according to the ratio between the second region and the first image to obtain the expanded first region. For example, if the ratio of the number of pixels in the second region to the number of pixels in the first image is 30%, the second region may be expanded to 1.3 times its size, i.e. enlarged by 30%, and the 1.3-times region may be taken as the expanded first region. Fig. 4 shows a schematic diagram of an expanded first region according to an embodiment of the present disclosure; as shown in Fig. 4, the first region is larger than the second region.
In the embodiment of the disclosure, the first region is obtained by expanding the second region, so that the number of pixels contained in the expanded first region is greater than that of the second region, and thus the three-dimensional object can be completely contained in the first region, and the accuracy of adjusting the target in the first image can be improved.
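A sketch of both steps: the second region taken as the convex hull of the feature points, then expanded about its centroid by the pixel-count ratio, per the 30% example above. Scaling the coordinates linearly by (1 + ratio) is one reading of "expanded to 1.3 times"; the names are illustrative:

```python
import cv2
import numpy as np

def expand_region(feature_points, image_shape):
    """Second region = convex hull of the target's feature points
    (each feature point lies on the hull boundary); first region =
    hull scaled about its centroid by (1 + ratio), where
    ratio = hull area / image area (e.g. 0.3 -> 1.3x)."""
    pts = np.asarray(feature_points, dtype=np.float32)
    hull = cv2.convexHull(pts)                      # boundary polygon
    ratio = cv2.contourArea(hull) / float(image_shape[0] * image_shape[1])
    centroid = hull.reshape(-1, 2).mean(axis=0)
    expanded = (hull.reshape(-1, 2) - centroid) * (1.0 + ratio) + centroid
    return expanded.astype(np.int32)                # polygon of first region
```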
In a possible implementation manner, when an image of a larger area is desired, the field angle of the image acquisition device is a limitation, so multiple images may be captured continuously and then stitched and fused into a panoramic image. The first spectral image may therefore include an image obtained by stitching and fusing a plurality of sequence images, where the plurality of sequence images are acquired by a movable image acquisition device during movement.
In a possible implementation manner, for a movable image capturing device, under the condition that a movement track of the image capturing device is preset, a plurality of sequence images may be the movable image capturing device and are captured in the moving process according to the preset movement track; under the condition that the movement track of the image acquisition device is not preset, the plurality of sequence images may be movable image acquisition devices, and are acquired in the movement process according to a random movement track, which is not limited in the embodiment of the present disclosure.
When the plurality of sequence images are acquired along a preset moving track, they can be captured at a fixed height and speed, so that a better fusion result is obtained when they undergo image fusion: the fused image is sharper and its edges are crisper.
In a possible implementation manner, the preset trajectory may be a moving trajectory of the image capturing device when capturing the image in the target area, which is set according to actual requirements.
Fig. 5 shows a flow chart of a method of generating a first spectral image according to an embodiment of the present disclosure. In a possible implementation manner, in the case that the first spectrum image is an image obtained by stitching and fusing a plurality of sequential images, as shown in fig. 5, the method may include:
step 01, acquiring a plurality of sequence images acquired by movable image acquisition equipment; the image acquisition equipment has camera parameters calibrated in advance;
step 02, distortion elimination processing is carried out on a plurality of sequence images according to camera parameters calibrated in advance;
step 03, determining camera poses corresponding to the images subjected to distortion elimination respectively according to camera parameters calibrated in advance;
step 04, determining world coordinates corresponding to the feature points in each distortion-removed image according to the camera pose, the camera parameters and the pixel coordinates of the feature points in each distortion-removed image;
step 05, determining a splicing plane according to world coordinates corresponding to the feature points in each image subjected to distortion elimination;
step 06, determining the homography relation between each image subjected to distortion elimination processing and the splicing plane according to the camera pose and the camera parameters;
step 07, mapping each image after distortion elimination processing to a splicing plane according to a homography relation;
and step 08, carrying out image fusion on the overlapped area of the images subjected to distortion elimination processing and mapped to the splicing plane to obtain a first spectrum image.
In one possible implementation, in step 01, the camera parameters may include camera intrinsic parameters and distortion parameters. The camera parameters calibrated in advance may be parameters obtained by calibrating the image acquisition device by using an existing camera calibration method. For example, a method for calibrating the camera by using a calibration template based on a checkerboard, a dot-matrix diagram, or the like may be adopted, and how to calibrate the image acquisition device is not limited in the embodiment of the present disclosure.
In a possible implementation manner, based on an image obtained by shooting a calibration template in multiple angles by an image acquisition device, a corresponding relation between image plane coordinates of feature points in the shot image and world coordinates can be determined, and then camera parameters and distortion parameters can be determined according to the corresponding relation, so that camera parameters calibrated in advance are obtained.
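A sketch of the checkerboard calibration described above with OpenCV's standard routines; the board size and square size are assumptions:

```python
import cv2
import numpy as np

def calibrate_camera(images, board_size=(9, 6), square_size=0.025):
    """Estimate the intrinsic matrix K and distortion coefficients from
    checkerboard images (board_size = inner corners, square_size in m)."""
    # World coordinates of the board corners (on the Z = 0 plane).
    objp = np.zeros((board_size[0] * board_size[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:board_size[0], 0:board_size[1]].T.reshape(-1, 2)
    objp *= square_size
    obj_points, img_points = [], []
    for img in images:
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        found, corners = cv2.findChessboardCorners(gray, board_size)
        if found:
            obj_points.append(objp)
            img_points.append(corners)
    _, K, dist, _, _ = cv2.calibrateCamera(
        obj_points, img_points, gray.shape[::-1], None, None)
    return K, dist
```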
In one possible implementation, the plurality of sequential images may be image data within a target region acquired by the image acquisition device according to a preset trajectory. After the plurality of sequence images are acquired, corresponding sequence numbers can be added to the plurality of acquired sequence images so as to splice the plurality of sequence images in sequence.
In a possible implementation manner, in step 02, performing distortion removal on the plurality of sequence images according to the pre-calibrated camera parameters may mean undistorting the images according to the calibrated distortion parameters.
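Given K and dist from the calibration sketch above (assumed names), the distortion elimination of step 02 can then be, for example:

```python
import cv2

# Undistort every image in the sequence with the calibrated parameters.
undistorted_images = [cv2.undistort(img, K, dist) for img in sequence_images]
```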
In a possible implementation manner, in step 03, the camera pose corresponding to each undistorted image may be determined from the calibrated camera parameters based on a Simultaneous Localization and Mapping (SLAM) algorithm. Of course, other camera pose determination methods may also be adopted; for example, the Semi-direct Visual Odometry (SVO) algorithm may be used, which is not limited in this embodiment of the present disclosure. The camera pose comprises a translation matrix and a rotation matrix.
In one possible implementation manner, in step 04, the pixel coordinates of the feature point refer to two-dimensional coordinates of the feature point in an image plane coordinate system, and the world coordinates refer to three-dimensional coordinates of the feature point in a world coordinate system.
In a possible implementation manner, in step 04, the correspondence between pixel coordinates and world coordinates can be determined from the translation matrix and rotation matrix of the camera pose together with the camera intrinsic matrix of the camera parameters; the pixel coordinates of the feature points are then substituted into this correspondence to obtain the world coordinates corresponding to the feature points.
The correspondence may be established based on, for example, the pinhole imaging model

s · [u, v, 1]ᵀ = K · [R | t] · [X, Y, Z, 1]ᵀ

where s is an arbitrary scale factor, (u, v) is the pixel coordinate, (X, Y, Z) is the world coordinate, K denotes the camera intrinsic matrix, R denotes the rotation matrix, t denotes the translation matrix, and [·]ᵀ denotes matrix transposition.
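A sketch of inverting this model for step 04. A pixel coordinate alone fixes only a viewing ray, so the depth of the feature point along that ray (available from SLAM triangulation in this pipeline) is assumed known here:

```python
import numpy as np

def pixel_to_world(u, v, depth, K, R, t):
    """Invert s*[u, v, 1]^T = K*(R*[X, Y, Z]^T + t): back-project
    pixel (u, v) at camera-frame depth `depth` to world coordinates,
    given intrinsics K and camera pose (R, t) with t a 3-vector."""
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])
    p_cam = depth * ray                     # point in the camera frame
    return R.T @ (p_cam - t)                # world coordinates (X, Y, Z)
```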
In a possible implementation manner, in step 05, determining the stitching plane according to the world coordinates corresponding to the feature points in each undistorted image may be based on the RANdom SAmple Consensus (RANSAC) algorithm: the plane ax + by + cz + d = 0 that minimizes the distances

D = |a·x₀ + b·y₀ + c·z₀ + d| / √(a² + b² + c²)

from the feature points' world coordinates (x₀, y₀, z₀) is fitted as the target plane and taken as the stitching plane, where a, b, c and d are the parameters of the plane equation.
In a possible implementation manner, in step 05, the stitching plane may alternatively be determined by computing, from the world coordinates of the plurality of feature points, the average height of their coordinates in the direction perpendicular to the ground, and taking the plane fitted at that average height as the stitching plane.
It should be noted that, although the way of determining the splicing plane as above is described by taking the RANSAC algorithm and the average height as examples, those skilled in the art will understand that the present disclosure should not be limited thereto. In fact, the user can flexibly set the mode of fitting the splicing plane according to the actual application scene, and only needs to determine the splicing plane according to the world coordinate.
In a possible implementation manner, in step 06, the homography relation between each undistorted image and the stitching plane may be determined according to the formula

H = K · (R − t·nᵀ/d) · K⁻¹

where the stitching plane is expressed as nᵀ·P + d = 0, K denotes the camera intrinsic matrix, R denotes the rotation matrix, t denotes the translation matrix, nᵀ denotes the transpose of the plane normal vector n = (a, b, c), P denotes a point on the stitching plane, and H denotes the homography matrix of the homography relation.
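A sketch of the plane-induced homography above; the frame conventions for (R, t) and the plane parameters (n, d) follow the formula as stated and would need checking against the actual pipeline:

```python
import numpy as np

def plane_homography(K, R, t, n, d):
    """H = K (R - t n^T / d) K^{-1} for the plane n^T P + d = 0."""
    n = np.asarray(n, dtype=float).reshape(3, 1)
    t = np.asarray(t, dtype=float).reshape(3, 1)
    H = K @ (R - (t @ n.T) / d) @ np.linalg.inv(K)
    return H / H[2, 2]                    # normalize so H[2, 2] = 1
```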
In one possible implementation, in step 07, mapping each undistorted image to the stitching plane according to the homography relation may be done by determining the projective transformation between the image and the stitching plane from the homography matrix H and warping each undistorted image accordingly. The projective transformation can be expressed as

dst(x', y') = src(x, y), with (x', y', 1)ᵀ ∝ H · (x, y, 1)ᵀ

where x and y denote the pixel coordinates of the image before transformation, src denotes the image before transformation, and dst denotes the image after transformation.
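Applying the homography then reduces to a standard perspective warp, for example (src, H and the stitching-plane canvas size are assumed names):

```python
import cv2

# dst(x', y') = src(x, y) with (x', y', 1)^T ~ H (x, y, 1)^T
warped = cv2.warpPerspective(src, H, (plane_width, plane_height))
```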
In a possible implementation manner, in step 08, image fusion of the overlapping regions of the undistorted images mapped to the stitching plane may use an existing image fusion technique; for example, an Alpha fusion algorithm may be used to process the overlapping regions, and a weighted-average algorithm or a wavelet-transform algorithm may also be used.
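A sketch of Alpha fusion over an overlap region, assuming (H, W, C) images with boolean validity masks and a simple horizontal linear ramp as the per-pixel alpha (the ramp shape is an assumption; the patent only names the algorithm):

```python
import numpy as np

def alpha_blend(img_a, img_b, mask_a, mask_b):
    """Blend two warped (H, W, C) images over their overlap. Inside
    the overlap a horizontal linear ramp weights img_a down and
    img_b up; outside it, whichever image is valid wins."""
    overlap = mask_a & mask_b
    out = np.where(mask_a[..., None], img_a, img_b).astype(np.float64)
    cols = np.where(overlap.any(axis=0))[0]
    if cols.size:
        alpha = np.zeros(img_a.shape[:2])
        alpha[:, cols] = np.linspace(1.0, 0.0, cols.size)[None, :]
        out[overlap] = (alpha[overlap, None] * img_a[overlap]
                        + (1 - alpha[overlap, None]) * img_b[overlap])
    return out.astype(img_a.dtype)
```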
In the embodiment of the disclosure, image splicing is performed based on the projective transformation model, images are transformed to the same plane for splicing in the splicing process, and then the three-dimensional objects on the images corresponding to different spectral bands can be adjusted and aligned, so that more accurate hyperspectral images are obtained.
In a possible implementation manner, when a push-broom or whisk-broom imaging spectroscopy system is used to photograph a 3D object, the system may include a spectral camera with multiple spectral channels, a controller, and a push-broom platform, where the spectral camera includes a lens and an area-array detector. The method for stitching the images captured by such a system may include the following steps.
Firstly, shooting a target scene by using an imaging spectrum system, wherein the acquisition frame rate can be set according to the resolution, the height and the width of a flight band;
further, preprocessing the image, and carrying out distortion elimination processing on the image according to camera calibration parameters;
furthermore, each frame is localized by using an ORB-feature-based SLAM algorithm to obtain the pose of each frame and the key-point cloud;
furthermore, the stitching plane is obtained: in practice most objects in the scene lie on the same plane, so the points of the point cloud are approximately coplanar; a plane is fitted by the RANSAC algorithm, corresponds to the plane of the scene, and is used as the stitching plane;
furthermore, the homography transformation from a reference frame to the stitching plane can be obtained by matching points between the reference frame and the stitching plane, from which the homography transformation between each position of the unmanned aerial vehicle and the stitching plane can be obtained, completing image registration;
furthermore, once the projective transformation of each picture in the previous step has been computed, registration is complete; Alpha fusion stitching is then carried out, which can eliminate the influence of small flaws and dead pixels on the sensor surface;
furthermore, the stitched images of all the channels are superimposed to form a hyperspectral image; feature point matching is performed on the 3D-object portions of the different images, the image of one spectral channel is selected as the front view, and the images of the other channels are locally stretched with the front view as the reference, so that a correct hyperspectral image of the 3D object is obtained.
It should be noted that the imaging spectroscopy system in the embodiment of the present disclosure may be applicable to a push-broom type imaging spectrometer with a narrow field of view, and a carrier of the imaging spectrometer is not necessarily an unmanned aerial vehicle, and may also be a translation stage, or adopt a handheld manner; the image can be collected not only by adopting an imaging spectrometer, but also by adopting a common camera in a push-broom mode.
In the embodiment of the disclosure, the stitching plane obtained by optimizing over the key-point cloud makes the stitched image a front view and reduces projective deformation; a stitched image corresponding to a suitable spectral band is selected as the front view, and the 3D-object portions of the other spectral bands are locally adjusted, making the spectral information on the 3D object more accurate. This overcomes the shortcomings of existing image stitching methods, namely high requirements on Position and Orientation System (POS) accuracy, noticeable projective deformation of the stitched image, and visible stitching seams; it requires no high-accuracy Global Positioning System (GPS) or Inertial Measurement Unit (IMU) equipment, and can expand the application range of the imaging spectrometer.
Fig. 6 shows a block diagram of an image processing apparatus according to an embodiment of the present disclosure. As shown in fig. 6, the image processing apparatus includes:
a reference image determining module 101, configured to determine a reference image corresponding to a preset spectral channel from a first spectral image with multiple spectral channels, where the first spectral image includes a target to be adjusted;
an adjusting module 102, configured to adjust a target in a first image to be adjusted according to a first position of the target in the reference image, to obtain an adjusted second image, where the first image includes images corresponding to other spectral channels in the first spectral image except for the preset spectral channel;
and the processing module 103 is configured to determine a processed second spectral image according to the plurality of second images and the reference image.
In one possible implementation, as described above, the first spectral image having a plurality of spectral channels may be an image obtained by fusing a plurality of sequential images acquired by the image acquisition device. The image processing apparatus may further include: the image fusion module is used for fusing a plurality of sequence images acquired by the image acquisition equipment to obtain a first spectrum image with a plurality of spectrum channels; wherein the plurality of sequential images have a plurality of spectral channels.
In a possible implementation manner, the first position includes a first coordinate of the feature point of the target in the reference image, and the adjusting module 102 includes:
a first mapping relation determining unit, configured to determine a first mapping relation between a first coordinate of the feature point and a second coordinate of the feature point in the first image;
a second mapping relation determining unit configured to determine a second mapping relation between the reference image and the first image according to a first mapping relation of a plurality of feature points;
and the adjusting unit is used for adjusting the pixel points corresponding to the target in the first image according to the second mapping relation to obtain the second image.
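As a non-limiting sketch of the adjusting unit, the second mapping relation can be applied pixel-wise by backward resampling; here fx and fy stand in for the per-pixel mapping functions making up the second mapping relation, and the sampling direction (output pixel to source coordinate) and the toy identity mapping are assumptions of this sketch:

    import numpy as np
    import cv2

    def warp_first_to_reference_grid(first_img, fx, fy):
        """Resample first_img so the target aligns with the reference.

        For each output pixel (u, v) on the reference grid, fx(u, v) / fy(u, v)
        return the abscissa / ordinate to sample in the first image.
        """
        h, w = first_img.shape[:2]
        u, v = np.meshgrid(np.arange(w, dtype=np.float32),
                           np.arange(h, dtype=np.float32))
        map_x = fx(u, v).astype(np.float32)
        map_y = fy(u, v).astype(np.float32)
        return cv2.remap(first_img, map_x, map_y, cv2.INTER_LINEAR)

    # Toy identity mapping: the "adjusted" image equals the input.
    img = np.arange(64, dtype=np.uint8).reshape(8, 8)
    out = warp_first_to_reference_grid(img, lambda u, v: u, lambda u, v: v)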
In one possible implementation, the apparatus further includes:
a first area determining module, configured to determine a first area to be adjusted in the first image according to coordinates of a plurality of feature points of the target in the first image;
and the pixel point determining module is used for determining the pixel points in the first area as the pixel points corresponding to the target.
In one possible implementation manner, the first region determining module includes:
a second area determining unit, configured to determine, according to coordinates of a plurality of feature points of the target in the first image, a second area defined by the plurality of feature points, where the plurality of feature points are located on a boundary line of the second area;
and the expansion unit is used for performing region expansion on the second region according to the ratio of the second region to the first image to obtain an expanded first region.
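One possible reading of this region expansion is sketched below: the convex hull of the feature points is expanded about its centroid by an amount that grows with the area ratio; the tuning constant gain is invented for illustration and is not specified by the present disclosure:

    import numpy as np
    import cv2

    def expand_region(feature_pts, img_w, img_h, gain=0.5):
        """Expand the second region (polygon of feature points) into the first region."""
        hull = cv2.convexHull(np.asarray(feature_pts, dtype=np.float32))
        pts = hull.reshape(-1, 2)
        centroid = pts.mean(axis=0)
        ratio = cv2.contourArea(hull) / float(img_w * img_h)  # second region / image
        scale = 1.0 + gain * ratio
        expanded = centroid + scale * (pts - centroid)
        # Clamp so the expanded first region stays inside the first image.
        expanded[:, 0] = np.clip(expanded[:, 0], 0, img_w - 1)
        expanded[:, 1] = np.clip(expanded[:, 1], 0, img_h - 1)
        return expanded

    region = expand_region([(100, 100), (300, 110), (280, 320), (120, 300)],
                           img_w=640, img_h=480)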
In a possible implementation manner, the first mapping relationship determining unit includes:
a first function determining subunit, configured to determine, according to the first coordinate and the second coordinate, a first function in which the first coordinate is mapped to an abscissa of the second coordinate;
a second function determining subunit, configured to determine a second function in which the first coordinate is mapped to an ordinate of the second coordinate, where the first mapping relationship includes the first function and the second function.
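The claims obtain these two functions by interpolating and fitting B-spline surfaces; a minimal sketch with SciPy's SmoothBivariateSpline (the library and the synthetic correspondences are editorial assumptions) follows:

    import numpy as np
    from scipy.interpolate import SmoothBivariateSpline

    # Hypothetical feature-point correspondences: coordinates in the reference
    # image (x_ref, y_ref) and in the first image (x1, y1).
    rng = np.random.default_rng(0)
    x_ref = rng.uniform(0, 640, 50)
    y_ref = rng.uniform(0, 480, 50)
    x1 = x_ref + 2.0 + 0.01 * y_ref  # small synthetic channel misalignment
    y1 = y_ref - 1.5 + 0.01 * x_ref

    # First function: (x_ref, y_ref) -> abscissa of the second coordinate.
    f1 = SmoothBivariateSpline(x_ref, y_ref, x1, kx=3, ky=3)
    # Second function: (x_ref, y_ref) -> ordinate of the second coordinate.
    f2 = SmoothBivariateSpline(x_ref, y_ref, y1, kx=3, ky=3)

    x_pred, y_pred = f1.ev(320.0, 240.0), f2.ev(320.0, 240.0)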
In a possible implementation manner, the second mapping relationship determining unit includes:
a third function determining subunit, configured to determine, according to the first function and the second function, a third function between coordinates of pixel points of the first image and abscissa of pixel points of the reference image;
a fourth function determining subunit, configured to determine a fourth function between coordinates of pixel points of the first image and ordinate of pixel points of the reference image, where the second mapping relationship includes the third function and the fourth function.
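The disclosure describes the passage from the first and second functions to the third and fourth only as a function transformation; one simple possibility, an assumption of this sketch rather than the disclosed transformation, is to refit the splines with the two coordinate sets swapped so the mapping runs from first-image coordinates to the reference image:

    import numpy as np
    from scipy.interpolate import SmoothBivariateSpline

    rng = np.random.default_rng(1)
    x_ref = rng.uniform(0, 640, 50)
    y_ref = rng.uniform(0, 480, 50)
    x1 = x_ref + 2.0 + 0.01 * y_ref
    y1 = y_ref - 1.5 + 0.01 * x_ref

    # Third function: first-image coordinates -> abscissa in the reference image.
    g3 = SmoothBivariateSpline(x1, y1, x_ref, kx=3, ky=3)
    # Fourth function: first-image coordinates -> ordinate in the reference image.
    g4 = SmoothBivariateSpline(x1, y1, y_ref, kx=3, ky=3)

    u, v = 320.0, 240.0  # a first-image pixel coordinate
    ref_x, ref_y = g3.ev(u, v), g4.ev(u, v)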
In one possible implementation manner, the first spectral image includes an image obtained by stitching and fusing a plurality of sequential images; the plurality of sequence images comprise images acquired by a movable image acquisition device in the process of moving according to a preset track.
In one possible implementation manner, the apparatus further includes:
the acquisition module is used for acquiring a plurality of sequence images acquired by the movable image acquisition equipment; the image acquisition equipment has camera parameters calibrated in advance;
the distortion elimination module is used for eliminating distortion of the plurality of sequence images according to the camera parameters calibrated in advance;
the pose determining module is used for determining camera poses corresponding to the images after distortion elimination according to the camera parameters calibrated in advance;
the world coordinate determination module is used for determining world coordinates corresponding to the feature points in the image subjected to distortion elimination according to the camera pose, the camera parameters and the pixel coordinates of the feature points in the image subjected to distortion elimination;
the splicing plane determining module is used for determining a splicing plane according to the world coordinates;
the homography relation determining module is used for determining the homography relation from each image subjected to distortion elimination processing to the splicing plane according to the camera pose and the camera parameters;
the mapping module is used for mapping the images after the distortion elimination processing to the splicing plane according to the homography relation;
and the splicing module is used for carrying out image fusion on the overlapped area of the images after the distortion elimination processing mapped to the splicing plane so as to obtain the first spectrum image.
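For the splicing plane determining module, one standard choice (an assumption of this sketch; the disclosure states only that the plane is determined from the world coordinates of the feature points) is a least-squares plane fitted to the key-point cloud via SVD:

    import numpy as np

    def fit_splicing_plane(world_pts):
        """Least-squares plane through the key-point cloud.

        Returns a point on the plane (the centroid) and the unit normal; the
        normal is the direction of least variance of the centered cloud.
        """
        pts = np.asarray(world_pts, dtype=np.float64)
        centroid = pts.mean(axis=0)
        _, _, vt = np.linalg.svd(pts - centroid)
        normal = vt[-1]
        return centroid, normal / np.linalg.norm(normal)

    cloud = np.array([[0.0, 0.0, 10.0], [1.0, 0.0, 10.1], [0.0, 1.0, 9.9],
                      [1.0, 1.0, 10.0], [0.5, 0.5, 10.05]])
    p0, n = fit_splicing_plane(cloud)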
In the embodiment of the disclosure, a reference image corresponding to a preset spectral channel is determined from a first spectral image with a plurality of spectral channels, the target in each first image to be adjusted is adjusted according to the first position of the target in the reference image to obtain an adjusted second image, and the processed second spectral image is determined according to the plurality of second images and the reference image. In this way, the position of the target in the first images corresponding to the other spectral channels can be adjusted against the reference image, the targets in the images corresponding to different spectral channels can be aligned, and the accuracy of the position of the target in the spectral image can thus be improved.
Fig. 7 is a block diagram of an electronic device 800 in accordance with an embodiment of the disclosure. For example, the electronic device 800 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, or a similar terminal.
Referring to fig. 7, electronic device 800 may include one or more of the following components: processing component 802, memory 804, power component 806, multimedia component 808, audio component 810, input/output (I/O) interface 812, sensor component 814, and communication component 816.
The processing component 802 generally controls overall operation of the electronic device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 802 may include one or more processors 820 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 802 can include one or more modules that facilitate interaction between the processing component 802 and other components. For example, the processing component 802 can include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operations at the electronic device 800. Examples of such data include instructions for any application or method operating on the electronic device 800, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 804 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The power supply component 806 provides power to the various components of the electronic device 800. The power components 806 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the electronic device 800.
The multimedia component 808 includes a screen that provides an output interface between the electronic device 800 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensors may not only sense the boundary of a touch or swipe action, but also detect the duration and pressure associated with the touch or swipe operation. In some embodiments, the multimedia component 808 includes a front camera and/or a rear camera. The front camera and/or the rear camera may receive external multimedia data when the electronic device 800 is in an operation mode, such as a shooting mode or a video mode. Each of the front camera and the rear camera may be a fixed optical lens system or have focusing and optical zoom capability.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a Microphone (MIC) configured to receive external audio signals when the electronic device 800 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 804 or transmitted via the communication component 816. In some embodiments, audio component 810 also includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 814 includes one or more sensors for providing status assessments of various aspects of the electronic device 800. For example, the sensor assembly 814 may detect an open/closed state of the electronic device 800 and the relative positioning of components, such as the display and keypad of the electronic device 800; the sensor assembly 814 may also detect a change in the position of the electronic device 800 or of a component of the electronic device 800, the presence or absence of user contact with the electronic device 800, the orientation or acceleration/deceleration of the electronic device 800, and a change in the temperature of the electronic device 800. The sensor assembly 814 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor assembly 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate wired or wireless communication between the electronic device 800 and other devices. The electronic device 800 may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In an exemplary embodiment, the communication component 816 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 816 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the electronic device 800 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer-readable storage medium, such as the memory 804, is also provided that includes computer program instructions executable by the processor 820 of the electronic device 800 to perform the above-described methods.
Fig. 8 shows a block diagram of an image processing apparatus 1900 according to an embodiment of the present disclosure. For example, the apparatus 1900 may be provided as a server. Referring to fig. 8, the image processing device 1900 includes a processing component 1922, which further includes one or more processors, and memory resources represented by a memory 1932 for storing instructions executable by the processing component 1922, such as application programs. The application programs stored in the memory 1932 may include one or more modules, each corresponding to a set of instructions. Further, the processing component 1922 is configured to execute the instructions to perform the above-described methods.
The device 1900 may also include a power component 1926 configured to perform power management of the device 1900, a wired or wireless network interface 1950 configured to connect the device 1900 to a network, and an input/output (I/O) interface 1958. The device 1900 may operate based on an operating system stored in the memory 1932, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
In an exemplary embodiment, a non-transitory computer readable storage medium, such as the memory 1932, is also provided that includes computer program instructions executable by the processing component 1922 of the apparatus 1900 to perform the above-described methods.
The present disclosure may be systems, methods, and/or computer program products. The computer program product may include a computer-readable storage medium having computer-readable program instructions embodied thereon for causing a processor to implement various aspects of the present disclosure.
The computer readable storage medium may be a tangible device that can retain and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium include the following: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a Static Random Access Memory (SRAM), a portable compact disc read-only memory (CD-ROM), a Digital Versatile Disc (DVD), a memory stick, a floppy disk, a mechanical encoding device such as punch cards or raised structures in a groove having instructions stored thereon, and any suitable combination of the foregoing. A computer-readable storage medium as used herein is not to be construed as transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission medium (e.g., light pulses through a fiber optic cable), or electrical signals transmitted through electrical wires.
The computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to a respective computing/processing device, or to an external computer or external storage device via a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. The network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in the respective computing/processing device.
The computer program instructions for carrying out operations of the present disclosure may be assembler instructions, Instruction Set Architecture (ISA) instructions, machine instructions, machine-dependent instructions, microcode, firmware instructions, state-setting data, or source code or object code written in any combination of one or more programming languages, including an object-oriented programming language such as Smalltalk or C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, electronic circuitry, such as a programmable logic circuit, a Field Programmable Gate Array (FPGA), or a Programmable Logic Array (PLA), can execute the computer-readable program instructions by utilizing state information of the computer-readable program instructions to personalize the electronic circuitry, thereby implementing aspects of the present disclosure.
Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium storing the instructions comprises an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Having described embodiments of the present disclosure, the foregoing description is intended to be exemplary, not exhaustive and not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, the practical application, or the improvement over technologies found in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (8)

1. An image processing method, comprising:
determining a reference image corresponding to a preset spectral channel from a first spectral image with a plurality of spectral channels, wherein the first spectral image comprises a target to be adjusted;
adjusting the target in the first image to be adjusted according to the first position of the target in the reference image to obtain an adjusted second image, wherein the first image comprises images corresponding to other spectral channels except the preset spectral channel in the first spectral image;
determining a processed second spectral image according to the plurality of second images and the reference image;
the first position comprises a first coordinate of a feature point of the object in the reference image,
the adjusting the target in the first image to be adjusted according to the first position of the target in the reference image to obtain an adjusted second image includes:
determining a first mapping relationship between a first coordinate of the feature point and a second coordinate of the feature point in the first image, the first mapping relationship comprising a first function of the first coordinate mapped to an abscissa of the second coordinate and a second function of the first coordinate mapped to an ordinate of the second coordinate;
obtaining the first function by interpolating and fitting a B-spline surface from the first coordinate to the abscissa of the second coordinate;
obtaining the second function by interpolating and fitting a B-spline surface from the first coordinate to the ordinate of the second coordinate;
determining a second mapping relation between the reference image and the first image according to a first mapping relation of a plurality of feature points, wherein the second mapping relation is obtained by performing function transformation on the first function and the second function;
adjusting pixel points corresponding to the target in the first image according to the second mapping relation to obtain a second image;
wherein the method further comprises:
determining a first area to be adjusted in the first image according to coordinates of a plurality of feature points of the target in the first image;
determining pixel points in the first region as pixel points corresponding to the target;
wherein the determining a first region to be adjusted in the first image according to the coordinates of the plurality of feature points of the target in the first image includes:
determining, according to the coordinates of a plurality of feature points of the target in the first image, a second area defined by the plurality of feature points, wherein the plurality of feature points are located on the boundary line of the second area;
and according to the ratio of the second area to the first image, carrying out area expansion on the second area to obtain an expanded first area.
2. The method of claim 1, wherein determining a first mapping relationship between a first coordinate of the feature point and a second coordinate of the feature point in the first image comprises:
determining a first function of the first coordinate mapped to the abscissa of the second coordinate from the first coordinate and the second coordinate,
and determining a second function of the first coordinate mapped to the ordinate of the second coordinate,
the first mapping relationship includes the first function and the second function.
3. The method of claim 2, wherein determining a second mapping relationship between the reference image and the first image from the first mapping relationship of the plurality of feature points comprises:
determining a third function between the coordinates of the pixel points of the first image and the abscissa of the pixel points of the reference image according to the first function and the second function,
and determining a fourth function between the coordinates of the pixels of the first image and the ordinate of the pixels of the reference image,
the second mapping relationship includes the third function and the fourth function.
4. The method according to any one of claims 1 to 3, wherein the first spectral image comprises an image resulting from stitching and fusing a plurality of sequential images; the plurality of sequential images are acquired by a movable image acquisition device during movement.
5. The method according to any one of claims 1-3, further comprising:
acquiring a plurality of sequence images acquired by movable image acquisition equipment; the image acquisition equipment has camera parameters calibrated in advance;
according to the camera parameters calibrated in advance, distortion elimination processing is carried out on the plurality of sequence images;
determining the camera pose corresponding to each image subjected to distortion elimination according to the pre-calibrated camera parameters;
determining world coordinates corresponding to the feature points in each image subjected to distortion removal according to the camera pose, the camera parameters and the pixel coordinates of the feature points in each image subjected to distortion removal;
determining a splicing plane according to the world coordinates corresponding to the feature points in the image subjected to distortion elimination;
determining the homography relation between each image subjected to distortion elimination processing and the splicing plane according to the camera pose and the camera parameters;
mapping the images after the distortion elimination processing to the splicing plane according to the homography relation;
and carrying out image fusion on the overlapped area of the images after the distortion elimination processing mapped to the splicing plane so as to obtain the first spectrum image.
6. An image processing apparatus characterized by comprising:
the reference image determining module is used for determining a reference image corresponding to a preset spectral channel from a first spectral image with a plurality of spectral channels, wherein the first spectral image comprises a target to be adjusted;
the adjusting module is used for adjusting the target in the first image to be adjusted according to the first position of the target in the reference image to obtain an adjusted second image, wherein the first image comprises images corresponding to other spectral channels except the preset spectral channel in the first spectral image;
the processing module is used for determining a processed second spectrum image according to the plurality of second images and the reference image;
the first location comprises a first coordinate of a feature point of the target in the reference image, and the adjustment module comprises:
a first mapping relation determining unit, configured to determine a first mapping relation between a first coordinate of the feature point and a second coordinate of the feature point in the first image, where the first mapping relation includes a first function in which the first coordinate is mapped to an abscissa of the second coordinate and a second function in which the first coordinate is mapped to an ordinate of the second coordinate, obtain the first function by interpolating and fitting a B-spline surface from the first coordinate to the abscissa of the second coordinate, and obtain the second function by interpolating and fitting a B-spline surface from the first coordinate to the ordinate of the second coordinate;
a second mapping relationship determining unit, configured to determine, according to a first mapping relationship of a plurality of feature points, a second mapping relationship between the reference image and the first image, where the second mapping relationship is obtained by performing function transformation on the first function and the second function;
the adjusting unit is used for adjusting the pixel points corresponding to the target in the first image according to the second mapping relation to obtain a second image;
wherein the apparatus further comprises:
a first area determining module, configured to determine a first area to be adjusted in the first image according to coordinates of a plurality of feature points of the target in the first image;
the pixel point determining module is used for determining the pixel points in the first area as pixel points corresponding to the target;
wherein the first region determining module comprises:
a second area determining unit, configured to determine, according to coordinates of a plurality of feature points of the target in the first image, a second area defined by the plurality of feature points, where the plurality of feature points are located on a boundary line of the second area;
and the expansion unit is used for performing region expansion on the second region according to the ratio of the second region to the first image to obtain an expanded first region.
7. An image processing apparatus characterized by comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to: performing the method of any one of claims 1 to 5.
8. A non-transitory computer readable storage medium having computer program instructions stored thereon, wherein the computer program instructions, when executed by a processor, implement the method of any of claims 1 to 5.
CN202110180839.8A 2021-02-10 2021-02-10 Image processing method, device and readable storage medium Active CN112529781B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110180839.8A CN112529781B (en) 2021-02-10 2021-02-10 Image processing method, device and readable storage medium

Publications (2)

Publication Number Publication Date
CN112529781A CN112529781A (en) 2021-03-19
CN112529781B (en) 2021-06-22

Family

ID=74975731

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110180839.8A Active CN112529781B (en) 2021-02-10 2021-02-10 Image processing method, device and readable storage medium

Country Status (1)

Country Link
CN (1) CN112529781B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023149963A1 (en) 2022-02-01 2023-08-10 Landscan Llc Systems and methods for multispectral landscape mapping

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8897571B1 (en) * 2011-03-31 2014-11-25 Raytheon Company Detection of targets from hyperspectral imagery

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101751673A (en) * 2009-12-24 2010-06-23 中国资源卫星应用中心 Multi-spectral image registration detection and correction method based on phase coincident characteristic
CN103390272A (en) * 2013-07-16 2013-11-13 西安应用光学研究所 Method for achieving registration and fusion of multi-spectral pseudo color images
CN110650291A (en) * 2019-10-23 2020-01-03 Oppo广东移动通信有限公司 Target focus tracking method and device, electronic equipment and computer readable storage medium
CN111798373A (en) * 2020-06-11 2020-10-20 西安视野慧图智能科技有限公司 Rapid unmanned aerial vehicle image stitching method based on local plane hypothesis and six-degree-of-freedom pose optimization
CN111681271A (en) * 2020-08-11 2020-09-18 湖南大学 Multichannel multispectral camera registration method, system and medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on Preprocessing and Stitching Technology for Multispectral Images from UAV Low-Altitude Remote Sensing; Li Yijian; China Master's Theses Full-text Database, Agricultural Science and Technology; 2020-02-15 (No. 2); pp. 25-29, 53-81 *


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant