CN111598777A - Sky cloud image processing method, computer device and readable storage medium - Google Patents


Info

Publication number
CN111598777A
Authority
CN
China
Prior art keywords
sky
image
value
pixel point
pixel
Prior art date
Legal status
Pending
Application number
CN202010402326.2A
Other languages
Chinese (zh)
Inventor
周康明
王庆峰
Current Assignee
Shanghai Eye Control Technology Co Ltd
Original Assignee
Shanghai Eye Control Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Eye Control Technology Co Ltd filed Critical Shanghai Eye Control Technology Co Ltd
Priority to CN202010402326.2A
Publication of CN111598777A
Legal status: Pending

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038 Image mosaicing, e.g. composing plane images from plane sub-images
    • G06T3/08 Projecting images onto non-planar surfaces, e.g. geodetic screens
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T5/80 Geometric correction
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30181 Earth observation
    • G06T2207/30192 Weather; Meteorology

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The application relates to a sky cloud image processing method, a computer device and a readable storage medium. The method acquires a plurality of sky cloud images, each a two-dimensional image captured by a shooting device from a different angle; performs image stitching on the plurality of sky cloud images to obtain a sky panoramic image, which is a spherical image; and performs de-distortion processing on the sky panoramic image to obtain a de-distorted sky image. Through the stitching conversion, the resulting sky panoramic image better fits the actual all-sky scene, and the de-distortion processing makes the resulting sky image more accurate. Using the de-distorted sky image for subsequent cloud-cover calculation and cloud-type estimation can greatly improve the accuracy of the estimation results, and in turn the accuracy of future weather prediction.

Description

Sky cloud image processing method, computer device and readable storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a method, a computer device, and a readable storage medium for processing a sky cloud image.
Background
Cloud observation primarily concerns cloud cover, i.e. the fraction of the sky covered by cloud, together with cloud direction, cloud speed, optical phenomena, and so on. Accurately classifying cloud types and estimating the cloud-cover ratio play an important role in analyzing future weather changes and obtaining better economic benefits.
With the application and development of deep learning, cloud-type classification and cloud-cover estimation have been studied using deep learning techniques. In the conventional technology, the captured sky cloud image is given simple preprocessing (such as filtering and denoising) and then input into a deep-learning-based network model to obtain cloud-type classification and cloud-cover estimation results.
However, performing cloud-type classification and cloud-cover estimation on such simply preprocessed sky cloud images suffers from low accuracy.
Disclosure of Invention
Based on this, it is necessary to provide a sky cloud image processing method, a computer device and a readable storage medium to address the low accuracy of cloud-type classification and cloud-cover estimation in the conventional technology.
A sky cloud image processing method, the method comprising:
acquiring a plurality of sky cloud images; each sky cloud image is a two-dimensional image captured by a shooting device from a different angle;
performing image stitching on the plurality of sky cloud images to obtain a sky panoramic image; the sky panoramic image is a spherical image;
and performing de-distortion processing on the sky panoramic image to obtain a de-distorted sky image.
A sky cloud image processing device, the device comprising:
an acquisition module configured to acquire a plurality of sky cloud images; each sky cloud image is a two-dimensional image captured by a shooting device from a different angle;
an image stitching module configured to perform image stitching on the plurality of sky cloud images to obtain a sky panoramic image; the sky panoramic image is a spherical image;
and a de-distortion module configured to perform de-distortion processing on the sky panoramic image to obtain a de-distorted sky image.
A computer device comprising a memory and a processor, the memory storing a computer program that, when executed by the processor, performs the steps of:
acquiring a plurality of sky cloud images; each sky cloud image is a two-dimensional image captured by a shooting device from a different angle;
performing image stitching on the plurality of sky cloud images to obtain a sky panoramic image; the sky panoramic image is a spherical image;
and performing de-distortion processing on the sky panoramic image to obtain a de-distorted sky image.
A computer-readable storage medium on which is stored a computer program that, when executed by a processor, carries out the steps of:
acquiring a plurality of sky cloud images; each sky cloud image is a two-dimensional image captured by a shooting device from a different angle;
performing image stitching on the plurality of sky cloud images to obtain a sky panoramic image; the sky panoramic image is a spherical image;
and performing de-distortion processing on the sky panoramic image to obtain a de-distorted sky image.
The sky cloud image processing method and device, the computer device and the readable storage medium acquire a plurality of sky cloud images, each a two-dimensional image captured by a shooting device from a different angle; perform image stitching on the plurality of sky cloud images to obtain a sky panoramic image, which is a spherical image; and perform de-distortion processing on the sky panoramic image to obtain a de-distorted sky image. Through the stitching conversion, the resulting sky panoramic image better fits the actual all-sky scene, and the de-distortion processing makes the resulting sky image more accurate. Performing subsequent cloud-cover calculation and cloud-type estimation with the de-distorted sky image can greatly improve the accuracy of the estimation results, and in turn the accuracy of future weather prediction.
Drawings
FIG. 1 is a diagram of the internal structure of a computer device according to an embodiment;
FIG. 2 is a flowchart of a sky cloud image processing method in one embodiment;
FIG. 3 is a flowchart of a sky cloud image processing method in another embodiment;
FIG. 4 is a flowchart of a sky cloud image processing method in yet another embodiment;
FIG. 5 is a flowchart of a sky cloud image processing method in yet another embodiment;
FIG. 6 is a flowchart of a sky cloud image processing method in yet another embodiment;
FIG. 7 is a flowchart of a sky cloud image processing method in yet another embodiment;
FIG. 8 is a flowchart of a sky cloud image processing method in yet another embodiment;
FIG. 9 is a block diagram of a sky cloud image processing device in an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
The method for processing the sky cloud image provided by the embodiment of the application can be applied to the computer device shown in fig. 1. The computer device comprises a processor and a memory connected by a system bus, wherein a computer program is stored in the memory, and the steps of the method embodiments described below can be executed when the processor executes the computer program. Optionally, the computer device may further comprise a communication interface, a display screen and an input means. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a nonvolatile storage medium storing an operating system and a computer program, and an internal memory. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The communication interface of the computer device is used for connecting and communicating with an external terminal through a network. Optionally, the computer device may be a Personal Computer (PC), a personal digital assistant, other terminal devices such as a tablet computer (PAD), a mobile phone, and the like, and may also be a cloud or a remote server, where a specific form of the computer device is not limited in this embodiment of the application.
In an embodiment, as shown in fig. 2, a sky cloud image processing method is provided. This embodiment relates to the specific process of stitching and de-distorting a plurality of sky cloud images captured by a shooting device to obtain a de-distorted sky image. Taking the method as applied to the computer device in fig. 1 as an example, the method includes the following steps:
S101, acquiring a plurality of sky cloud images; each sky cloud image is a two-dimensional image captured by a shooting device from a different angle.
The shooting device may be a camera or a video camera. Optionally, the shooting device may be a rotary camera mounted on a rotating platform at an included angle ang_cam of 45 degrees to the platform. The device sweeps a full 360 degrees, shooting uniformly n times, with focal length f and a built-in photosensitive sensor of size cam_w × cam_h. By rotating to different angles, the shooting device captures a plurality of two-dimensional images, i.e. a plurality of sky cloud images, and then transmits them to the computer device.
S102, performing image stitching on the plurality of sky cloud images to obtain a sky panoramic image; the sky panoramic image is a spherical image.
Specifically, since the sky cloud images are two-dimensional while, in an actual scene, the panoramic sky is approximately spherical, the computer device performs image stitching on the plurality of sky cloud images: adjacent sky cloud images are stitched in shooting order, and the last sky cloud image is stitched to the first, yielding a sky panoramic image that is a spherical image. When two-dimensional images are stitched into a spherical image they must be bent, so the positions or values of pixel points change accordingly. Optionally, the computer device may apply an image stitching technique to the plurality of sky cloud images to obtain the sky panoramic image.
Optionally, the computer device may instead map the plurality of sky cloud images into spherical images according to the shooting parameters of the shooting device, and determine the value of each pixel point in the sky panoramic image from the values of the pixel points in each spherical image, thereby obtaining the sky panoramic image. When mapping a sky image to a spherical image, the target position to which each pixel point of the sky image maps must be determined. Optionally, the target position (x_src, y_src) can be obtained by the computer device as follows:
(x_src, y_src) = (W/2.0 - delta_x, H/2.0 - delta_y),
delta_x = |op * sin(γ1) * sin(β1)|,
[equation image BDA0002489966850000051 not reproduced; by context it gives delta_y]
[equation image BDA0002489966850000052 not reproduced; by context it gives op]
γ1 = i/f_pix, β1 = j/f_pix,
i ∈ row_panoramic, j ∈ col_panoramic/2,
row_panoramic = f_pix * π, col_panoramic = f_pix * 2π,
f_pix = cam_w/2/(tan(β/360.0 * π)), β = arctan(cam_w/2.0/f)/π * 180 * 2;
where f_pix is the focal length in pixel units; col_panoramic and row_panoramic are the width and height of the sky panoramic image in pixels; β1 ∈ [0, 2π] is the angle between the tangent at the target position and the horizontal line; γ1 ∈ [0, π] is the angle between the tangent at the target position and the vertical direction; op is the distance from the target position to the sphere center; and delta_x and delta_y are the deviations of the target position in the horizontal and vertical directions, respectively. After the target position of each pixel point of the sky image in the spherical image has been determined by the above process, the value of each pixel point is assigned to the point at the target position, giving the corresponding spherical image. The computer device can then take the value pix_now at (x_src, y_src) from each spherical image and assign it to the corresponding pixel point of the sky panoramic image via an interpolation algorithm, thereby obtaining the final sky panoramic image. Optionally, the interpolation algorithm may be bilinear interpolation or another type of interpolation.
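As a minimal sketch of the relations above (our own illustrative code, not the patent's; function and variable names are ours), the pixel focal length f_pix and the panorama dimensions can be computed as:

```python
import math

def panorama_geometry(cam_w, f):
    """Pixel focal length and panorama size from the sensor width
    cam_w (in pixels) and the focal length f, per the relations above."""
    # beta: horizontal field of view of the camera, in degrees
    beta = math.atan(cam_w / 2.0 / f) / math.pi * 180 * 2
    # f_pix: focal length expressed in pixel units
    f_pix = cam_w / 2 / math.tan(beta / 360.0 * math.pi)
    # the spherical panorama spans pi x 2*pi radians at f_pix pixels/radian
    row_panoramic = f_pix * math.pi
    col_panoramic = f_pix * 2 * math.pi
    return f_pix, row_panoramic, col_panoramic

def pixel_angles(i, j, f_pix):
    """Angles (gamma_1, beta_1) for panorama pixel (i, j)."""
    return i / f_pix, j / f_pix
```

Note that algebraically the two relations invert each other, so f_pix reduces to f when cam_w is given in pixels; the formulas effectively convert the focal length into pixel units.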
S103, performing de-distortion processing on the sky panoramic image to obtain a de-distorted sky image.
Specifically, the resulting sky panoramic image may be distorted because the shooting device is susceptible to shooting noise while capturing the sky cloud images and because stitching introduces errors. To allow accurate subsequent cloud-cover calculation and cloud-type estimation, the computer device performs de-distortion on the sky panoramic image. The computer device may project the sky panoramic image onto a target area using an image projection technique, taking the relative position of each pixel point and the sphere center into account during projection; the projected planar image is the de-distorted sky image.
In the sky cloud image processing method provided by this embodiment, the computer device performs image stitching on a plurality of acquired sky cloud images to obtain a sky panoramic image, where each sky cloud image is a two-dimensional image obtained by rotating the same shooting device to different angles, and the sky panoramic image is a spherical image; through the stitching conversion, the sky panoramic image better fits the actual all-sky scene. The sky panoramic image is then de-distorted to obtain a de-distorted sky image, making the resulting sky image more accurate. Performing subsequent cloud-cover calculation and cloud-type estimation with the de-distorted sky image can greatly improve the accuracy of the estimation results, and in turn the accuracy of future weather prediction.
One embodiment concerns the specific process by which the computer device de-distorts the sky panoramic image to obtain a de-distorted sky image. Optionally, as shown in fig. 3, S103 includes:
S201, determining, from the sky panoramic image and according to the shooting parameters of the shooting device, the second pixel point corresponding to each first pixel point in the target image; the target image is a two-dimensional image obtained by projecting the sky panoramic image.
Specifically, the shooting parameters may include, but are not limited to, the focal length f and the built-in photosensitive sensor size cam_w × cam_h from the above embodiment. Through the relations β = arctan(cam_w/2.0/f)/π * 180 * 2 and f_pix = cam_w/2/(tan(β/360.0 * π)), the computer device can determine the focal length f_pix in pixel units. It then determines a square area with side length 2 * f_pix, into which the sky panoramic image is projected to form a two-dimensional circular target image whose diameter equals the side length of the square area and whose center point coincides with the square's center. Each first pixel point in the target image has a corresponding second pixel point in the sky panoramic image.
Optionally, the computer device may apply tangent and arctangent operations to the shooting parameters to determine the size and center point of the target image: it determines f_pix by the relations above, takes 2 * f_pix as the diameter (i.e. the size), and records the center point as (x_c, y_c). Then, for each first pixel point of the target image, the computer device determines the relative position of the first pixel point and the center point through arccosine and arctangent operations according to the size of the target image and the position of the center point. Optionally, the distance from any first pixel point (x, y) to the center point can be determined by
r = sqrt((x - x_c)^2 + (y - y_c)^2),
the angle between the line joining the first pixel point to the center point and the horizontal line by θ = arccos((x - x_c)/r), θ ∈ [0, 2π], and the angle to the vertical direction by β = arctan(r/(f_pix - r)), β ∈ [0, π]; from these, the relative position of each first pixel point and the center point is obtained. Finally, based on this relative position and the shooting parameter f_pix, the position of the second pixel point in the sky panoramic image corresponding to the first pixel point can be determined, optionally by the relation (x_panoramic, y_panoramic) = (θ * f_pix, β * f_pix).
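The lookup from a target-image pixel back to the panorama can be sketched as follows (our own illustrative code, not the patent's; arccos alone only covers [0, π], so the quadrant handling below that extends θ to [0, 2π) is our assumption):

```python
import math

def undistort_lookup(x, y, x_c, y_c, f_pix):
    """Map target-image pixel (x, y) to its source position
    (x_panoramic, y_panoramic) in the sky panorama, following the
    r, theta, beta relations above.  Returns None for the centre
    point and for pixels outside the circular target image."""
    r = math.sqrt((x - x_c) ** 2 + (y - y_c) ** 2)  # distance to centre
    if r == 0 or r >= f_pix:
        return None
    theta = math.acos((x - x_c) / r)   # angle to the horizontal line
    if y < y_c:                        # extend theta below the axis
        theta = 2 * math.pi - theta
    beta = math.atan(r / (f_pix - r))  # angle to the vertical direction
    return theta * f_pix, beta * f_pix
```

In practice every target pixel would be looked up this way and the panorama sampled at the returned position with bilinear interpolation, as the text describes.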
S202, determining the value of each first pixel point according to the value of the second pixel point.
S203, generating the de-distorted sky image from the value of each first pixel point in the target image.
Specifically, after the position of the second pixel point corresponding to each first pixel point has been obtained in the sky panoramic image, the pixel value at that position can also be obtained, and the computer device then determines the value of each first pixel point from the value of its second pixel point, optionally through an interpolation algorithm; the interpolation algorithm may be bilinear interpolation or another type of interpolation. The computer device then generates the de-distorted sky image from the values of the first pixel points of the target image, which may be understood as filling each first pixel point's value into its pixel position; the resulting image is the de-distorted sky image.
In the sky cloud image processing method provided by this embodiment, the computer device first determines, according to the shooting parameters of the shooting device, the second pixel point in the sky panoramic image corresponding to each first pixel point of the target image, then determines the value of each first pixel point from the value of its second pixel point, and finally generates the de-distorted sky image from the values of the first pixel points. In this embodiment, the sky panoramic image is de-distorted through the correspondence between the target image and the sky panoramic image, which improves the accuracy of the de-distorted sky image and provides an accurate data basis for the subsequent processing.
In an actual scene, a sky cloud image captured by a shooting device usually contains objects near the bottom such as ground buildings, mountains and oceans. Their influence on the cloud image must be removed before cloud-cover calculation and cloud-type estimation, and they are difficult to distinguish and remove in the presence of fog. This embodiment therefore further provides a specific process for defogging the sky cloud image. Optionally, as shown in fig. 4, the method may further include:
s301, according to values of pixel points in the sky panoramic image, extracting characteristics of a sky area of the sky panoramic image to obtain a sky area mask.
Specifically, the computer device may analyze a distribution of pixel values according to values of each pixel in the sky panoramic image to perform feature extraction of a sky region on the sky panoramic image, divide the pixels belonging to the sky feature into the sky region, set the obtained pixel values of the sky region to 1 in a unified manner, and set the obtained pixel values of the rest of the sky region to 0 in a unified manner, thereby obtaining a sky region mask. Optionally, the computer device may further segment the sky panorama image by using an image segmentation network to obtain a sky area mask. The size of the sky area mask is the same as that of the sky panoramic image, namely, the positions of all pixel points are in one-to-one correspondence.
S302, defogging the sky panoramic image to obtain a defogged sky panoramic image.
Specifically, the computer device may defog the sky panoramic image according to the atmospheric transmittance and the atmospheric light value at the time the shooting device captured the sky cloud images. Optionally, the computer device may acquire the atmospheric transmittance and atmospheric light value at the shooting time from a meteorological system, and then obtain the defogged sky panoramic image through the relation
F(x) = (I(x) - A) / L(x) + A,
where I(x) is the original sky panoramic image, A is the atmospheric light value, L(x) is the atmospheric transmittance, and F(x) is the defogged sky panoramic image.
S303, obtaining the defogged sky image based on the pixel-point correspondence between the sky region mask and the defogged sky panoramic image.
Specifically, because the sky region mask has the same size as the sky panoramic image, and therefore also the same size as the defogged sky panoramic image, their pixel positions correspond one to one. The sky region of the mask thus selects, from the defogged sky panoramic image, the corresponding sky-region sub-image, i.e. the defogged sky image.
Correspondingly, after obtaining the defogged sky image, the computer device performs de-distortion on it to obtain a defogged and de-distorted sky image.
In the sky cloud image processing method provided by this embodiment, the computer device may further extract sky-region features from the sky panoramic image according to the values of its pixel points to obtain a sky region mask; defog the sky panoramic image to obtain a defogged sky panoramic image; and finally obtain the defogged sky image based on the pixel-point correspondence between the sky region mask and the defogged sky panoramic image. Fog can thus be removed from the sky image, making the clouds clearer and further improving the accuracy of subsequent cloud-cover calculation and cloud-type estimation.
In one embodiment, the computer device may also extract the sky region mask based on a dark channel method and on a region growing method; note that extraction may be performed with the dark channel method alone, or with the dark channel method combined with the region growing method. The two approaches are described below:
Fig. 5 shows the specific process of extracting the sky region mask based on the dark channel method in an embodiment. Optionally, S301 may include:
S401, for each pixel point in the sky panoramic image, taking the minimum of its RGB values as the target pixel value, and generating a dark channel image.
Specifically, each pixel point of the sky panoramic image contains the three values R, G and B; the computer device selects the minimum of the three as the target pixel value and assembles the target pixel values into a target image, so that every pixel value of the target image is the minimum of the corresponding RGB values. This target image is called the dark channel image.
Optionally, after obtaining the dark channel image, the computer device may further filter it with a filtering kernel of preset size and apply Gaussian smoothing to the filtered image to obtain an optimized dark channel image.
S402, clustering the dark channel image to obtain the sky region mask.
Specifically, the computer device may cluster the dark channel image, for example with the k-means method, with the number of cluster categories set to 2, representing the sky class and the background class. After clustering, the pixel points of the sky class are taken as the pixel points of the sky region mask and their values are set to 1, yielding the sky region mask.
Fig. 6 is a detailed process of extracting a sky area mask based on a dark channel method in an embodiment, where optionally, the step S301 may include:
s501, aiming at each pixel point in the sky panoramic image, taking the minimum value of RGB values of each pixel point as a target pixel value, and generating a dark channel image.
S502, clustering and dividing the dark channel image to obtain a first sky area mask.
The implementation method of steps S501 and S502 can refer to the description in fig. 5, and the implementation process and the implementation principle are similar, and are not described herein again.
S503, converting the values of the pixel points of the sky panoramic image into gray values, and determining a gray-value distribution map from the gray values.
Specifically, the sky panoramic image is usually an RGB color image, and gray values lie in [0, 255]. The computer device may convert the RGB values of the sky panoramic image into gray values by floating-point arithmetic or by a shift method, and determine a gray-value distribution map from the gray value of each pixel point, i.e. the distribution of each gray value, such as which pixel points have gray value 0, which have gray value 1, and so on.
Optionally, the computer device may further form a gray image from the obtained gray values and determine the distribution map after Gaussian smoothing of the gray image, improving the accuracy of the resulting distribution map.
S504, determining sky-region candidate points from the gray-value distribution map; a sky-region candidate point is a pixel point with the maximum gray value.
S505, calculating the pixel difference between each sky-region candidate point and its adjacent pixel points, and determining a second sky region mask from the relation between the pixel difference and a preset difference threshold.
Specifically, the computer device may determine the pixel points with the maximum gray value from the gray-value distribution map and take them as sky-region candidate points; optionally, there may be multiple candidate points. It then computes the pixel difference between each candidate point and its adjacent pixel points; if the difference is smaller than a preset threshold (which may, for example, be set to 5), the adjacent pixel point is taken as a sky-region pixel point. Next, the gray values of the pixel points already assigned to the sky region are averaged, the pixel difference between this average and the next adjacent pixel point is computed, and if it is below the threshold, that pixel point also joins the sky region; this continues until all pixel points of the sky panoramic image have been traversed, producing all sky-region pixel points, whose values are set to 1 to obtain the second sky region mask.
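The growth procedure of S504 and S505 can be sketched as follows (our own illustrative code; growing from a single brightest seed with a running-mean comparison over 4-connected neighbours is our simplification of the description above):

```python
def grow_sky_region(gray, thresh=5):
    """Grow a sky mask from the brightest pixel: a neighbour joins the
    region when its difference from the region's running mean gray value
    is below `thresh`.  gray is a 2-D list of gray values."""
    h, w = len(gray), len(gray[0])
    seed = max(((i, j) for i in range(h) for j in range(w)),
               key=lambda p: gray[p[0]][p[1]])        # candidate point
    mask = [[0] * w for _ in range(h)]
    mask[seed[0]][seed[1]] = 1
    total, count = gray[seed[0]][seed[1]], 1          # running mean state
    frontier = [seed]
    while frontier:
        i, j = frontier.pop()
        for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < h and 0 <= nj < w and not mask[ni][nj]:
                if abs(gray[ni][nj] - total / count) < thresh:
                    mask[ni][nj] = 1
                    total += gray[ni][nj]
                    count += 1
                    frontier.append((ni, nj))
    return mask
```

On a small test image the bright top-left block is grown into the mask while the darker ground pixels are rejected by the threshold.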
And S506, taking the fusion result of the first sky region mask and the second sky region mask as a sky region mask.
Specifically, the computer device may perform weighted summation on corresponding pixel point values in the first sky region mask and the second sky region mask to obtain a final sky region mask.
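The fusion step can be sketched as a per-pixel weighted sum followed by binarisation; the equal weights and the 0.5 threshold here are illustrative assumptions, since the patent does not specify them:

```python
import numpy as np

def fuse_masks(m1, m2, w1=0.5, w2=0.5, thresh=0.5):
    """Weighted sum of the two sky region masks, then binarise to {0, 1}."""
    fused = w1 * m1.astype(np.float64) + w2 * m2.astype(np.float64)
    return (fused >= thresh).astype(np.uint8)
```

A pixel kept by either mask with sufficient weight survives into the final sky region mask.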
In the sky cloud image processing method provided in this embodiment, the computer device may obtain the sky region mask based on the dark channel method and the region growing method, comprehensively consider the characteristics of each pixel point in the sky panoramic image, and comprehensively determine the sky region mask result in many ways, so that the accuracy of the mask result may be further improved, and the accuracy of the obtained sky image may be improved at the same time.
In one embodiment, on the basis of the embodiments of fig. 5 and 6, the computer device may further determine the atmospheric transmittance and the atmospheric light value through the dark channel image and the sky area mask to perform defogging processing on the sky panoramic image. As shown in fig. 7, S302 may include:
S601, determining atmospheric transmittance according to the dark channel image and the sky area mask; the atmospheric transmittance satisfies a relation including a minimum function of the dark channel image and the sky region mask.
Specifically, the computer device may determine the atmospheric transmittance from a relation including a minimum function of the dark channel image D2 and the sky region mask D. Optionally, after obtaining the dark channel image, the computer device may further obtain the mean value dav of the dark channel image D2 and the normalized result Dave of the dark channel image D2, and then determine the atmospheric transmittance L(x) according to L(x) = min(min(ρ·dav, 0.9)·Dave, D), where ρ is an adjustment factor: the larger the value of ρ, the more obvious the defogging effect; the smaller the value of ρ, the more whitish and hazy the resulting image.
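Under this relation, the transmittance computation can be sketched as follows in Python/NumPy; normalising the dark channel to [0, 1] is an assumption made here so that the 0.9 clamp is meaningful:

```python
import numpy as np

def atmospheric_transmittance(d2, sky_mask, rho=0.8):
    """L(x) = min(min(rho * dav, 0.9) * Dave, D), applied element-wise.

    d2       : dark channel image, values in [0, 255]
    sky_mask : sky region mask D, values in [0, 1]
    rho      : adjustment factor; larger rho -> stronger defogging
    """
    d_av = d2.mean() / 255.0          # mean of the dark channel (normalised, an assumption)
    d_ave = d2 / 255.0                # normalised dark channel image Dave
    scale = min(rho * d_av, 0.9)      # scalar clamped at 0.9
    return np.minimum(scale * d_ave, sky_mask)
```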
S602, determining an atmospheric light value according to the RGB value of each pixel point in the sky panoramic image and the dark channel image; the relational expression satisfied by the atmospheric light value comprises a maximum function of the RGB value of each pixel point and the dark channel image.
Specifically, the computer device may determine the atmospheric light value according to a relational expression comprising a maximum function of the RGB value of each pixel point in the sky panoramic image and the dark channel image. Optionally, the computer device may determine the atmospheric light value A according to a relational expression of the form A = max(Ic(x)·D2(x)); wherein Ic(x) is the RGB value, i.e. the color channel value, of each pixel point in the sky panoramic image.
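Since the original formula is preserved only as an image in the source, the following sketch implements one common estimate consistent with the description, namely reading the maximum colour channel at the brightest dark-channel pixel; treat the exact selection rule as an assumption:

```python
import numpy as np

def atmospheric_light(rgb, d2):
    """Estimate atmospheric light A: take the pixel with the largest dark
    channel value and read the maximum of its colour channels.

    rgb : HxWx3 sky panoramic image, d2 : HxW dark channel image.
    """
    y, x = np.unravel_index(np.argmax(d2), d2.shape)  # brightest dark-channel pixel
    return float(rgb[y, x].max())                     # max over the RGB channels
```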
S603, obtaining the defogged sky panoramic image according to a relational expression containing the ratio of the atmospheric transmittance to the atmospheric light value.
Specifically, the computer device may obtain the defogged sky panoramic image according to a relation including the ratio of the atmospheric transmittance L(x) to the atmospheric light value A. Optionally, the defogged sky panoramic image F(x) may be obtained according to a relation of the form F(x) = (I(x) − A)/L(x) + A, wherein I(x) is the original sky panoramic image.
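Combining the quantities above, the recovery step can be sketched from the relation F(x) = (I(x) − A)/L(x) + A; the lower bound t_min on the transmittance is an added numerical safeguard, not part of the patent text:

```python
import numpy as np

def defog(i, trans, a, t_min=0.1):
    """Recover the defogged image F = (I - A) / max(L, t_min) + A."""
    t = np.maximum(trans, t_min)      # avoid division by near-zero transmittance
    # Broadcast the HxW transmittance over the three colour channels.
    f = (i.astype(np.float64) - a) / t[..., None] + a
    return np.clip(f, 0, 255)
```

With the transmittance and atmospheric light obtained as in S601 and S602, this produces the defogged sky panoramic image used by the subsequent steps.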
In the sky cloud image processing method provided in this embodiment, the computer device may determine the atmospheric transmittance based on the dark channel image and the sky area mask, determine the atmospheric light value according to the RGB value of each pixel point in the sky panoramic image and the dark channel image, and obtain the defogged sky panoramic image according to a relational expression including the ratio of the atmospheric transmittance to the atmospheric light value. This avoids the situation in which defogging cannot be performed because the atmospheric transmittance and the atmospheric light value cannot be acquired from the meteorological system, so the method can be applied to defogging the sky panoramic image in any scene.
To better understand the overall flow of the sky cloud image processing method, the method is described in an overall embodiment. As shown in fig. 8, the method includes:
S701, performing image splicing processing on a plurality of sky cloud images to obtain a sky panoramic image;
S702, according to values of pixel points in the sky panoramic image, extracting characteristics of a sky area of the sky panoramic image to obtain a sky area mask;
S703, defogging the sky panoramic image to obtain a defogged sky panoramic image;
S704, obtaining a defogged sky image based on the corresponding relation between the sky area mask and the pixel points of the defogged sky panoramic image;
S705, according to shooting parameters of shooting equipment, determining second pixel points corresponding to the first pixel points in the target image from the defogged sky image; the target image is a two-dimensional image obtained by performing projection mapping on the defogged sky image;
S706, determining the value of each first pixel point through an interpolation algorithm according to the value of the second pixel point;
S707, generating a distorted sky image according to the value of each first pixel point in the target image.
For the implementation process of each step in this embodiment, reference may be made to the description in the above embodiments, and the implementation principle and the technical effect are similar, which are not described herein again.
It should be understood that although the various steps in the flowcharts of figs. 2-8 are shown in sequence as indicated by the arrows, these steps are not necessarily performed in that sequence. Unless explicitly stated otherwise herein, there is no strict order limitation on the performance of these steps, and they may be performed in other orders. Moreover, at least some of the steps in figs. 2-8 may include multiple sub-steps or stages that are not necessarily performed at the same moment but may be performed at different moments; these sub-steps or stages are not necessarily performed sequentially, but may be performed in turn or alternately with other steps or with at least some of the sub-steps or stages of other steps.
In one embodiment, as shown in fig. 9, there is provided a processing apparatus of a sky cloud map, including: an acquisition module 11, an image stitching module 12, and a distortion removal module 13.
Specifically, the acquiring module 11 is configured to acquire multiple sky cloud images; the sky cloud picture is a two-dimensional image obtained by shooting equipment from different angles.
The image splicing module 12 is configured to perform image splicing processing on multiple sky cloud images to obtain a sky panoramic image; the sky panoramic image is a spherical image.
And the distortion removing module 13 is configured to perform distortion removing processing on the sky panoramic image to obtain a distorted sky image.
The processing apparatus for sky cloud images provided in this embodiment may implement the above method embodiments, and the implementation principle and technical effects are similar, which are not described herein again.
In an embodiment, the distortion removing module 13 is specifically configured to determine, according to shooting parameters of the shooting device, second pixel points corresponding to the first pixel points in the target image from the sky panorama image; the target image is a two-dimensional image obtained by projecting and mapping the sky panoramic image; determining the value of each first pixel point according to the value of the second pixel point; and generating a distorted sky image according to the value of each first pixel point in the target image.
In one embodiment, the distortion removing module 13 is specifically configured to perform tangent function operation and arc tangent function operation on the shooting parameters, and determine the size and the central point of the target image; based on each first pixel point, determining the relative position of the first pixel point and the central point through the operation of an inverse cosine function and the operation of an inverse tangent function according to the size of the target image and the position of the central point; and determining a second pixel point corresponding to the first pixel point based on the product of the relative position of the first pixel point and the central point and the shooting parameter.
In one embodiment, the apparatus further includes a defogging module, configured to perform feature extraction of a sky region on the sky panoramic image according to values of pixel points in the sky panoramic image, so as to obtain a sky region mask; defogging the sky panoramic image to obtain a defogged sky panoramic image; obtaining a defogged sky image based on the corresponding relation of pixel points between the sky area mask and the defogged sky panoramic image; correspondingly, the distortion removing module 13 is specifically configured to perform distortion removing processing on the defogged sky image to obtain a distorted sky image.
In one embodiment, the defogging module is specifically configured to generate a dark channel image by taking a minimum value of RGB values of each pixel point as a target pixel value for each pixel point in the sky panoramic image; and clustering and dividing the dark channel images to obtain a sky area mask.
In one embodiment, the defogging module is specifically configured to generate a dark channel image by taking a minimum value of RGB values of each pixel point as a target pixel value for each pixel point in the sky panoramic image; clustering and dividing the dark channel images to obtain a first sky area mask; converting the values of the pixel points in the sky panoramic image into gray values, and determining a gray value distribution map according to the gray values of the pixel points; determining sky area candidate points according to the gray value distribution diagram; the candidate point of the sky area is a pixel point with the maximum gray value; calculating a pixel difference value between the sky area candidate point and an adjacent pixel point of the sky area candidate point, and determining a second sky area mask according to the magnitude relation between the pixel difference value and a preset difference value; and taking the fusion result of the first sky area mask and the second sky area mask as a sky area mask.
In one embodiment, the defogging module is specifically configured to determine an atmospheric transmittance according to the dark channel image and the sky region mask; the relation formula satisfied by the atmospheric transmittance comprises a minimum function of the dark channel image and the sky area mask; determining an atmospheric light value according to the RGB value of each pixel point in the sky panoramic image and the dark channel image; the relational expression satisfied by the atmospheric light value comprises the RGB value of each pixel point and the maximum function of the dark channel image; and obtaining the defogged sky panoramic image according to a relational expression containing the ratio of the atmospheric transmittance to the atmospheric light value.
In an embodiment, the image stitching module 12 is specifically configured to map a plurality of sky cloud images into spherical images according to shooting parameters of a shooting device; and determining the value of each pixel point in the sky panoramic image based on the value of each pixel point in each spherical image to obtain the sky panoramic image.
For specific limitations of the sky cloud image processing apparatus, reference may be made to the above limitations of the sky cloud image processing method, which are not described herein again. The modules in the sky cloud image processing apparatus can be wholly or partially implemented by software, hardware, or a combination thereof. The modules can be embedded in hardware form in, or be independent of, a processor in the computer device, or can be stored in software form in a memory of the computer device, so that the processor can invoke and execute the operations corresponding to the modules.
In one embodiment, a computer device is provided, which may be a terminal, and its internal structure diagram may be as shown in fig. 1. The computer device includes a processor, a memory, a communication interface, a display screen, and an input device connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The communication interface of the computer device is used for carrying out wired or wireless communication with an external terminal, and the wireless communication can be realized through WIFI, an operator network, NFC (near field communication) or other technologies. The computer program is executed by a processor to implement a method of processing a sky cloud map. The display screen of the computer equipment can be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer equipment can be a touch layer covered on the display screen, a key, a track ball or a touch pad arranged on the shell of the computer equipment, an external keyboard, a touch pad or a mouse and the like.
Those skilled in the art will appreciate that the architecture shown in fig. 1 is merely a block diagram of some of the structures associated with the disclosed aspects and is not intended to limit the computer devices to which the disclosed aspects apply; a particular computer device may include more or fewer components than those shown, or combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is provided, comprising a memory and a processor, the memory having a computer program stored therein, the processor implementing the following steps when executing the computer program:
acquiring a plurality of sky cloud pictures; the sky cloud picture is a two-dimensional image obtained by shooting equipment from different angles;
carrying out image splicing processing on a plurality of sky cloud images to obtain a sky panoramic image; the sky panoramic image is a spherical image;
and carrying out distortion removal treatment on the sky panoramic image to obtain a distorted sky image.
The implementation principle and technical effect of the computer device provided in this embodiment are similar to those of the method embodiments described above, and are not described herein again.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
according to shooting parameters of shooting equipment, second pixel points corresponding to the first pixel points in the target image are determined from the sky panoramic image; the target image is a two-dimensional image obtained by projecting and mapping the sky panoramic image;
determining the value of each first pixel point according to the value of the second pixel point;
and generating a distorted sky image according to the value of each first pixel point in the target image.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
performing tangent function operation and arc tangent function operation on the shooting parameters, and determining the size and the central point of the target image;
based on each first pixel point, determining the relative position of the first pixel point and the central point through the operation of an inverse cosine function and the operation of an inverse tangent function according to the size of the target image and the position of the central point;
and determining a second pixel point corresponding to the first pixel point based on the product of the relative position of the first pixel point and the central point and the shooting parameter.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
according to the value of a pixel point in the sky panoramic image, extracting the characteristics of a sky area of the sky panoramic image to obtain a sky area mask;
defogging the sky panoramic image to obtain a defogged sky panoramic image;
obtaining a defogged sky image based on the corresponding relation of pixel points between the sky area mask and the defogged sky panoramic image;
Correspondingly, performing distortion removal processing on the sky panoramic image to obtain a distorted sky image includes:
and carrying out distortion removal treatment on the defogged sky image to obtain a distorted sky image.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
aiming at each pixel point in the sky panoramic image, taking the minimum value of RGB values of each pixel point as a target pixel value, and generating a dark channel image;
and clustering and dividing the dark channel images to obtain a sky area mask.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
aiming at each pixel point in the sky panoramic image, taking the minimum value of RGB values of each pixel point as a target pixel value, and generating a dark channel image;
clustering and dividing the dark channel images to obtain a first sky area mask;
converting the values of the pixel points in the sky panoramic image into gray values, and determining a gray value distribution map according to the gray values of the pixel points;
determining sky area candidate points according to the gray value distribution diagram; the candidate point of the sky area is a pixel point with the maximum gray value;
calculating a pixel difference value between the sky area candidate point and an adjacent pixel point of the sky area candidate point, and determining a second sky area mask according to the magnitude relation between the pixel difference value and a preset difference value;
and taking the fusion result of the first sky area mask and the second sky area mask as a sky area mask.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
determining atmospheric transmittance according to the dark channel image and the sky area mask; the relation formula satisfied by the atmospheric transmittance comprises a minimum function of the dark channel image and the sky area mask;
determining an atmospheric light value according to the RGB value of each pixel point in the sky panoramic image and the dark channel image; the relational expression satisfied by the atmospheric light value comprises the RGB value of each pixel point and the maximum function of the dark channel image;
and obtaining the defogged sky panoramic image according to a relational expression containing the ratio of the atmospheric transmittance to the atmospheric light value.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
respectively mapping a plurality of sky cloud images into spherical images according to shooting parameters of shooting equipment;
and determining the value of each pixel point in the sky panoramic image based on the value of each pixel point in each spherical image to obtain the sky panoramic image.
In one embodiment, a computer-readable storage medium is provided, having a computer program stored thereon, which when executed by a processor, performs the steps of:
acquiring a plurality of sky cloud pictures; the sky cloud picture is a two-dimensional image obtained by shooting equipment from different angles;
carrying out image splicing processing on a plurality of sky cloud images to obtain a sky panoramic image; the sky panoramic image is a spherical image;
and carrying out distortion removal treatment on the sky panoramic image to obtain a distorted sky image.
The implementation principle and technical effect of the computer-readable storage medium provided by this embodiment are similar to those of the above-described method embodiment, and are not described herein again.
In one embodiment, the computer program when executed by the processor further performs the steps of:
according to shooting parameters of shooting equipment, second pixel points corresponding to the first pixel points in the target image are determined from the sky panoramic image; the target image is a two-dimensional image obtained by projecting and mapping the sky panoramic image;
determining the value of each first pixel point according to the value of the second pixel point;
and generating a distorted sky image according to the value of each first pixel point in the target image.
In one embodiment, the computer program when executed by the processor further performs the steps of:
performing tangent function operation and arc tangent function operation on the shooting parameters, and determining the size and the central point of the target image;
based on each first pixel point, determining the relative position of the first pixel point and the central point through the operation of an inverse cosine function and the operation of an inverse tangent function according to the size of the target image and the position of the central point;
and determining a second pixel point corresponding to the first pixel point based on the product of the relative position of the first pixel point and the central point and the shooting parameter.
In one embodiment, the computer program when executed by the processor further performs the steps of:
according to the value of a pixel point in the sky panoramic image, extracting the characteristics of a sky area of the sky panoramic image to obtain a sky area mask;
defogging the sky panoramic image to obtain a defogged sky panoramic image;
obtaining a defogged sky image based on the corresponding relation of pixel points between the sky area mask and the defogged sky panoramic image;
Correspondingly, performing distortion removal processing on the sky panoramic image to obtain a distorted sky image includes:
and carrying out distortion removal treatment on the defogged sky image to obtain a distorted sky image.
In one embodiment, the computer program when executed by the processor further performs the steps of:
aiming at each pixel point in the sky panoramic image, taking the minimum value of RGB values of each pixel point as a target pixel value, and generating a dark channel image;
and clustering and dividing the dark channel images to obtain a sky area mask.
In one embodiment, the computer program when executed by the processor further performs the steps of:
aiming at each pixel point in the sky panoramic image, taking the minimum value of RGB values of each pixel point as a target pixel value, and generating a dark channel image;
clustering and dividing the dark channel images to obtain a first sky area mask;
converting the values of the pixel points in the sky panoramic image into gray values, and determining a gray value distribution map according to the gray values of the pixel points;
determining sky area candidate points according to the gray value distribution diagram; the candidate point of the sky area is a pixel point with the maximum gray value;
calculating a pixel difference value between the sky area candidate point and an adjacent pixel point of the sky area candidate point, and determining a second sky area mask according to the magnitude relation between the pixel difference value and a preset difference value;
and taking the fusion result of the first sky area mask and the second sky area mask as a sky area mask.
In one embodiment, the computer program when executed by the processor further performs the steps of:
determining atmospheric transmittance according to the dark channel image and the sky area mask; the relation formula satisfied by the atmospheric transmittance comprises a minimum function of the dark channel image and the sky area mask;
determining an atmospheric light value according to the RGB value of each pixel point in the sky panoramic image and the dark channel image; the relational expression satisfied by the atmospheric light value comprises the RGB value of each pixel point and the maximum function of the dark channel image;
and obtaining the defogged sky panoramic image according to a relational expression containing the ratio of the atmospheric transmittance to the atmospheric light value.
In one embodiment, the computer program when executed by the processor further performs the steps of:
respectively mapping a plurality of sky cloud images into spherical images according to shooting parameters of shooting equipment;
and determining the value of each pixel point in the sky panoramic image based on the value of each pixel point in each spherical image to obtain the sky panoramic image.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program instructing relevant hardware; the computer program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in the embodiments provided herein can include at least one of non-volatile and volatile memory. Non-volatile memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash memory, optical storage, or the like. Volatile memory can include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM can take many forms, such as Static Random Access Memory (SRAM) or Dynamic Random Access Memory (DRAM), among others.
The technical features of the above embodiments can be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the above embodiments are not described, but should be considered as the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
The above-mentioned embodiments only express several embodiments of the present application, and the description thereof is more specific and detailed, but not construed as limiting the scope of the invention. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the concept of the present application, which falls within the scope of protection of the present application. Therefore, the protection scope of the present patent shall be subject to the appended claims.

Claims (10)

1. A method of processing a sky cloud map, the method comprising:
acquiring a plurality of sky cloud pictures; the sky cloud picture is a two-dimensional image obtained by shooting equipment from different angles;
performing image splicing processing on the multiple sky cloud images to obtain a sky panoramic image; the sky panoramic image is a spherical image;
and carrying out distortion removal treatment on the sky panoramic image to obtain a distorted sky image.
2. The method of claim 1, wherein said de-distorting said sky panorama image to obtain a de-distorted sky image, comprises:
according to the shooting parameters of the shooting equipment, second pixel points corresponding to the first pixel points in the target image are determined from the sky panoramic image; the target image is a two-dimensional image obtained by projecting and mapping the sky panoramic image;
determining the value of each first pixel point according to the value of the second pixel point;
and generating the undistorted sky image according to the value of each first pixel point in the target image.
3. The method of claim 2, wherein the determining, from the sky panorama image according to the shooting parameters of the shooting device, second pixel points corresponding to first pixel points in a target image comprises:
performing tangent function operation and arc tangent function operation on the shooting parameters, and determining the size and the central point of the target image;
based on each first pixel point, determining the relative position of the first pixel point and the central point through the operation of an inverse cosine function and the operation of an inverse tangent function according to the size of the target image and the position of the central point;
and determining a second pixel point corresponding to the first pixel point based on the product of the relative position of the first pixel point and the central point and the shooting parameter.
4. The method of claim 1, wherein after the image stitching the plurality of sky cloud images to obtain a sky panorama image, the method further comprises:
according to the value of a pixel point in the sky panoramic image, extracting the characteristics of a sky area of the sky panoramic image to obtain a sky area mask;
defogging the sky panoramic image to obtain a defogged sky panoramic image;
obtaining a defogged sky image based on the corresponding relation of pixel points between the sky area mask and the defogged sky panoramic image;
correspondingly, the performing distortion removal processing on the sky panoramic image to obtain a distorted sky image includes:
and carrying out distortion removal treatment on the defogged sky image to obtain a distorted sky image.
5. The method of claim 4, wherein said extracting the feature of the sky area from the sky panorama image according to the values of the pixels in the sky panorama image to obtain a sky area mask comprises:
aiming at each pixel point in the sky panoramic image, taking the minimum value of RGB values of each pixel point as a target pixel value, and generating a dark channel image;
and clustering and dividing the dark channel image to obtain a sky area mask.
6. The method of claim 4, wherein said extracting the feature of the sky area from the sky panorama image according to the values of the pixels in the sky panorama image to obtain a sky area mask comprises:
aiming at each pixel point in the sky panoramic image, taking the minimum value of RGB values of each pixel point as a target pixel value, and generating a dark channel image;
clustering and dividing the dark channel image to obtain a first sky area mask;
converting the values of the pixel points in the sky panoramic image into gray values, and determining a gray value distribution map according to the gray values of the pixel points;
determining sky region candidate points according to the gray value distribution diagram; the candidate point of the sky area is a pixel point with the maximum gray value;
calculating a pixel difference value between the sky area candidate point and an adjacent pixel point of the sky area candidate point, and determining a second sky area mask according to the magnitude relation between the pixel difference value and a preset difference value;
and taking a fusion result of the first sky area mask and the second sky area mask as the sky area mask.
7. The method of claim 5 or 6, wherein said defogging the sky panoramic image to obtain a defogged sky panoramic image comprises:
determining the atmospheric transmittance according to the dark channel image and the sky region mask, the relation satisfied by the atmospheric transmittance including a minimum function over the dark channel image and the sky region mask;
determining the atmospheric light value according to the RGB values of the pixels in the sky panoramic image and the dark channel image, the relation satisfied by the atmospheric light value including a maximum function over the RGB values of the pixels and the dark channel image; and
obtaining the defogged sky panoramic image according to a relation containing the ratio of the atmospheric transmittance to the atmospheric light value.
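The claim names only the min/max relations, not concrete formulas. One plausible instantiation is the classic dark channel prior recovery J = (I - A) / t + A (He et al., cited below among the non-patent references), sketched here with the conventional constants omega and t0, which are not taken from the patent.

```python
import numpy as np

def dehaze(img, sky_mask, omega=0.95, t0=0.1):
    """Dark-channel-prior dehazing in the spirit of claims 5-7; omega and t0
    are the constants conventional in the literature, not patent values."""
    dark = img.min(axis=2)
    # Atmospheric light A: mean colour of the brightest dark-channel pixels (max function)
    thresh = np.quantile(dark, 0.99)
    A = img[dark >= thresh].reshape(-1, 3).mean(axis=0)
    # Transmittance t from the dark channel (min function over channels)
    t = 1.0 - omega * dark / max(A.max(), 1e-6)
    t[sky_mask] = np.maximum(t[sky_mask], t0)  # floor t in the sky, where the prior fails
    t = np.maximum(t, 1e-3)
    # Recover the scene radiance J = (I - A) / t + A
    J = (img - A) / t[..., None] + A
    return np.clip(J, 0, 255)

img = np.full((4, 4, 3), 180.0)        # uniformly hazy synthetic frame
sky = np.ones((4, 4), dtype=bool)
out = dehaze(img, sky)
print(out.shape)
```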
8. The method of claim 1, wherein said image stitching the plurality of sky cloud images to obtain a sky panoramic image comprises:
mapping each of the plurality of sky cloud images onto a spherical image according to the shooting parameters of the capture device; and
determining the value of each pixel in the sky panoramic image based on the values of the pixels in each spherical image, to obtain the sky panoramic image.
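The blending half of claim 8 might look like the following, assuming each spherical image comes with a validity mask and that overlapping pixels are simply averaged; the averaging policy is an assumption, since the claim does not specify one.

```python
import numpy as np

def blend_panorama(spherical_imgs, masks):
    """Claim 8's final step, sketched: each spherical image contributes its
    valid pixels; overlapping pixels are averaged (assumed policy)."""
    acc = np.zeros(spherical_imgs[0].shape, dtype=float)
    cnt = np.zeros(spherical_imgs[0].shape[:2], dtype=float)
    for im, m in zip(spherical_imgs, masks):
        acc[m] += im[m]
        cnt[m] += 1.0
    cnt = np.maximum(cnt, 1.0)          # avoid division by zero in uncovered pixels
    return acc / cnt[..., None]

# Two 2x2 spherical images overlapping on the first row
a = np.full((2, 2, 3), 100.0)
ma = np.array([[True, True], [True, False]])
b = np.full((2, 2, 3), 200.0)
mb = np.array([[True, True], [False, True]])
pano = blend_panorama([a, b], [ma, mb])
print(pano[0, 0])   # overlap pixel: average of 100 and 200
```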
9. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor, when executing the computer program, implements the steps of the method according to any of claims 1-8.
10. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 8.
CN202010402326.2A 2020-05-13 2020-05-13 Sky cloud image processing method, computer device and readable storage medium Pending CN111598777A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010402326.2A CN111598777A (en) 2020-05-13 2020-05-13 Sky cloud image processing method, computer device and readable storage medium


Publications (1)

Publication Number Publication Date
CN111598777A true CN111598777A (en) 2020-08-28

Family

ID=72188653

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010402326.2A Pending CN111598777A (en) 2020-05-13 2020-05-13 Sky cloud image processing method, computer device and readable storage medium

Country Status (1)

Country Link
CN (1) CN111598777A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112233020A (en) * 2020-11-09 2021-01-15 珠海大横琴科技发展有限公司 Unmanned aerial vehicle image splicing method and device, computer equipment and storage medium
CN112907445A (en) * 2021-02-08 2021-06-04 杭州海康威视数字技术股份有限公司 Method, device and equipment for splicing sky cloud pictures
CN112907447A (en) * 2021-02-08 2021-06-04 杭州海康威视数字技术股份有限公司 Splicing of sky cloud pictures and method for determining installation positions of multiple cameras
CN113870439A (en) * 2021-09-29 2021-12-31 北京百度网讯科技有限公司 Method, apparatus, device and storage medium for processing image
WO2023193648A1 (en) * 2022-04-08 2023-10-12 影石创新科技股份有限公司 Image processing method and apparatus, electronic device, and storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2009017332A1 (en) * 2007-07-29 2009-02-05 Nanophotonics Co., Ltd. Methods of obtaining panoramic images using rotationally symmetric wide-angle lenses and devices thereof
CN101895693A (en) * 2010-06-07 2010-11-24 北京高森明晨信息科技有限公司 Method and device for generating panoramic image
CN103188433A (en) * 2011-12-30 2013-07-03 株式会社日立制作所 Image demisting device and image demisting method
CN103914813A (en) * 2014-04-10 2014-07-09 西安电子科技大学 Colorful haze image defogging and illumination compensation restoration method
JP2017138647A (en) * 2016-02-01 2017-08-10 三菱電機株式会社 Image processing device, image processing method, video photographing apparatus, video recording reproduction apparatus, program and recording medium
CN110458815A (en) * 2019-08-01 2019-11-15 北京百度网讯科技有限公司 There is the method and device of mist scene detection

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
KAIMING HE et al.: "Single image haze removal using dark channel prior", pages 1956-1963 *
CHANG Jian; LIU Wang; BAI Jiahong: "Retinex Image Enhancement Algorithm Based on Image Fusion Technology", vol. 40, no. 09, pages 1624-1635 *
YANG Deming et al.: "Dark Channel Prior Dehazing Algorithm Optimized by Region Segmentation", vol. 45, no. 4, page 1 *
YANG Da; WANG Xiaotong; XU Guanlei; ZHAN Yongqiang: "An All-Sky Cloud Cover Estimation Method Based on Image Segmentation and Image Stitching Techniques", vol. 4, no. 08, pages 15-21 *
CHEN Qingqing et al.: "All-Sky Image Stitching on a Map Projection Surface", vol. 35, no. 8, pages 1-4 *

Similar Documents

Publication Publication Date Title
CN111052176B (en) Seamless image stitching
CN111598777A (en) Sky cloud image processing method, computer device and readable storage medium
CN110211043B (en) Registration method based on grid optimization for panoramic image stitching
CN109242961B (en) Face modeling method and device, electronic equipment and computer readable medium
US9454796B2 (en) Aligning ground based images and aerial imagery
US10762655B1 (en) Disparity estimation using sparsely-distributed phase detection pixels
CN108833785B (en) Fusion method and device of multi-view images, computer equipment and storage medium
US8818101B1 (en) Apparatus and method for feature matching in distorted images
CN109474780B (en) Method and device for image processing
CN112017135B (en) Method, system and equipment for spatial-temporal fusion of remote sensing image data
US8374428B2 (en) Color balancing for partially overlapping images
CN109698944B (en) Projection area correction method, projection apparatus, and computer-readable storage medium
CN111563552A (en) Image fusion method and related equipment and device
WO2018102990A1 (en) System and method for rectifying a wide-angle image
WO2022160857A1 (en) Image processing method and apparatus, and computer-readable storage medium and electronic device
CN114143528A (en) Multi-video stream fusion method, electronic device and storage medium
KR101324250B1 (en) optical axis error compensation method using image processing, the method of the same, and the zoom camera provided for the compensation function of the optical axis error
US20180114291A1 (en) Image processing method and device as well as non-transitory computer-readable medium
US10460487B2 (en) Automatic image synthesis method
CN111866523B (en) Panoramic video synthesis method and device, electronic equipment and computer storage medium
CN111383254A (en) Depth information acquisition method and system and terminal equipment
KR101868740B1 (en) Apparatus and method for generating panorama image
US10868993B2 (en) Method controlling image sensor parameters
WO2016034709A1 (en) Depth map based perspective correction in digital photos
CN117152218A (en) Image registration method, image registration device, computer equipment and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
AD01 Patent right deemed abandoned

Effective date of abandoning: 20240419