CN113936031A - Cloud shadow track prediction method based on machine vision - Google Patents


Info

Publication number
CN113936031A
CN113936031A
Authority
CN
China
Prior art keywords
cloud
image
prediction
track
shadow
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111201206.7A
Other languages
Chinese (zh)
Inventor
陈彬
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Weihai Ruowei Information Technology Co ltd
Original Assignee
Weihai Ruowei Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Weihai Ruowei Information Technology Co ltd filed Critical Weihai Ruowei Information Technology Co ltd
Priority to CN202111201206.7A priority Critical patent/CN113936031A/en
Publication of CN113936031A publication Critical patent/CN113936031A/en
Pending legal-status Critical Current


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/207Analysis of motion for motion estimation over a hierarchy of resolutions
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/044Recurrent networks, e.g. Hopfield networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/80Geometric correction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/12Edge-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30181Earth observation
    • G06T2207/30192Weather; Meteorology
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30241Trajectory

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Software Systems (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Biomedical Technology (AREA)
  • Computational Linguistics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Quality & Reliability (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Image Processing (AREA)

Abstract

The invention provides a cloud shadow trajectory prediction method based on machine vision, which solves the technical problem that conventional photovoltaic tracking systems cannot accurately predict the cloud shadow trajectory over short time horizons. The method mainly comprises the following steps: S1, shooting with a binocular camera; S2, image preprocessing; S3, image rectification; S4, cloud contour segmentation; S5, image matching; S6, three-dimensional reconstruction; S7, cloud movement trajectory prediction; and S8, cloud shadow trajectory prediction. Compared with a single image of the clouds in the sky, the binocular camera can accurately recover the three-dimensional information of a cloud, and, combined with machine learning, achieves high-precision prediction of the cloud shadow's movement trajectory. The system is simple and the investment cost is low; it can perform high-precision short-term prediction of ground cloud shadow movement within a local area, effectively improves the power-generation forecasting level of photovoltaic power stations, provides good guidance for grid power-supply planning, and can be widely applied in the technical field of photovoltaic power generation.

Description

Cloud shadow track prediction method based on machine vision
Technical Field
The invention relates to the technical field of machine vision, in particular to a cloud shadow track prediction method based on machine vision.
Background
A photovoltaic power station receives sunlight on its solar panels and converts light energy into electric energy; the factor that chiefly influences the generated power is illumination intensity. To predict power changes well, a typical photovoltaic power station is equipped with a photovoltaic tracking system that follows the sun, usually by means of an astronomical algorithm together with a backtracking mode; this tracking mode meets the requirement in clear and in fully overcast weather. In partly cloudy weather, however, clouds of various forms cast shadows of various forms on the ground. During operation of the solar panel assembly, direct solar radiation (radiation coming straight from the sun with an unchanged direction) plays the dominant role in practice, so when the station lies under a cloud shadow the power-generation effect is seriously affected. Because the photovoltaic tracking system cannot predict when a cloud shadow will cover the station, nor its coverage area and duration, it fails to meet the requirement.
The existing technical solutions use short-term prediction to compensate for insufficient power prediction. The common methods are as follows:
(1) Based on big-data statistics, select similar historical meteorological conditions and predict in combination with the day's environmental parameters (meteorological data).
(2) Predict cloud movement from satellite cloud images, thereby predicting when cloud shadows will cover the photovoltaic power station and finally predicting the generated power. For example, Chinese patent CN110633862A discloses a photovoltaic power prediction algorithm based on satellite cloud images: using visible-light satellite cloud images as the prediction data source, it analyses the changes of the cloud images over the past several hours to predict cloud occlusion above a specific photovoltaic power station for the coming hours, and corrects the predicted irradiation intensity of that station with an empirical model, thereby addressing the problem that the power cannot be predicted when the station is shaded by clouds. The photovoltaic power prediction method with a dynamic attention domain over meteorological-satellite cloud images disclosed in Chinese patent CN113298303A adopts a similar principle. Schemes of this type build a training data set from satellite cloud images under open-sky conditions, predict the cloud movement above the specific site, infer when the solar panels will be shaded by cloud shadows, and obtain generated-power predictions. Prediction accuracy is improved to some extent, but a satellite cloud image cannot resolve the occlusion height at the cloud base, nor recover the correct position and contour of the cloud.
(3) Erect shooting equipment on the ground and predict cloud movement trajectories from sky images. For example, Chinese patent CN106372749A (an ultra-short-term photovoltaic power prediction method based on cloud change analysis) and Chinese patent CN113159466A (a short-term photovoltaic power generation prediction system and method) photograph the sky and compute cloud movement features by image analysis to predict cloud occlusion. These schemes provide short-term prediction at low cost with systems that are easy to set up. However, as the disclosed details show, the main steps are: shoot an image, analyse it, extract features, compute the pixel movement of clouds, and predict the occlusion. The cloud's speed and direction are obtained from the pixel movement of feature points without accounting for the true cloud height; moreover, the final prediction relies on a machine learning model whose inputs include a large number of correlated quantities, so overfitting occurs easily, ultimately degrading prediction accuracy.
In summary, the prediction accuracy of the current short-term prediction method needs to be further improved.
Disclosure of Invention
The invention aims to remedy the above shortcomings of the prior art and achieve accurate prediction of the ground cloud shadow's movement trajectory by means of machine vision.
Therefore, the invention provides a cloud shadow track prediction method based on machine vision, which mainly comprises the following steps:
s1, shooting by using a binocular camera, and collecting a sky image in the area;
s2, preprocessing images, analyzing the sky image obtained in the S1 through a machine learning model, and feeding back the current weather type;
s3, correcting the image: when the weather type fed back in S2 is few-clouds or cloudy, rectify the sky image; when the weather type fed back in S2 is neither, do not process the image, stop the subsequent steps, and wait for the next feedback result;
s4, carrying out cloud contour segmentation, and carrying out cloud contour extraction on the corrected image obtained in the S3;
s5, image matching, namely, carrying out image matching on the binocular image of S4;
s6, performing three-dimensional reconstruction, and obtaining three-dimensional cloud data through parallax computation and geometric transformation according to the matching result of the S5;
s7, cloud moving track prediction, namely, bringing the three-dimensional data in the S6 into a machine learning model to obtain a predicted track of cloud moving in the next time period;
and S8, cloud shadow track prediction, wherein the predicted track of the ground cloud shadow movement is obtained through the cloud moving track projection transformation in the S7.
Preferably, the machine learning model in S2 is a residual neural network model that has been pre-trained for weather-type classification on an image set.
Preferably, the image rectification in S3 includes the following steps:
s31, correcting distortion of the binocular camera;
and S32, correcting the parallel of the binocular camera.
Preferably, the cloud contour extraction in the step S4 adopts a semantic segmentation algorithm, and a segmentation algorithm model is built through a U-Net network.
Preferably, the feature points of the image matching in S5 are obtained by SIFT algorithm.
Preferably, the three-dimensional data in S6 includes a cloud height, and the cloud height calculation process includes the following steps:
s61, obtaining a depth map of the cloud through parallax calculation of the matched binocular images in the S5, and obtaining a height map of the cloud through coordinate transformation of the depth map;
and S62, selecting the heights of the feature points in the height map for fitting to obtain the average height of the cloud.
Preferably, the three-dimensional data further comprises cloud longitude and latitude; and according to the contour extracted in the S4, obtaining the cloud center-of-mass coordinates of the cloud through a plane center-of-mass formula, and obtaining the longitude and the latitude of the cloud center-of-mass through geometric conversion.
Preferably, the machine learning model in S7 is a recurrent neural network model. The recurrent neural network model is pre-trained for trajectory-point value prediction; the training set consists of a number of arrays, the input of each array comprising the three-dimensional data and meteorological data. The meteorological data include, but are not limited to, one or more of the wind speed, wind direction, air pressure and temperature at the moment corresponding to each set of three-dimensional data.
Preferably, S7 further includes the following steps for correcting the predicted trajectory:
s71, acquiring satellite cloud images over consecutive time periods, and establishing a prediction model of cloud movement above the binocular camera's field of view;
s72, carrying out weighted fitting on the second predicted trajectory obtained by the cloud moving prediction model in the S71 and the predicted trajectory to obtain the corrected predicted trajectory.
The invention has the beneficial effects that:
the system is simple, the investment cost is low, high-precision three-dimensional information recovery can be carried out on the cloud in the current area by means of machine binocular vision, so that the ground cloud shadow can be accurately obtained, high-precision prediction can be carried out on the movement of the ground cloud shadow within a local range in a short period by means of combination of the neural network prediction track, the generation power prediction level of the photovoltaic power station is effectively improved, and the long-term prediction result is matched to have a good guiding effect on power supply planning of a power grid.
Drawings
FIG. 1 is a flow chart of a method in an embodiment of the present invention;
FIG. 2 is a diagram of a residual neural network architecture in an embodiment of the present invention;
FIG. 3 is a cloud profile extraction test chart in an embodiment of the present invention;
fig. 4 is a comparison diagram of cloud centroid predicted trajectories in an embodiment of the invention.
Detailed Description
The invention is further described below with reference to the drawings and specific embodiments to assist understanding. Unless otherwise specified, the methods used in the invention are conventional methods, and the raw materials and apparatus used are conventional commercially available products.
Example 1
As shown in fig. 1, a cloud shadow trajectory prediction method based on machine vision mainly includes the following steps:
s1, shooting by using a binocular camera, firstly erecting a binocular camera system on the ground, and transmitting the acquired sky image in the region back to a server for analysis in a wired or wireless transmission mode. Wherein the acquisition strategy adopts timing acquisition.
S2, image preprocessing. In severe weather such as overcast, rain, snow, haze and sandstorms, solar radiation is almost entirely scattered; since the photovoltaic tracker mainly receives direct solar radiation, it cannot receive useful solar radiation in such weather.
The weather can accordingly be classified into the following three categories:
a. Fully transmitting (clear sky, no clouds): the photovoltaic tracker receives solar radiation directly.
b. Partially shaded (few clouds or many clouds on an otherwise sunny day): the photovoltaic tracker needs to adjust its angle to receive solar radiation. This category is the main subject of the subsequent study.
c. Fully opaque (severe weather such as overcast, rain, snow, haze and sandstorms): the photovoltaic tracker cannot receive useful radiation.
Preferably, a residual neural network (ResNet) is used for classification training according to the above categories; its structure is shown in fig. 2. In the training data set, images for sunny, cloudy and rainy days were shot with an erected camera, while images for snow, haze and sandstorms were obtained from online databases.
Each picture is read as three BGR channels per pixel, each channel described by a value of 0-255, and is fed into the neural network model for training; the trained model is then saved. During model testing, a photographed sky picture is input and the classification result is obtained by model prediction; the classification accuracy verified on a test set is 99.4%. In the running system, the model is then used to feed back the current weather type; weather types include sunny, few clouds, cloudy, and so on.
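The key building block of such a classifier, the residual (skip) connection, can be sketched as follows. This is a minimal pure-Python illustration of the idea, not the patent's actual network; the toy input, weights and dimensions are assumptions.

```python
def relu(x):
    # Element-wise ReLU activation.
    return [max(0.0, v) for v in x]

def residual_block(x, weights):
    """One toy residual block: output = ReLU(W.x) + x.

    The skip connection adds the input back onto the transformed
    signal, which is what lets residual networks be trained very deep.
    """
    transformed = [sum(w * v for w, v in zip(row, x)) for row in weights]
    return [a + v for a, v in zip(relu(transformed), x)]

# A 3-dimensional toy input with diagonal toy weights.
x = [1.0, -2.0, 3.0]
W = [[0.5, 0.0, 0.0],
     [0.0, 0.5, 0.0],
     [0.0, 0.0, 0.5]]
y = residual_block(x, W)  # ReLU([0.5, -1.0, 1.5]) + x = [1.5, -2.0, 4.5]
```

Even when the transformation contributes nothing (the ReLU zeroes a component), the input passes through unchanged along the skip path.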
S3, image rectification. When the feedback result in S2 is few clouds or cloudy, the sky image is rectified; when the feedback result in S2 is a clear day, an overcast day or another weather type, image rectification is not performed and the subsequent steps stop to wait for the next feedback type. Preferably, the image rectification comprises the following steps:
s31, binocular camera distortion correction: camera imaging is a process of converting points from the world coordinate system into the camera coordinate system, projecting them to obtain the image coordinate system, and further converting into the pixel coordinate system. Distortion is introduced by lens precision and manufacturing process (distortion meaning that a straight line in the world coordinate system is no longer imaged as a straight line in the other coordinate systems).
In this embodiment, the computer vision library OpenCV and the checkerboard calibration method are used to calibrate the distorted images. The specific correction method is: collect images of a checkerboard reference object from multiple angles, call OpenCV functions to compute the image size, the intrinsic parameter matrix and the distortion coefficient matrix, and then run the program to undistort the deformed images.
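The distortion coefficients estimated by checkerboard calibration describe, among other terms, a radial distortion model; a sketch of the two-term radial model is shown below. The coefficient values are assumed toy numbers, and undistortion amounts to inverting this mapping.

```python
def radial_distort(x, y, k1, k2):
    """Apply the two-term radial distortion model used in camera calibration.

    (x, y) are normalized image coordinates relative to the principal
    point; k1 and k2 are the radial coefficients that the checkerboard
    calibration step estimates.
    """
    r2 = x * x + y * y
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    return x * scale, y * scale

# A point on the optical axis is unaffected; off-axis points shift radially.
center = radial_distort(0.0, 0.0, k1=-0.2, k2=0.05)
edge = radial_distort(0.5, 0.5, k1=-0.2, k2=0.05)
```

With a negative k1 (barrel distortion), points far from the centre are pulled inward, which is why straight lines bow outward in the raw image.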
S32, binocular camera parallel rectification: the system places high demands on camera erection; the two camera CMOS sensor planes should be rigidly parallel at the same height, which is difficult to achieve in construction. To ease construction and reduce erection difficulty, this embodiment rectifies the CMOS alignment by rotating the captured photos. On the distortion-corrected photos, the Scale-Invariant Feature Transform (SIFT) algorithm finds two pairs of feature points and connects them; the right camera's photo is rotated about its optical centre until its feature-point line is parallel to the feature-point line of the other photo. SIFT is then used to find one further pair of feature points, and both photos are rotated about their optical centres until the matched feature points of the two photos share the same vertical-axis pixel coordinate, at which point the camera CMOS planes are considered rigidly parallel. The two rotation angles are recorded, and thereafter the pictures taken by the two cameras are corrected by these angles without human intervention.
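The core geometric operation in this rotation-based rectification — computing the angle that makes one feature-point line parallel to another, then rotating the image about its optical centre — can be sketched as follows. The point coordinates are assumed toy values.

```python
import math

def rotation_to_align(p1, p2, q1, q2):
    """Angle (radians) that rotates segment q1-q2 parallel to segment p1-p2.

    p1-p2 is a feature-point line in the left image, q1-q2 the matched
    line in the right image; rotating the right image by this angle about
    its optical centre makes the two lines parallel.
    """
    a = math.atan2(p2[1] - p1[1], p2[0] - p1[0])
    b = math.atan2(q2[1] - q1[1], q2[0] - q1[0])
    return a - b

def rotate(point, center, angle):
    # Rotate a point about a centre (the optical centre) by angle radians.
    dx, dy = point[0] - center[0], point[1] - center[1]
    c, s = math.cos(angle), math.sin(angle)
    return (center[0] + c * dx - s * dy, center[1] + s * dx + c * dy)

# Left-image line is horizontal; right-image line is tilted 45 degrees,
# so the correction angle is -45 degrees about the optical centre (0, 0).
angle = rotation_to_align((0, 0), (10, 0), (0, 0), (10, 10))
p = rotate((10, 10), (0, 0), angle)  # lands back on the horizontal axis
```

In the described method this angle is computed once during set-up and then applied to every subsequent frame.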
S4, cloud contour segmentation: cloud contour extraction is performed on the rectified binocular images obtained in S3, distinguishing the cloud and non-cloud parts of each image — that is, classifying every pixel — so as to form cloud boundaries, and finally marking the pixels on each cloud boundary. Preferably, the contour extraction uses a semantic segmentation algorithm. Several neural networks can realise semantic segmentation; in this embodiment a U-Net network is used, and the segmentation algorithm model is built on U-Net. The test result is shown in fig. 3: the left image is the grayscale of a captured photo and the right image is the segmented contour map, with cloud parts marked white and sky parts marked black. The segmentation boundaries are visibly clear, so after calibration the boundary lines can be used for subsequent data calculation.
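The final step above — marking the pixels on a cloud boundary once every pixel has been classified — can be sketched in plain Python. The U-Net itself produces the mask; here the mask is a toy array, and a 4-neighbourhood definition of "boundary" is an assumption.

```python
def boundary_pixels(mask):
    """Return cloud pixels that touch a non-cloud 4-neighbour or the image edge.

    mask is a 2-D list of 0/1 values, 1 = cloud. This marks the cloud
    boundary after per-pixel classification, as in the contour step.
    """
    rows, cols = len(mask), len(mask[0])
    boundary = set()
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] != 1:
                continue
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if not (0 <= nr < rows and 0 <= nc < cols) or mask[nr][nc] == 0:
                    boundary.add((r, c))
                    break
    return boundary

# A 5x5 mask with a 3x3 cloud; only the centre pixel (2, 2) is interior.
mask = [[0, 0, 0, 0, 0],
        [0, 1, 1, 1, 0],
        [0, 1, 1, 1, 0],
        [0, 1, 1, 1, 0],
        [0, 0, 0, 0, 0]]
edge = boundary_pixels(mask)
```

The resulting boundary set is exactly the contour that later feeds the centroid and projection calculations.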
S5, image matching: the binocular images from S4 are matched; specifically, corresponding points are identified in the images shot by the left and right cameras through a matching algorithm. Many matching algorithms exist; this embodiment uses the Scale-Invariant Feature Transform (SIFT) algorithm, which offers good stability and invariance, adapting to rotation, scale change and brightness change, and resisting viewpoint change, affine transformation and noise to a certain extent. Considering that most images contain more than one cloud, SIFT's strong distinctiveness allows fast and accurate discrimination within a large feature database, so every cloud can be matched; even with only one cloud, a large number of feature vectors is generated. Thanks to the algorithm's speed and extensibility, it can also be combined with other forms of feature vectors to improve processing speed and efficiency.
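SIFT itself requires a vision library, but the matching strategy that makes such descriptors discriminative across several similar clouds — nearest-neighbour search with Lowe's ratio test — can be sketched with toy 2-D "descriptors" (the descriptor values and the 0.75 ratio are assumptions, though 0.75 is a commonly used default).

```python
def match_descriptors(left, right, ratio=0.75):
    """Nearest-neighbour descriptor matching with Lowe's ratio test.

    A match is accepted only if the best distance is clearly smaller
    than the second best; ambiguous points, such as features on two
    near-identical clouds, are rejected rather than mismatched.
    """
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    matches = []
    for i, d in enumerate(left):
        scored = sorted(range(len(right)), key=lambda j: dist2(d, right[j]))
        best, second = scored[0], scored[1]
        if dist2(d, right[best]) < (ratio ** 2) * dist2(d, right[second]):
            matches.append((i, best))
    return matches

# The first left descriptor matches right[1] unambiguously; the second is
# ambiguous (two near-identical candidates) and is therefore rejected.
left = [(0.0, 0.0), (5.0, 5.0)]
right = [(9.0, 9.0), (0.1, 0.0), (5.0, 5.1), (5.1, 5.0)]
matches = match_descriptors(left, right)
```

Rejecting ambiguous candidates is what keeps the later parallax computation from pairing points on different clouds.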
S6, three-dimensional reconstruction: according to the matching result of S5, three-dimensional cloud data are obtained through parallax computation and geometric transformation. In general, three-dimensional data are obtained by computing a depth map of the cloud from the matched binocular images via parallax, then transforming the depth map by coordinates into a height map of the cloud in the geodetic coordinate system, which amounts to a three-dimensional reconstruction of the visible part of the cloud base. That approach, however, is computationally heavy and affects prediction efficiency, so in this embodiment the calculation preferably includes the following steps:
s61, select the feature points matched by the SIFT algorithm in S5 in the binocular images, obtain the depth at each point position by parallax calculation, and obtain the height of each feature point in the geodetic coordinate system by coordinate transformation;
s62, because clouds differ in height, thickness and shape, the optimal approach would determine the true height of the cloud-layer edge so as to accurately calculate the coverage of the ground cloud shadow. However, the cloud edge is blurred in the image and has few feature points, and further processing easily introduces noise and reduces accuracy. Therefore, a subset of the feature points from S61 is selected and, after outliers are removed, a plane is fitted to obtain the average height of the cloud-layer base.
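The stereo depth relation used in S61 and the outlier-robust averaging of S62 can be sketched as follows. The focal length, baseline, disparity and height values are assumed toy numbers, and the 1.5-standard-deviation outlier cutoff is an assumption standing in for whatever rejection rule the implementation uses.

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Standard stereo relation: depth Z = f * B / d.

    focal_px is the focal length in pixels, baseline_m the distance
    between the two cameras, and disparity_px the horizontal pixel shift
    of a matched feature point between the left and right images.
    """
    return focal_px * baseline_m / disparity_px

def mean_height(heights, max_dev=1.5):
    """Average feature-point heights after discarding outliers.

    Points further than max_dev standard deviations from the mean are
    dropped, approximating the fitted average height of the cloud base.
    """
    mu = sum(heights) / len(heights)
    sd = (sum((h - mu) ** 2 for h in heights) / len(heights)) ** 0.5
    kept = [h for h in heights if sd == 0 or abs(h - mu) <= max_dev * sd]
    return sum(kept) / len(kept)

# Toy numbers: 1000 px focal length, 10 m baseline, 8 px disparity -> 1250 m.
z = depth_from_disparity(8.0, 1000.0, 10.0)
# One spurious 5000 m point (e.g. a mismatched edge feature) is rejected.
avg = mean_height([1200.0, 1250.0, 1230.0, 1210.0, 5000.0])
```

Dropping the blurred-edge outlier before averaging is exactly why S62 restricts itself to well-matched interior feature points.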
Further, the three-dimensional data also include the longitude and latitude of the cloud. According to the contour extracted in S4, the centroid coordinates of the cloud are obtained by the plane centroid formula, and the longitude and latitude of the cloud centroid are obtained by geometric conversion.
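The plane centroid formula over a cloud's pixel set reduces to the mean of the pixel coordinates; a minimal sketch (the geometric conversion to longitude and latitude is omitted, and the toy pixel blob is an assumption):

```python
def pixel_centroid(pixels):
    """Plane centroid of a cloud's pixel set: mean row and mean column.

    In the described method, this pixel centroid is subsequently
    converted geometrically into the longitude and latitude of the
    cloud's centre of mass.
    """
    n = len(pixels)
    return (sum(r for r, _ in pixels) / n, sum(c for _, c in pixels) / n)

# A symmetric 3x3 pixel blob centred on row 2, column 3.
blob = [(r, c) for r in (1, 2, 3) for c in (2, 3, 4)]
centre = pixel_centroid(blob)
```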
S7, cloud movement trajectory prediction: the three-dimensional data of S6 are fed into a machine learning model to obtain the predicted trajectory of cloud movement over the next time period. Preferably, the trajectory is predicted with a recurrent neural network, LSTM (Long Short-Term Memory). The recurrent neural network model is pre-trained for trajectory-point value prediction; the training set consists of a number of arrays, the input of each array comprising the three-dimensional data and meteorological data, where the meteorological data include, but are not limited to, one or more of the wind speed, wind direction, air pressure and temperature at the moment corresponding to each set of three-dimensional data. In this embodiment, to increase the system's computation speed, the training set does not include high-altitude meteorological data such as wind speed, wind direction, air pressure and temperature; the current cloud is represented only by the centroid computed from its pixel contour set in S6. From photos taken at intervals of several seconds, complete clouds appearing in n consecutive pictures (the number of pictures need not be fixed) are screened manually, and the centroid method is applied to each picture to obtain one data record. Each record contains n triplets of <cloud centroid longitude, cloud centroid latitude, shooting time>, the shooting times being recorded as time 1, time 2, ..., time n. In total m records are collected, of which the first m-k serve as the training data set and the last k as the test data set, with k ≤ 15% of m.
Each record is split into two groups: the centroid longitudes and latitudes from time 1 to time n-1 serve as the training input set, and those from time 2 to time n serve as the verification result set, with which the cloud movement model is trained. For the test data set, the centroid longitudes and latitudes from time 1 to time n-1 of each record are used as input, and the model outputs the predicted centroid longitude and latitude for the next moment corresponding to each moment. The training effect is shown in fig. 4: in the three trajectory predictions a, b and c, the real centroid positions of the cloud (dots) are recorded at times t0-t10, after which the predicted centroid positions (triangles) at times t1-t10 are marked in the image. The trajectory points are predicted accurately, and some predicted centroid positions coincide with the actual ones, showing that the prediction model built by this method completes the prediction task well. Finally, the horizontal and vertical displacement of the centroid is calculated and added to every point of the cloud's pixel contour set to predict the cloud's position after movement (shape changes of the cloud are by default ignored over the short time span).
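The one-step-ahead input/target split described above — inputs at times 1 to n-1, targets at times 2 to n — can be sketched as follows (the toy centroid track is an assumption; the actual model consuming these pairs is the LSTM):

```python
def make_training_pairs(track):
    """Build the split used to train the trajectory model.

    Inputs are the centroid (lon, lat) at times 1..n-1, targets the
    centroid at times 2..n, so the model learns one-step-ahead prediction.
    """
    return track[:-1], track[1:]

# A toy centroid track of n = 4 time steps (longitude, latitude pairs).
track = [(120.0, 36.0), (120.1, 36.0), (120.2, 36.1), (120.3, 36.1)]
inputs, targets = make_training_pairs(track)
```

Each input position is paired with the position one interval later, which is exactly the supervision signal verified against fig. 4.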
S8, cloud shadow trajectory prediction: from parameters such as the longitude and latitude of the station, the solar declination of the day, the equation of time and the hour angle, the azimuth and altitude of the sun relative to a fixed place at a given moment can be calculated accurately. Combining the cloud movement trajectory from S7 with the contour extracted in S4, the predicted trajectory of the ground cloud shadow's movement (including the coverage area) is obtained by projection transformation.
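The projection transformation reduces, for each cloud point, to a horizontal shadow displacement of h / tan(altitude) in the direction opposite the sun's azimuth. A sketch, with the convention (an assumption) that azimuth is measured clockwise from north and the result is an (east, north) offset in metres:

```python
import math

def shadow_offset(cloud_height_m, sun_altitude_deg, sun_azimuth_deg):
    """Horizontal displacement of a cloud point's ground shadow.

    The shadow lies h / tan(altitude) metres from the point directly
    below the cloud, in the direction opposite the sun's azimuth.
    """
    d = cloud_height_m / math.tan(math.radians(sun_altitude_deg))
    az = math.radians(sun_azimuth_deg)
    # Step away from the sun: negate the unit vector toward the sun's azimuth.
    return (-d * math.sin(az), -d * math.cos(az))

# Sun at 45 degrees altitude due south (azimuth 180): a cloud point at
# 1000 m casts its shadow about 1000 m due north of the point beneath it.
east, north = shadow_offset(1000.0, 45.0, 180.0)
```

Applying this offset to every point of the predicted cloud contour yields the predicted ground coverage of the shadow.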
Example 2
In this embodiment, building on the embodiment above, the predicted trajectory is further corrected by adding the following steps in S7:
s71, acquire satellite cloud images over consecutive time periods and divide them into grids; determine, by longitude and latitude, the satellite-image grid cell that coincides with the shooting area; extract the cloud contours with the method of S4 and compute the plane centroid position of each cloud; form a training set from the centroid positions over the consecutive periods, and establish a prediction model of cloud movement above the binocular camera;
s72, match the clouds by their longitude and latitude to establish the correspondence between the satellite cloud image and each cloud in the captured images, and perform a weighted path fit of the second predicted trajectory obtained from the cloud movement prediction model of S71 with the predicted trajectory of embodiment 1, obtaining a corrected predicted trajectory and further improving prediction accuracy.
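The weighted fit of the two trajectories can be sketched as a point-wise weighted average. The weight value is an assumed tuning parameter, not one given in the patent:

```python
def fuse_tracks(primary, secondary, w_primary=0.7):
    """Point-wise weighted fit of two predicted trajectories.

    primary is the binocular-vision prediction, secondary the
    satellite-cloud-image prediction; w_primary is an assumed tuning
    weight, and the tracks are assumed to be time-aligned.
    """
    w2 = 1.0 - w_primary
    return [(w_primary * x1 + w2 * x2, w_primary * y1 + w2 * y2)
            for (x1, y1), (x2, y2) in zip(primary, secondary)]

# Two toy 2-point trajectories (longitude, latitude), equal weighting.
fused = fuse_tracks([(120.0, 36.0), (120.2, 36.1)],
                    [(120.1, 36.0), (120.3, 36.2)], w_primary=0.5)
```

Weighting toward the binocular track keeps the fine local geometry while letting the satellite track correct slow drift.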
In the description of the present invention, it is to be understood that the terms "left", "right", "upper", "lower", "top", "bottom", "front", "rear", "inner", "outer", "back", "middle", and the like, indicate orientations and positional relationships based on those shown in the drawings, are only for convenience in describing the present invention and simplifying the description, and do not indicate or imply that the device or element being referred to must have a particular orientation, be constructed and operated in a particular orientation, and thus, should not be construed as limiting the present invention.
The above description is only an exemplary embodiment of the invention and does not thereby limit its scope; equivalent component substitutions, or equivalent changes and modifications made within the protection scope of the invention, shall be covered by its claims.

Claims (9)

1. A cloud shadow track prediction method based on machine vision, characterized by comprising the following steps:
S1, image acquisition: capturing sky images of the area with a binocular camera;
S2, image preprocessing: analyzing the sky images obtained in S1 with a machine learning model and feeding back the current weather type;
S3, image rectification: when the weather type fed back in S2 is slightly cloudy or cloudy, rectifying the sky images; when it is neither, leaving the images unprocessed, stopping the subsequent steps, and waiting for the next feedback result;
S4, cloud contour segmentation: extracting cloud contours from the rectified images obtained in S3;
S5, image matching: matching the binocular image pair from S4;
S6, three-dimensional reconstruction: obtaining three-dimensional cloud data through disparity computation and geometric transformation based on the matching result of S5;
S7, cloud movement track prediction: feeding the three-dimensional data from S6 into a machine learning model to obtain the predicted track of cloud movement over the next time period;
and S8, cloud shadow track prediction: obtaining the predicted track of the ground cloud shadow by projecting the cloud movement track from S7 onto the ground.
2. The method of claim 1, wherein the machine learning model in S2 is a residual neural network model that has been trained in advance for weather type classification on an image set.
3. The method according to claim 1, wherein the image rectification in S3 comprises the following steps:
S31, correcting the lens distortion of the binocular camera;
and S32, performing parallel (epipolar) rectification of the binocular camera images.
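The distortion correction of S31 is commonly done with the Brown radial-distortion model. The sketch below is illustrative only: the intrinsics (`fx`, `fy`, `cx`, `cy`), coefficients `k1`/`k2`, and the one-step approximate inversion are assumptions, not details taken from the patent:

```python
import numpy as np

def undistort_points(pts, k1, k2, fx, fy, cx, cy):
    """Remove radial distortion (Brown model, k1/k2 terms only) from pixel points.

    pts: N x 2 array of distorted pixel coordinates.
    Uses a single fixed-point step: the radial scale is evaluated at the
    distorted radius, which is a common first-order approximation.
    """
    pts = np.asarray(pts, dtype=float)
    # Normalize to camera coordinates
    x = (pts[:, 0] - cx) / fx
    y = (pts[:, 1] - cy) / fy
    r2 = x * x + y * y
    # distorted = undistorted * (1 + k1*r^2 + k2*r^4); divide to invert approximately
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    xu, yu = x / scale, y / scale
    # Back to pixel coordinates
    return np.stack([xu * fx + cx, yu * fy + cy], axis=1)
```

A production pipeline would more likely use a calibrated camera model and a library routine (e.g. an OpenCV undistortion call) rather than this hand-rolled approximation.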
4. The cloud shadow track prediction method based on machine vision according to claim 1, wherein a semantic segmentation algorithm is used for cloud contour extraction in S4, the segmentation model being built on a U-Net network.
5. The method of claim 1, wherein the feature points for image matching in S5 are obtained by the SIFT algorithm.
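SIFT matching as in claim 5 is typically followed by nearest-neighbor descriptor matching with Lowe's ratio test. The patent does not describe this stage, so the function name and ratio threshold in this minimal numpy sketch are assumptions:

```python
import numpy as np

def match_descriptors(desc_a, desc_b, ratio=0.75):
    """Ratio-test matching between two descriptor sets (one row per feature).

    A match (i, j) is kept only when the best distance from desc_a[i] to
    desc_b is clearly smaller than the second-best (Lowe's ratio test).
    """
    matches = []
    for i, d in enumerate(desc_a):
        dist = np.linalg.norm(desc_b - d, axis=1)   # distances to all candidates
        order = np.argsort(dist)
        if len(order) > 1 and dist[order[0]] < ratio * dist[order[1]]:
            matches.append((i, int(order[0])))
    return matches
```

With real SIFT descriptors (128-dimensional), the same logic applies; a k-d tree or FLANN index would replace the brute-force distance computation for speed.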
6. The method of claim 1, wherein the three-dimensional data in S6 includes the cloud height, and the cloud height calculation comprises the following steps:
S61, obtaining a depth map of the cloud through disparity computation on the matched binocular images from S5, and converting the depth map into a height map through coordinate transformation;
and S62, fitting the heights of the feature points in the height map to obtain the average height of the cloud.
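The depth computation of S61 follows the standard stereo relation Z = f·B/d (focal length in pixels, baseline in metres, disparity in pixels). The sketch below assumes an upward-looking camera, so that depth coincides with height, and stands in for the fitting of S62 by averaging the finite feature-point heights; both simplifications are assumptions, not the patented coordinate transformation:

```python
import numpy as np

def disparity_to_height(disp, focal_px, baseline_m):
    """Height from stereo disparity, for a camera looking straight up.

    Z = f * B / d; zero disparity maps to infinity (cloud too far to resolve).
    """
    disp = np.asarray(disp, dtype=float)
    with np.errstate(divide="ignore"):
        return focal_px * baseline_m / disp

def mean_cloud_height(heights):
    """Average height over matched feature points, ignoring inf/NaN entries."""
    h = np.asarray(heights, dtype=float)
    return float(np.mean(h[np.isfinite(h)]))
```

For example, a 2-pixel disparity with a 1000-pixel focal length and a 2 m baseline gives a 1000 m cloud base, which illustrates how strongly the achievable height range depends on the camera baseline.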
7. The method according to claim 1 or 6, wherein the three-dimensional data further comprises the cloud longitude and latitude; based on the contour extracted in S4, the centroid coordinates of the cloud are obtained through the plane centroid formula, and the longitude and latitude of the cloud centroid are obtained through geometric conversion.
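The plane centroid formula of claim 7 can be realized with the shoelace (polygon area) formula, and the geometric conversion to longitude/latitude approximated with an equirectangular offset from a reference point. The function names, the metres-per-degree constant, and the flat-earth approximation are illustrative assumptions:

```python
import numpy as np

def plane_centroid(contour):
    """Area centroid of a polygon contour ((x, y) vertices, shoelace formula)."""
    pts = np.asarray(contour, dtype=float)
    x, y = pts[:, 0], pts[:, 1]
    xn, yn = np.roll(x, -1), np.roll(y, -1)      # next vertex, wrapping around
    cross = x * yn - xn * y
    area = cross.sum() / 2.0
    cx = ((x + xn) * cross).sum() / (6.0 * area)
    cy = ((y + yn) * cross).sum() / (6.0 * area)
    return cx, cy

def offset_to_latlon(lat0, lon0, east_m, north_m):
    """Approximate lat/lon of a point offset (east, north) metres from a reference.

    Uses ~111320 m per degree of latitude; valid for small local offsets.
    """
    lat = lat0 + north_m / 111320.0
    lon = lon0 + east_m / (111320.0 * np.cos(np.radians(lat0)))
    return lat, lon
```

The area centroid is preferable to a plain vertex average here because cloud contours from segmentation are densely and unevenly sampled polygons.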
8. The method according to claim 7, wherein the machine learning model in S7 is a recurrent neural network model; the recurrent neural network model is trained in advance for track point value prediction, the training set consisting of a plurality of arrays whose input quantities comprise the three-dimensional data and meteorological data; the meteorological data includes, but is not limited to, one or more of the wind speed, wind direction, air pressure, and temperature at the moment corresponding to each set of three-dimensional data.
9. The method of claim 7, wherein S7 further comprises the following steps for correcting the predicted track:
S71, acquiring satellite cloud images over consecutive time periods and building a prediction model for cloud movement above the area shot by the binocular camera;
and S72, performing weighted fitting of the second predicted track obtained from the cloud movement prediction model of S71 with the predicted track to obtain the corrected predicted track.
CN202111201206.7A 2021-10-15 2021-10-15 Cloud shadow track prediction method based on machine vision Pending CN113936031A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111201206.7A CN113936031A (en) 2021-10-15 2021-10-15 Cloud shadow track prediction method based on machine vision

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111201206.7A CN113936031A (en) 2021-10-15 2021-10-15 Cloud shadow track prediction method based on machine vision

Publications (1)

Publication Number Publication Date
CN113936031A true CN113936031A (en) 2022-01-14

Family

ID=79279559

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111201206.7A Pending CN113936031A (en) 2021-10-15 2021-10-15 Cloud shadow track prediction method based on machine vision

Country Status (1)

Country Link
CN (1) CN113936031A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114821136A (en) * 2022-04-11 2022-07-29 成都信息工程大学 Self-adaptive cloud particle sub-image data processing method
CN114821136B (en) * 2022-04-11 2023-04-21 成都信息工程大学 Self-adaptive cloud microparticle image data processing method
CN114972997A (en) * 2022-06-02 2022-08-30 中民新能宁夏盐池光电能源有限公司 Tracking type photovoltaic power generation optimization method based on 3D cloud layer reconstruction of all-sky image
CN114972997B (en) * 2022-06-02 2024-05-24 中民新能宁夏盐池光电能源有限公司 Tracking type photovoltaic power generation optimization method based on all-sky image 3D cloud layer reconstruction

Similar Documents

Publication Publication Date Title
CN109416413B (en) Solar energy forecast
Chow et al. Intra-hour forecasting with a total sky imager at the UC San Diego solar energy testbed
Nguyen et al. Stereographic methods for cloud base height determination using two sky imagers
CN109840553B (en) Extraction method and system of cultivated land crop type, storage medium and electronic equipment
CN110514298B (en) Solar radiation intensity calculation method based on foundation cloud picture
CN112766274A (en) Water gauge image water level automatic reading method and system based on Mask RCNN algorithm
WO2015157643A1 (en) Solar energy forecasting
CN113159466B (en) Short-time photovoltaic power generation prediction system and method
CN106485751B (en) Unmanned aerial vehicle photographic imaging and data processing method and system applied to foundation pile detection
CN113936031A (en) Cloud shadow track prediction method based on machine vision
CN110889829A (en) Monocular distance measurement method based on fisheye lens
CN112348775B (en) Vehicle-mounted looking-around-based pavement pit detection system and method
CN111723464A (en) Typhoon elliptic wind field parametric simulation method based on remote sensing image characteristics
WO2017193172A1 "Solar power forecasting"
CN110569797A (en) earth stationary orbit satellite image forest fire detection method, system and storage medium thereof
CN112801184A (en) Cloud tracking method, system and device
CN111899345B (en) Three-dimensional reconstruction method based on 2D visual image
Dissawa et al. Cross-correlation based cloud motion estimation for short-term solar irradiation predictions
CN112179693A (en) Photovoltaic tracking support fault detection method and device based on artificial intelligence
CN112132900A (en) Visual repositioning method and system
CN115984768A (en) Multi-target pedestrian real-time detection positioning method based on fixed monocular camera
Zhang et al. Intrahour cloud tracking based on optical flow
CN111583298B (en) Short-time cloud picture tracking method based on optical flow method
Arrais et al. Systematic Literature Review on Ground-Based Cloud Tracking Methods for Nowcasting and Short-term Forecasting
CN114972997B (en) Tracking type photovoltaic power generation optimization method based on all-sky image 3D cloud layer reconstruction

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination