CN110728333B - Sunshine duration analysis method and device, electronic equipment and storage medium - Google Patents

Sunshine duration analysis method and device, electronic equipment and storage medium

Info

Publication number
CN110728333B
CN110728333B (application CN201911315035.3A)
Authority
CN
China
Prior art keywords
image
network model
features
self
feature
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911315035.3A
Other languages
Chinese (zh)
Other versions
CN110728333A (en)
Inventor
张超
胡浩
黄聿
赵茜
杨超龙
胡盼盼
佟博
黄仲强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Bozhilin Robot Co Ltd
Original Assignee
Guangdong Bozhilin Robot Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Bozhilin Robot Co Ltd filed Critical Guangdong Bozhilin Robot Co Ltd
Priority to CN201911315035.3A priority Critical patent/CN110728333B/en
Publication of CN110728333A publication Critical patent/CN110728333A/en
Application granted granted Critical
Publication of CN110728333B publication Critical patent/CN110728333B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/50Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis
    • G06V10/507Summing image-intensity values; Histogram projection analysis
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/56Extraction of image or video features relating to colour

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Multimedia (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Evolutionary Biology (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Image Analysis (AREA)

Abstract

The embodiment of the application discloses a sunshine duration analysis method and device, electronic equipment and a storage medium. The method includes: acquiring a target image, the target image being a grayscale image including building information; inputting the target image into a self-coding network model, performing feature extraction on the target image based on the self-coding network model, weighting the extracted features, and performing feature reconstruction on the weighted features to obtain a reference image; and outputting the reference image. Because the target image is input into a self-coding network model pre-trained on grayscale images that carry building information and are labeled with sunshine duration, features of the target image can be extracted based on the self-coding network model, the extracted features weighted, and the weighted features reconstructed to obtain a reference image containing image features of the sunshine duration information of the building, thereby improving the speed and accuracy of sunshine duration analysis.

Description

Sunshine duration analysis method and device, electronic equipment and storage medium
Technical Field
The present application relates to the field of sunshine analysis technologies, and in particular, to a sunshine duration analysis method and apparatus, an electronic device, and a storage medium.
Background
Building sunshine, which studies the relationship between sunlight and buildings and the application of sunlight in buildings according to the direct-sunlight principle and sunshine standards, is an important subject in building optics. National regulations require that residential buildings receive a minimum sunshine duration of not less than two hours on severe cold days. A good building sunshine duration makes full use of sunlight to satisfy indoor lighting and sanitary requirements while preventing indoor overheating; good sunshine conditions are therefore not only a factor buyers weigh when purchasing a house but also an important consideration in developers' pricing. However, with the continuous advance of urban construction, urban land has become scarce, the trend toward high-rise urban buildings has accelerated, and some high-rise buildings deprive people of their right to daylight. How to quickly and accurately predict building sunshine duration, so that constructed houses meet the residential sunshine duration standard specified by the regulations, has become a problem to be solved urgently.
Disclosure of Invention
In view of the above problems, the present application proposes a sunshine duration analysis method, apparatus, electronic device, and storage medium to improve the above problems.
In a first aspect, an embodiment of the present application provides a sunshine duration analysis method, the method including: acquiring a target image, the target image being a grayscale image including building information; inputting the target image into a self-coding network model, the self-coding network model being obtained by training on grayscale images carrying building information and labeled with sunshine duration, performing feature extraction on the target image based on the self-coding network model, weighting the extracted features, and performing feature reconstruction on the weighted features to obtain a reference image, the reference image including image features used for representing the sunshine duration information of the building; and outputting the reference image.
In a second aspect, an embodiment of the present application provides a sunshine duration analysis apparatus, including: a first acquisition module, configured to acquire a target image, the target image being a grayscale image including building information; a processing module, configured to input the target image into a self-coding network model, the self-coding network model being obtained by training on grayscale images carrying building information and labeled with sunshine duration, perform feature extraction on the target image based on the self-coding network model, weight the extracted features, and perform feature reconstruction on the weighted features to obtain a reference image, the reference image including image features used for representing the sunshine duration information of the building; and an output module, configured to output the reference image.
In a third aspect, an embodiment of the present application provides an electronic device, including one or more processors and a memory; one or more programs stored in the memory and configured to be executed by the one or more processors, the one or more programs configured to perform the method of the first aspect described above.
In a fourth aspect, the present application provides a computer-readable storage medium, in which a program code is stored, where the program code executes the method of the first aspect.
The application provides a sunshine duration analysis method and device, electronic equipment and a storage medium, and relates to the technical field of sunshine analysis. The method includes: acquiring a target image, the target image being a grayscale image including building information; inputting the target image into a self-coding network model, the self-coding network model being obtained by training on grayscale images carrying building information and labeled with sunshine duration; performing feature extraction on the target image based on the self-coding network model, weighting the extracted features, and performing feature reconstruction on the weighted features to obtain a reference image; and outputting the reference image. Because the target image is input into a self-coding network model pre-trained on grayscale images that carry building information and are labeled with sunshine duration, features of the target image can be extracted based on the self-coding network model, the extracted features weighted, and the weighted features reconstructed to obtain a reference image containing image features of the sunshine duration information of the building, thereby improving the speed and accuracy of sunshine duration analysis.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
Fig. 1 shows a flowchart of a method for sunshine duration analysis according to an embodiment of the present application.
Fig. 2 shows an example graph of a gray scale map including building information proposed in an embodiment of the present application.
Fig. 3 is a diagram showing an example of a reference image in which sunshine duration information is predicted, which is proposed by an embodiment of the present application.
Fig. 4 is a flowchart of a method for sunshine duration analysis according to another embodiment of the present application.
Fig. 5 shows a schematic structural diagram of a self-coding network model proposed in an embodiment of the present application.
Fig. 6 shows a flowchart of the method of step S230 in fig. 4.
Fig. 7 shows a schematic structural diagram of a feature weighting module proposed in an embodiment of the present application.
Fig. 8 shows a flowchart of the method of step S250 in fig. 4.
Fig. 9 shows a schematic structural diagram of a feature pyramid module proposed in the embodiment of the present application.
Fig. 10 is a flowchart of a method of sunshine duration analysis proposed by a further embodiment of the present application.
Fig. 11 shows a method flowchart of step S310 in fig. 10.
Fig. 12 is a diagram illustrating an example of the building information and sunshine duration data provided in the embodiment of the present application.
Fig. 13 is a block diagram showing the configuration of a sunshine duration analyzing apparatus according to the embodiment of the present application.
Fig. 14 shows a block diagram of the configuration of an electronic device of the present application for executing the sunshine duration analysis method according to the embodiment of the present application.
Fig. 15 is a storage unit of an embodiment of the present application for storing or carrying program code for implementing the sunshine duration analysis method according to the embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
With the acceleration of urbanization and rising levels of economic development in recent years, the scale of urban land use has continued to expand, and the scarcity of land resources has become increasingly prominent. Increasingly tight urban land has pushed urban development from the plane toward the vertical, and intensive use of land resources has become an inevitable trend. Raising the floor-area ratio of a plot is an important measure for saving land resources, and is generally achieved by increasing the number of floors and reducing the spacing between buildings. However, as the density of high-rise building clusters in residential areas increases, buildings on some floors can no longer meet the residential sunshine duration standard specified by the regulations. National regulations require that residential buildings receive a minimum sunshine duration of not less than two hours on severe cold days, yet some high-rise buildings deprive people of their right to daylight.
As one approach, sunshine duration can be obtained with sunshine analysis software based on the AutoCAD platform. However, as data volumes keep growing, the analysis speed of existing sunshine analysis software is very slow: after a file containing building information is input, the software takes anywhere from tens of seconds to several minutes to calculate the sunshine duration of a building, which consumes too much time and degrades the user experience.
Therefore, the inventors propose in the present application a sunshine duration analysis method, apparatus, electronic device, and storage medium to address the slow analysis speed of sunshine duration.
Embodiments of the present application will be described in detail below with reference to the accompanying drawings.
Referring to fig. 1, an embodiment of the present application provides a sunshine duration analysis method, which is applicable to an electronic device, and the method includes:
step S110: and acquiring a target image.
The target image may be a grayscale image including building information. Optionally, it may include one building or a plurality of buildings. The building information may include data such as the building model, the coordinates of the building (including building coordinates), the number of buildings, the spacing between buildings, the floor height of each building, and the floor height of each unit within a building. It should be noted that the above is only an example of building information; more or less data may be included in an actual implementation, and no limitation is made here. The building information of different buildings may be the same or different.
As shown in fig. 2, the target image is a grayscale image including the information of certain buildings, from which the sunshine duration of the corresponding buildings can be predicted. Optionally, the color depth in fig. 2 may indicate the height of a building: a darker color may indicate a lower building, so 2A and 2B in fig. 2 may represent two buildings of different heights. As one way, the height of a building can be represented by the pixel value of its color block; for example, a building that is 56 meters tall can be represented by the pixel value 56, and optionally, the larger the pixel value, the lighter the color block.
When the sunshine duration of a building needs to be predicted, a target image can be acquired. As one way, the target image may be obtained by converting a file containing building information into a grayscale image by means of a linear transformation, as sketched below.
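The following is a minimal sketch of this encoding, assuming hypothetical record fields for the building data and a direct height-in-meters-to-pixel-value mapping (consistent with the 56 m → pixel value 56 example above, but otherwise an assumption rather than the patent's exact transformation):

```python
import numpy as np

def buildings_to_grayscale(buildings, canvas_hw=(512, 512)):
    """Rasterize building footprints into a grayscale target image.

    `buildings` is assumed to be a list of dicts with illustrative keys
    'row', 'col', 'rows', 'cols' (footprint in pixel coordinates) and
    'height_m' (building height in meters); these field names are not
    taken from the patent.
    """
    img = np.zeros(canvas_hw, dtype=np.uint8)
    for b in buildings:
        r, c, h, w = b["row"], b["col"], b["rows"], b["cols"]
        # Height is encoded directly as pixel intensity (56 m -> value 56),
        # so taller buildings appear lighter; values are clipped to 8 bits.
        img[r:r + h, c:c + w] = np.clip(int(b["height_m"]), 0, 255)
    return img

# Example: two buildings of different heights on an otherwise empty site.
target = buildings_to_grayscale([
    {"row": 40, "col": 60, "rows": 30, "cols": 20, "height_m": 56},
    {"row": 120, "col": 200, "rows": 25, "cols": 25, "height_m": 24},
])
```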
Step S120: inputting the target image into a self-coding network model, extracting the features of the target image based on the self-coding network model, carrying out weighting processing on the extracted features, and carrying out feature reconstruction on the weighted features to obtain a reference image.
The self-coding network model in the embodiment of the application can be obtained by training a gray level image carrying building information and marked with sunshine duration, and can be used for predicting the sunshine duration of a building. Optionally, the reference image may include an image feature for characterizing sunshine duration information of the building.
As a mode, the acquired target image may be input to a self-coding network model, so that feature extraction is performed on the target image through the self-coding network model obtained through pre-training, weighting processing is performed on the extracted features, feature reconstruction is performed on the weighted features, and a reference image including image features used for representing sunshine duration information of a building is obtained, that is, sunshine duration information of the building is obtained through analysis.
Step S130: and outputting the reference image.
The output form of the reference image is not limited.
As one way, the reference image including the image features that represent the sunshine duration information of the building may be output directly, so that the predicted sunshine duration corresponding to the building can be viewed visually and conveniently. For example, fig. 3 shows such a reference image. Optionally, each small black block (e.g., 3A) in fig. 3 may represent a building, and the shadow (e.g., 3B) of each small black block may represent the length of that building's sunshine duration; as an embodiment, a darker shadow may represent a shorter sunshine duration and a lighter shadow a longer one.
As another way, a quantitative correspondence between the image features representing the sunshine duration information of a building and the length of the sunshine duration may be predefined. For example, different shadow lengths of the small black blocks in fig. 3 correspond to different duration levels. Optionally, suppose the duration levels are defined as level A, level B and level C, where level A and level B are compliant sunshine duration levels and level C is a non-compliant level, level A > level B > level C, and the lower the level, the darker the shadow of the corresponding small black block. As one way, after the reference image is obtained, the corresponding duration level may be displayed directly on the reference image, so that whether the sunshine duration of a building is compliant can be judged visually from the displayed level, saving judgment time.
Optionally, in the above manner, the duration levels of different buildings in the reference image and the corresponding predicted sunshine durations may also be output in the form of a table, a histogram, a pie chart, or the like, so that the prediction results can be viewed quickly and conveniently, improving user experience (an illustrative mapping is sketched below).
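Purely for illustration, such a quantitative correspondence could be sketched as a simple thresholding rule; the pixel-value cut-offs and the A/B/C labels below are assumptions, since the patent does not fix a concrete quantization:

```python
def shadow_to_level(shadow_pixel_value):
    """Map a shadow pixel value to an assumed sunshine-duration level.

    A lighter shadow (higher value) is taken to mean a longer sunshine
    duration; the cut-off values 170 and 85 are purely illustrative.
    """
    if shadow_pixel_value >= 170:
        return "A"  # compliant, long sunshine duration
    if shadow_pixel_value >= 85:
        return "B"  # compliant
    return "C"      # non-compliant, short sunshine duration
```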
According to the sunshine duration analysis method, a target image is acquired, the target image being a grayscale image including building information; the target image is input into a self-coding network model, the self-coding network model being obtained by training on grayscale images carrying building information and labeled with sunshine duration; feature extraction is performed on the target image based on the self-coding network model, the extracted features are weighted, and feature reconstruction is performed on the weighted features to obtain a reference image; and the reference image is output. Because the target image is input into a self-coding network model pre-trained on grayscale images that carry building information and are labeled with sunshine duration, features of the target image can be extracted based on the self-coding network model, the extracted features weighted, and the weighted features reconstructed to obtain a reference image containing image features of the sunshine duration information of the building, thereby improving the speed and accuracy of sunshine duration analysis.
Referring to fig. 4, another embodiment of the present application provides a sunshine duration analysis method, which is applicable to an electronic device, and the method includes:
step S210: and acquiring a target image.
Step S220: and inputting the target image into a self-coding network model, and performing feature extraction on the target image based on the first network model to obtain a first feature map.
As one way, please refer to fig. 5, which shows the structure of a self-coding network model 200 in this embodiment. As shown in fig. 5, the self-coding network model 200 may include a first network model 201 and a second network model 202. The first network model 201 may be a Convolutional Neural Network (CNN) used to extract features from the input target image, and the second network model 202 may be a transposed convolutional neural network (also called a deconvolution network). Optionally, in this embodiment, the first network model 201 and the second network model 202 may be connected by skip connections, which increases the reuse of the image features of the target image.
It should be noted that the specific models of the first network model and the second network model shown in this embodiment are only examples and do not limit the present application. Alternatively, the first network model 201 and the second network model 202 may be understood as sub-network modules of the self-coding network model 200.
Optionally, the first network model 201 may include a feature weighting module, and the second network model 202 may include a feature pyramid module and an upsampling module. As one way, the target image may be input into the self-coding network model; in this case, performing feature extraction on the target image based on the first network model may be understood as follows: the convolutional neural network extracts features from the input image, that is, it analyzes the feature width, feature height and feature channels of the image features contained in the target image (for example, the image features may include, but are not limited to, the density between buildings, the number of buildings, the height of buildings, and so on), thereby obtaining a first feature map. Optionally, the first feature map may be understood as an image containing the feature channels, feature widths and feature heights of all image features of the target image. A minimal encoder sketch is given below.
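The sketch below shows one possible PyTorch reading of the first network model: a small convolutional encoder whose intermediate feature maps are retained so they can later be reused through skip connections. The number of stages and the channel widths are assumptions; the patent does not specify them.

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Convolutional encoder (first network model): extracts a feature map
    from the grayscale target image. Stage count and widths are illustrative."""

    def __init__(self, in_channels=1, widths=(32, 64, 128)):
        super().__init__()
        self.stages = nn.ModuleList()
        prev = in_channels
        for w in widths:
            self.stages.append(nn.Sequential(
                nn.Conv2d(prev, w, kernel_size=3, stride=2, padding=1),
                nn.BatchNorm2d(w),
                nn.ReLU(inplace=True),
            ))
            prev = w

    def forward(self, x):
        skips = []                 # intermediate maps kept for skip connections
        for stage in self.stages:
            x = stage(x)
            skips.append(x)
        return x, skips            # x serves as the "first feature map"
```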
Step S230: and acquiring the feature weight corresponding to the first feature map.
It should be noted that after the convolutional neural network extracts features from the target image, the obtained features are passed to the next layer of the neural network, and that next layer cannot by itself distinguish the differences between the features (i.e., their relative validity). In other words, every feature channel in the first feature map extracted by the convolutional neural network would have the same importance, which may make it impossible to analyze the sunshine duration of the building accurately, or even to predict the sunshine duration at all.
As a way to improve on this, the feature weight corresponding to the first feature map may be acquired: each feature in the first feature map is weighted, the weights of effective features are increased, and the weights of secondary features are decreased, so that the importance of each feature in the first feature map can be distinguished according to its feature weight. This is described in detail as follows:
as one way, as shown in fig. 6, step S230 may include:
step S231: and carrying out global average pooling on the first feature map to obtain a feature vector corresponding to the first feature map.
Fig. 7 is an exemplary diagram of the working principle of the feature weighting module. As shown in fig. 7, as one way, the first feature map may be processed by global average pooling in the feature weighting module to retain the significant features and reduce the feature dimension, thereby obtaining the significance feature vector corresponding to the first feature map. For the specific content and workflow of global average pooling, reference may be made to the prior art, and details are not described herein.
Step S232: and acquiring a feature weight corresponding to the first feature map based on the feature vector.
After the feature vector corresponding to the first feature map is acquired, the magnitude of each value of the feature vector may be taken as the feature weight corresponding to the first feature map.
Optionally, in order to prevent a feature weight of 0 from destroying the original image features of the first feature map, 1 may be added to all of the feature weights, that is, 1 is added to each feature weight to obtain the updated feature weights.
Step S240: and acquiring a weighted feature map based on the first feature map and the feature weight.
The weighted feature map can be understood as a feature map in which features of high significance in the first feature map are given higher weights to highlight their effect, while secondary features are given appropriately lower weights. As an embodiment, the weighted feature map may be obtained by multiplying the first feature map by the feature weights (here, the updated feature weights), as sketched below. Optionally, the completed weighted feature map is then used by the next layer of the neural network model.
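Read literally, steps S231-S240 weight each channel of the first feature map by its global-average-pooled value plus one; a minimal PyTorch sketch under that reading follows (the module name and the purely pooling-based weights are this sketch's assumptions, not the patent's exact module):

```python
import torch
import torch.nn as nn

class FeatureWeighting(nn.Module):
    """Weight each channel of the first feature map by its global-average-
    pooled value plus one (steps S231, S232 and S240)."""

    def __init__(self):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)   # global average pooling

    def forward(self, feat):
        w = self.pool(feat)   # feature vector: one value per channel
        w = w + 1.0           # +1 so no weight is zero and original features survive
        return feat * w       # weighted feature map, passed to the next layer
```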
Step S250: and performing feature reconstruction on the features in the weighted feature map based on the second network model to obtain a reference image.
The feature reconstruction can be understood as reordering the features in the weighted feature map and establishing the association between different features and different sunshine durations. The specific description is as follows:
as one way, as shown in fig. 8, step S250 may include:
step S251: and performing upsampling processing on different features in the weighted feature map in a corresponding scale based on the second network model to obtain a first image.
The first image may include a plurality of image features representing sunshine duration information, for example, if the length of sunshine duration is represented by the shade of the shadow color of a building, the first image may include the shadow feature of the building, and specific contents of the plurality of image features are not listed herein.
Please refer to fig. 9, which is an exemplary diagram of the working principle of the feature pyramid module. As shown in fig. 9, as one way, the corresponding scale may be determined according to the network layer from which each feature in the weighted feature map comes, and that scale is used as the multiple of the upsampling process (for example, as shown in fig. 9, the upsampling multiples adopted in this embodiment are 2×, 4× and 8×). Optionally, the scales of the features corresponding to different network layers are different. The different features in the weighted feature map can be upsampled by the transposed convolutional neural network with their corresponding upsampling multiples to obtain the first image. The initialization weights of the transposed convolution kernels can be obtained by bilinear interpolation. Optionally, each network layer of the convolutional neural network upsamples its corresponding features in the weighted feature map with the corresponding multiple, and the output image of a previous network layer is used as the input of the adjacent subsequent network layer.
Step S252: and carrying out fusion processing on the plurality of image features of the first image to obtain the reference image.
As one way, a plurality of image features of the first image may be feature-fused in an additive manner to construct a pyramid as shown in fig. 9, resulting in a reference image.
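A sketch of the feature-pyramid reconstruction under one reading of steps S251-S252 follows, with transposed-convolution upsampling at 2×, 4× and 8×, bilinear initialization of the transposed-convolution kernels, and additive fusion. The per-scale 1×1 projection heads, the channel counts and the exact wiring between layers are assumptions made for this sketch:

```python
import torch
import torch.nn as nn

def bilinear_kernel(channels, kernel_size):
    """Build a bilinear-interpolation kernel to initialize a transposed
    convolution (one kernel per channel, no cross-channel mixing)."""
    factor = (kernel_size + 1) // 2
    center = factor - 1 if kernel_size % 2 == 1 else factor - 0.5
    og = torch.arange(kernel_size, dtype=torch.float32)
    filt = 1 - torch.abs(og - center) / factor
    kernel = filt[:, None] * filt[None, :]
    weight = torch.zeros(channels, channels, kernel_size, kernel_size)
    for c in range(channels):
        weight[c, c] = kernel
    return weight

class PyramidDecoder(nn.Module):
    """Second-network-model sketch: upsample features from three layers by
    2x, 4x and 8x with bilinearly initialized transposed convolutions, then
    fuse the upsampled images by element-wise addition."""

    def __init__(self, channels=(32, 64, 128), out_channels=1):
        super().__init__()
        self.heads, self.ups = nn.ModuleList(), nn.ModuleList()
        for ch, factor in zip(channels, (2, 4, 8)):
            self.heads.append(nn.Conv2d(ch, out_channels, kernel_size=1))
            up = nn.ConvTranspose2d(out_channels, out_channels,
                                    kernel_size=2 * factor, stride=factor,
                                    padding=factor // 2, bias=False)
            up.weight.data.copy_(bilinear_kernel(out_channels, 2 * factor))
            self.ups.append(up)

    def forward(self, feats):
        # feats: weighted feature maps from three layers, shallowest first,
        # at 1/2, 1/4 and 1/8 of the input resolution respectively.
        outs = [up(head(f)) for f, head, up in zip(feats, self.heads, self.ups)]
        return torch.stack(outs, dim=0).sum(dim=0)   # additive fusion -> reference image
```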
Step S260: and outputting the reference image.
According to the sunshine duration analysis method, the target image is input into a self-coding network model pre-trained on grayscale images carrying building information and labeled with sunshine duration, so that features of the target image can be extracted and weighted based on the self-coding network model, and the weighted features reconstructed to obtain a reference image containing image features of the sunshine duration information of the building, thereby improving the speed and accuracy of sunshine duration analysis.
Referring to fig. 10, another embodiment of the present application provides a sunshine duration analysis method, which is applicable to an electronic device, and the method includes:
step S310: a training sample set is obtained.
In this embodiment, the training sample set includes a grayscale image carrying building information and labeled with sunshine duration. The training sample set can be used for training a neural network model to obtain a self-coding network model which can be used for predicting the sunshine duration of the building.
As one way, as shown in fig. 11, step S310 may include:
step S311: and acquiring a text file corresponding to the building information.
The file may be text data, for example, building coordinate data. The text files for different buildings may be different. Optionally, the text file corresponding to the building information may be acquired in various ways.
As an implementation manner, the text file corresponding to the building information may be obtained in a manual measurement manner, and optionally, the obtained text file may be stored in an Excel table manner, so that the text file may be directly imported into image processing software for subsequent conversion processing.
As another implementation, the text file corresponding to the building information may be obtained from a known building model, for example, the text file corresponding to the building information may be derived from the BIM model.
Step S312: and acquiring a label image matched with the text file, wherein the label image is labeled with building information and sunshine duration information.
As one mode, the text file may be input into standard sunshine prediction software, and an image labeled with sunshine information may be obtained by labeling on the basis of the text file data; optionally, this image labeled with sunshine information may be used as the labeled image matched with the text file. For example, fig. 12 shows an example labeled image of a certain building, where 31F indicates the number of floors of the building, and the other values around the building may represent sunshine duration information.
It should be noted that the labeled image is labeled with building information and sunshine duration information.
Step S313: and converting the marked image into a gray image to obtain a training sample set.
As one way, bilinear interpolation processing may be performed on the annotation image to convert the annotation image into a grayscale image. Alternatively, a large number of acquired grayscale images may be used as a training sample set.
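One reading of this step is that the labeled (annotated) image is converted to a single-channel image and resized to the working resolution with bilinear interpolation; a short OpenCV sketch under that assumption follows (the target size is illustrative):

```python
import cv2

def label_to_grayscale(label_path, size=(512, 512)):
    """Convert a labeled image to a single-channel grayscale training sample,
    resizing with bilinear interpolation; size is an illustrative assumption."""
    label = cv2.imread(label_path, cv2.IMREAD_COLOR)
    gray = cv2.cvtColor(label, cv2.COLOR_BGR2GRAY)
    return cv2.resize(gray, size, interpolation=cv2.INTER_LINEAR)
```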
Step S320: and inputting the training sample set into a machine learning model, and training the machine learning model to obtain a self-coding network model.
Optionally, in order to make the training more standard and reliable, before the training sample set is input into the machine learning model, the grayscale images may be randomly cropped to 256 × 256 pixels; the cropped images are then input into the machine learning model, and the machine learning model is trained until it converges, whereby the self-coding network model can be obtained.
As a specific implementation, when a grayscale image is input into the machine learning model for training, features of the input grayscale image are first extracted; the extracted features are then weighted by the feature weighting mechanism, which improves the discrimination and effectiveness among different features; and the weighted features are used for feature reconstruction. Specifically, the feature map obtained after weighting may be upsampled using the transposed convolutional neural network, and the upsampling may be performed multiple times; meanwhile, a pyramid network is used to upsample the corresponding features of the feature map at multiple scales (that is, sampling with different upsampling multiples improves the model's ability to predict sunshine duration), and the different upsampled features are then fused by addition to obtain a reconstructed image. Optionally, the reconstructed image is the output image of the model.
As one way, in order to improve the accuracy of the prediction, after the self-coding network model is obtained, the self-coding network model may be updated based on a preset loss parameter calculation rule. Optionally, the preset loss parameter calculation rule is given as a formula (presented as an image in the original disclosure) that computes the loss parameter Loss from the target image, the reference image, the grayscale image labeled with sunshine duration, and a weight parameter; the specific value of the weight parameter may be set according to actual conditions and is not limited herein.
As an implementation, the loss parameter Loss may be obtained based on this formula, and the self-coding network model obtained by the training may be updated based on the loss parameter Loss. Optionally, the calculated loss parameter Loss may be used for back propagation so as to update the model parameters of the self-coding network model until the model converges, whereby a self-coding network model with high prediction accuracy can be obtained.
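Putting these pieces together, a hedged training sketch might look as follows. The random 256 × 256 crop matches the preprocessing described for step S320; the loss, however, is only a plausible stand-in (a mean-squared term toward the labeled image plus a weighted mean-squared term toward the input), because the patent's actual formula is reproduced only as an embedded image and cannot be recovered from the text:

```python
import torch
import torch.nn.functional as F

def random_crop_pair(target, label, size=256):
    """Randomly crop the input image and its label at the same location
    to size x size pixels (tensors shaped ... x H x W)."""
    h, w = target.shape[-2:]
    top = torch.randint(0, h - size + 1, (1,)).item()
    left = torch.randint(0, w - size + 1, (1,)).item()
    return (target[..., top:top + size, left:left + size],
            label[..., top:top + size, left:left + size])

def training_step(model, target, label, optimizer, lam=0.1):
    """One training step on a batch (B x 1 x H x W). The loss below is an
    assumed stand-in for the patent's image-only formula, with `lam` playing
    the role of the weight parameter."""
    target, label = random_crop_pair(target, label)
    optimizer.zero_grad()
    reference = model(target)                               # reconstructed reference image
    loss = F.mse_loss(reference, label) + lam * F.mse_loss(reference, target)
    loss.backward()                                         # back-propagate the loss parameter
    optimizer.step()
    return loss.item()
```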
Step S330: and acquiring a target image.
Step S340: inputting the target image into the self-coding network model, extracting the features of the target image based on the self-coding network model, carrying out weighting processing on the extracted features, and carrying out feature reconstruction on the weighted features to obtain a reference image.
Step S350: and outputting the reference image.
According to the sunshine duration analysis method, a self-coding network model is obtained by pre-training on grayscale images carrying building information and labeled with sunshine duration; the target image is input into the self-coding network model, features of the target image are extracted based on the self-coding network model, the extracted features are weighted, and the weighted features are reconstructed to obtain a reference image containing image features of the sunshine duration information of the building, which accelerates sunshine duration prediction.
Referring to fig. 13, an sunshine duration analyzing apparatus 400 operating in an electronic device is provided in an embodiment of the present application, where the apparatus 400 includes:
the first obtaining module 410 is configured to obtain a target image, where the target image is a grayscale image including building information.
The apparatus 400 may further include a training sample set acquisition module and a model training module, as one approach. The training sample set acquisition module can be used for acquiring a training sample set before acquiring a target image, and optionally, the training sample set comprises a gray level image which carries the building information and is marked with sunshine duration. The model training module can be used for inputting the obtained training sample set into a machine learning model before obtaining the target image, and training the machine learning model to obtain a self-coding network model.
Wherein, acquiring the training sample set specifically may include: acquiring a text file corresponding to the building information; acquiring a label image matched with the text file, wherein the label image is labeled with building information and sunshine duration information; and converting the marked image into a gray image to obtain a training sample set.
Optionally, the apparatus 400 may further include a model updating module, where the model updating module may be configured to update the self-coding network model based on a preset loss parameter calculation rule after the training sample set is input to the machine learning model and the machine learning model is trained to obtain the self-coding network model.
As an implementation, updating the self-coding network model based on the preset loss parameter calculation rule may specifically include: obtaining a loss parameter based on a formula (presented as an image in the original disclosure) relating the target image, the reference image, the grayscale image labeled with sunshine duration, and a weight parameter; and updating the self-coding network model based on the loss parameter.
The processing module 420 is configured to input the target image into a self-coding network model, the self-coding network model being obtained by training on grayscale images carrying the building information and labeled with sunshine duration, perform feature extraction on the target image based on the self-coding network model, weight the extracted features, and perform feature reconstruction on the weighted features to obtain a reference image, where the reference image includes image features used for representing the sunshine duration information of the building.
Optionally, the self-coding network model includes a first network model and a second network model. As one mode, performing feature extraction on the target image based on the self-coding network model, performing weighting processing on the extracted features, and performing feature reconstruction on the weighted features to obtain a reference image may include: extracting the features of the target image based on the first network model to obtain a first feature map; acquiring a feature weight corresponding to the first feature map; acquiring a weighted feature map based on the first feature map and the feature weight; and performing feature reconstruction on the features in the weighted feature map based on the second network model to obtain a reference image.
Wherein the obtaining of the feature weight corresponding to the first feature map may include: carrying out global average pooling on the first feature map to obtain a feature vector corresponding to the first feature map; and acquiring a feature weight corresponding to the first feature map based on the feature vector. Performing feature reconstruction on the features in the weighted feature map based on the second network model, and obtaining a reference image may include: carrying out upsampling processing on different features in the weighted feature map in a corresponding scale based on the second network model to obtain a first image, wherein the first image comprises a plurality of image features representing sunshine duration information; and carrying out fusion processing on the plurality of image features of the first image to obtain the reference image.
As an implementation manner, the first network model in this embodiment may include a feature weighting module, and the second network model may include a feature pyramid module.
An output module 430, configured to output the reference image.
According to the sunshine duration analysis apparatus, a target image is acquired, the target image being a grayscale image including building information; the target image is input into a self-coding network model obtained by training on grayscale images carrying building information and labeled with sunshine duration; feature extraction is performed on the target image based on the self-coding network model, the extracted features are weighted, and feature reconstruction is performed on the weighted features to obtain a reference image; and the reference image is output. Because the target image is input into the pre-trained self-coding network model, features of the target image can be extracted, weighted and reconstructed to obtain a reference image containing image features of the sunshine duration information of the building, thereby improving the speed and accuracy of sunshine duration analysis.
It should be noted that the device embodiment and the method embodiment in the present application correspond to each other, and specific principles in the device embodiment may refer to the contents in the method embodiment, which is not described herein again.
An electronic device provided by the present application will be described below with reference to fig. 14.
Referring to fig. 14, based on the sunshine duration analysis method and apparatus, another electronic device 100 capable of executing the sunshine duration analysis method is further provided in the embodiment of the present application. Electronic device 100 includes one or more processors 102 (only one shown), memory 104, and processing module 106 coupled to each other. The memory 104 stores therein a program that can execute the contents of the foregoing embodiments, and the processor 102 can execute the program stored in the memory 104, where the memory 104 includes the apparatus 400 described in the foregoing embodiments.
Processor 102 may include one or more processing cores. The processor 102 connects the various parts of the electronic device 100 using various interfaces and lines, and performs the various functions of the electronic device 100 and processes data by running or executing instructions, programs, code sets or instruction sets stored in the memory 104 and invoking the data stored in the memory 104. Alternatively, the processor 102 may be implemented in hardware using at least one of Digital Signal Processing (DSP), Field-Programmable Gate Array (FPGA) and Programmable Logic Array (PLA). The processor 102 may integrate one or a combination of a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a modem, and the like. The CPU mainly handles the operating system, the user interface, application programs and the like; the GPU is responsible for rendering and drawing display content; and the modem handles wireless communication. It is understood that the modem may also not be integrated into the processor 102 and may instead be implemented by a separate communication chip.
The Memory 104 may include a Random Access Memory (RAM) or a Read-Only Memory (Read-Only Memory). The memory 104 may be used to store instructions, programs, code sets, or instruction sets. The memory 104 may include a program storage area and a data storage area, wherein the program storage area may store instructions for implementing an operating system, instructions for implementing at least one function (such as a touch function, a sound playing function, a video image playing function, etc.), instructions for implementing the various method embodiments described above, and the like. The data storage area may also store data created by the electronic device 100 during use (e.g., phone book, audio-video data, chat log data), and the like.
The processing module 106 is configured to input a target image into a self-coding network model, the self-coding network model is obtained by training a gray-scale image carrying building information and labeled with sunshine duration, perform feature extraction on the target image based on the self-coding network model, perform weighting processing on the extracted features, perform feature reconstruction on the weighted features, and obtain a reference image, where the reference image includes image features used for representing sunshine duration information of the building.
Referring to fig. 15, a block diagram of a computer-readable storage medium according to an embodiment of the present application is shown. The computer-readable medium 500 has stored therein a program code that can be called by a processor to execute the method described in the above-described method embodiments.
The computer-readable storage medium 500 may be an electronic memory such as a flash memory, an EEPROM (electrically erasable programmable read only memory), an EPROM, a hard disk, or a ROM. Alternatively, the computer-readable storage medium 500 includes a non-volatile computer-readable storage medium. The computer readable storage medium 500 has storage space for program code 510 for performing any of the method steps of the method described above. The program code can be read from or written to one or more computer program products. The program code 510 may be compressed, for example, in a suitable form.
According to the sunshine duration analysis method and apparatus, electronic device and storage medium, a target image is acquired, the target image being a grayscale image including building information; the target image is input into a self-coding network model obtained by training on grayscale images carrying building information and labeled with sunshine duration; feature extraction is performed on the target image based on the self-coding network model, the extracted features are weighted, and feature reconstruction is performed on the weighted features to obtain a reference image; and the reference image is output. Because the target image is input into the pre-trained self-coding network model, features of the target image can be extracted, weighted and reconstructed to obtain a reference image containing image features of the sunshine duration information of the building, thereby improving the speed and accuracy of sunshine duration analysis.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solutions of the present application, and not to limit the same; although the present application has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not necessarily depart from the spirit and scope of the corresponding technical solutions in the embodiments of the present application.

Claims (10)

1. A sunshine duration analysis method is characterized by comprising the following steps:
acquiring a training sample set, wherein the training sample set comprises a gray level image carrying building information and marked with sunshine duration;
inputting the training sample set into a machine learning model, and training the machine learning model to obtain a self-coding network model;
updating the self-coding network model based on a preset loss parameter calculation rule;
acquiring a target image, wherein the target image is a gray level image comprising the building information;
inputting the target image into the self-coding network model, wherein the self-coding network model is obtained by training a gray level image carrying building information and the sunshine duration marked, extracting features of the target image based on the self-coding network model, performing weighting processing on the extracted features, and performing feature reconstruction on the weighted features to obtain a reference image, wherein the reference image comprises image features used for representing the sunshine duration information of the building;
and outputting the reference image.
2. The method according to claim 1, wherein the self-coding network model includes a first network model and a second network model, and the extracting features of the target image and weighting the extracted features based on the self-coding network model, and performing feature reconstruction on the weighted features to obtain the reference image comprises:
extracting the features of the target image based on the first network model to obtain a first feature map;
acquiring a feature weight corresponding to the first feature map;
acquiring a weighted feature map based on the first feature map and the feature weight;
and performing feature reconstruction on the features in the weighted feature map based on the second network model to obtain a reference image.
3. The method of claim 2, wherein the obtaining the feature weight corresponding to the first feature map comprises:
carrying out global average pooling on the first feature map to obtain a feature vector corresponding to the first feature map;
and acquiring a feature weight corresponding to the first feature map based on the feature vector.
4. The method according to claim 2, wherein the performing feature reconstruction on the features in the weighted feature map based on the second network model to obtain a reference image comprises:
carrying out upsampling processing on different features in the weighted feature map in a corresponding scale based on the second network model to obtain a first image, wherein the first image comprises a plurality of image features representing sunshine duration information;
and carrying out fusion processing on the plurality of image features of the first image to obtain the reference image.
5. The method of claim 2, wherein the first network model comprises a feature weighting module and the second network model comprises a feature pyramid module.
6. The method of claim 1, wherein the obtaining a training sample set comprises:
acquiring a text file corresponding to the building information;
acquiring a label image matched with the text file, wherein the label image is labeled with building information and sunshine duration information;
and converting the marked image into a gray image to obtain a training sample set.
7. The method of claim 1, wherein updating the self-coding network model based on the pre-set loss parameter calculation rule comprises:
obtaining a loss parameter based on a formula (presented as an image in the original claim) relating the target image, the reference image, the grayscale image labeled with sunshine duration, and a weight parameter;
updating the self-coding network model based on the loss parameter.
8. An sunshine duration analyzing apparatus, characterized in that the apparatus comprises:
the training sample set acquisition module is used for acquiring a training sample set, and the training sample set comprises a gray level image carrying building information and marked with sunshine duration;
the model training module is used for inputting the training sample set into a machine learning model and training the machine learning model to obtain a self-coding network model;
the model updating module is used for updating the self-coding network model based on a preset loss parameter calculation rule;
the first acquisition module is used for acquiring a target image, wherein the target image is a gray level image comprising the building information;
the processing module is used for inputting the target image into the self-coding network model, the self-coding network model is obtained by training a gray level image carrying the building information and the sunshine duration, feature extraction is carried out on the target image based on the self-coding network model, weighting processing is carried out on the extracted features, feature reconstruction is carried out on the weighted features, and a reference image is obtained, wherein the reference image comprises image features used for representing the sunshine duration information of the building;
and the output module is used for outputting the reference image.
9. An electronic device, comprising a memory;
one or more processors;
one or more programs stored in the memory and configured to be executed by the one or more processors, the one or more programs configured to perform the method of any of claims 1-7.
10. A computer-readable storage medium, having program code stored therein, wherein the program code when executed by a processor performs the method of any of claims 1-7.
CN201911315035.3A 2019-12-19 2019-12-19 Sunshine duration analysis method and device, electronic equipment and storage medium Active CN110728333B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911315035.3A CN110728333B (en) 2019-12-19 2019-12-19 Sunshine duration analysis method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911315035.3A CN110728333B (en) 2019-12-19 2019-12-19 Sunshine duration analysis method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN110728333A CN110728333A (en) 2020-01-24
CN110728333B (en) 2020-06-12

Family

ID=69226462

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911315035.3A Active CN110728333B (en) 2019-12-19 2019-12-19 Sunshine duration analysis method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN110728333B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113298918B (en) * 2020-02-24 2022-12-27 广东博智林机器人有限公司 Different color display method and device for overlapped area
CN111428291B (en) * 2020-03-12 2023-04-28 深圳小库科技有限公司 Method and device for measuring and calculating sunlight condition of building in real time
CN113496078A (en) * 2020-04-07 2021-10-12 阿里巴巴集团控股有限公司 Data analysis method and device, electronic equipment and storage medium
CN113900519A (en) * 2021-09-30 2022-01-07 Oppo广东移动通信有限公司 Method and device for acquiring fixation point and electronic equipment
CN114693513B (en) * 2022-03-28 2024-03-26 西南石油大学 House sunshine analysis method based on visual image

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102721988A (en) * 2012-06-14 2012-10-10 中国气象科学研究院 Sunshine duration measuring method based on sky visible-light images
CN105607157A (en) * 2016-02-03 2016-05-25 东南大学 City near-surface layer thermal environment multi-point instant sampling measurement method
CN108664953A (en) * 2018-05-23 2018-10-16 清华大学 A kind of image characteristic extracting method based on convolution self-encoding encoder model
CN109934902A (en) * 2019-03-13 2019-06-25 南京大学 A kind of gradient field rendering image reconstructing method of usage scenario feature constraint
CN110174714A (en) * 2019-05-24 2019-08-27 南京大学 Street spacial sight sunshine time mass measurement method and system based on machine learning

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108613411A (en) * 2016-12-21 2018-10-02 北京兆阳能源技术有限公司 A kind of solar-energy light collector and building or structure structure using the device

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102721988A (en) * 2012-06-14 2012-10-10 中国气象科学研究院 Sunshine duration measuring method based on sky visible-light images
CN105607157A (en) * 2016-02-03 2016-05-25 东南大学 City near-surface layer thermal environment multi-point instant sampling measurement method
CN108664953A (en) * 2018-05-23 2018-10-16 清华大学 A kind of image characteristic extracting method based on convolution self-encoding encoder model
CN109934902A (en) * 2019-03-13 2019-06-25 南京大学 A kind of gradient field rendering image reconstructing method of usage scenario feature constraint
CN110174714A (en) * 2019-05-24 2019-08-27 南京大学 Street spacial sight sunshine time mass measurement method and system based on machine learning

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on an improvement of a method for detecting sunshine transmission intensity (一种用于日照透射强度检测方法的改进研究); 毕明岩; Computer Simulation (《计算机仿真》); 2016-05-31; Vol. 33, No. 5; pp. 435-438 *

Also Published As

Publication number Publication date
CN110728333A (en) 2020-01-24

Similar Documents

Publication Publication Date Title
CN110728333B (en) Sunshine duration analysis method and device, electronic equipment and storage medium
CN112801164B (en) Training method, device, equipment and storage medium of target detection model
US11270497B2 (en) Object loading method and apparatus, storage medium, and electronic device
CN109117760B (en) Image processing method, image processing device, electronic equipment and computer readable medium
CN110189246B (en) Image stylization generation method and device and electronic equipment
CN110363753B (en) Image quality evaluation method and device and electronic equipment
CN111062854B (en) Method, device, terminal and storage medium for detecting watermark
CN112989995B (en) Text detection method and device and electronic equipment
CN113177451A (en) Training method and device of image processing model, electronic equipment and storage medium
CN110197459B (en) Image stylization generation method and device and electronic equipment
CN117557708A (en) Image generation method, device, storage medium and computer equipment
CN115953330B (en) Texture optimization method, device, equipment and storage medium for virtual scene image
CN112734900A (en) Baking method, baking device, baking equipment and computer-readable storage medium of shadow map
CN113808192B (en) House pattern generation method, device, equipment and storage medium
CN113222843B (en) Image restoration method and related equipment thereof
CN112085636B (en) Urban functional shrinkage analysis method, device and storage medium
CN104423964A (en) Method and system used for determining visualization credibility
CN111581808B (en) Pollutant information processing method and device, storage medium and terminal
CN113963011A (en) Image recognition method and device, electronic equipment and storage medium
CN113505844A (en) Label generation method, device, equipment, storage medium and program product
CN113408571A (en) Image classification method and device based on model distillation, storage medium and terminal
CN113010946B (en) Data analysis method, electronic equipment and related products
CN115439845B (en) Image extrapolation method and device based on graph neural network, storage medium and terminal
CN111461148A (en) Building acceptance analysis method
CN117649358B (en) Image processing method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant