CN111785094A - Advection fog detection method and device, computer equipment and readable storage medium - Google Patents
- Publication number
- CN111785094A (application CN202010759990.2A)
- Authority
- CN
- China
- Prior art keywords
- area
- target
- preset
- region
- advection fog
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G5/00—Traffic control systems for aircraft, e.g. air-traffic control [ATC]
- G08G5/0073—Surveillance aids
- G08G5/0091—Surveillance aids for monitoring atmospheric conditions
Landscapes
- Engineering & Computer Science (AREA)
- Life Sciences & Earth Sciences (AREA)
- Atmospheric Sciences (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- Aviation & Aerospace Engineering (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Image Processing (AREA)
Abstract
The application relates to an advection fog detection method and apparatus, a computer device, and a readable storage medium. The method includes the following steps: acquiring environmental data and video data of a target area, and detecting whether the environmental data satisfies a preset condition, where the preset condition is related to the meteorological factors under which advection fog forms; if the environmental data satisfies the preset condition, detecting whether a target image corresponding to the target area includes advection fog, where the target image is any one of a plurality of video frames included in the video data; and if the target image includes advection fog, acquiring a visibility value of the target area according to the video data. The method can detect advection fog simply and accurately.
Description
Technical Field
The present application relates to the field of meteorological detection technologies, and in particular, to a method and an apparatus for advection fog detection, a computer device, and a readable storage medium.
Background
Advection fog is fog that forms when warm, humid air flows horizontally over a cooler underlying surface and its lower layer cools. It occurs mostly in winter and spring and is common along northern coastal areas. Because advection fog depends strongly on the horizontal flow of air, the fog persists only while the wind keeps blowing; once the wind stops, the supply of warm, humid air is cut off and the fog dissipates quickly.
Advection fog is one of the main weather phenomena endangering aviation safety: a thick, wide-ranging advection fog seriously hinders the landing and takeoff of aircraft and can even cause flight accidents. For example, under the influence of advection fog, an aircraft cannot take off when airport visibility falls below 350 meters, cannot land when visibility falls below 500 meters, and cannot even taxi when visibility falls below 50 meters.
Therefore, how to simply and accurately detect the advection fog and output reliable advection fog detection data as a reference basis for safety control (such as aviation safety control) becomes a problem to be solved at present.
Disclosure of Invention
In view of the above, it is necessary to provide an advection fog detection method, apparatus, computer device, and readable storage medium that can detect advection fog simply and accurately.
In a first aspect, an embodiment of the present application provides a method for detecting advection fog, where the method includes:
acquiring environmental data and video data of a target area, and detecting whether the environmental data meets a preset condition, wherein the preset condition is related to meteorological factors formed by advection fog;
if the environmental data meet the preset conditions, detecting whether a target image corresponding to the target area comprises advection fog, wherein the target image is any one of a plurality of video frames included in the video data;
and if the target image comprises advection fog, acquiring the visibility value of the target area according to the video data.
In one embodiment, the obtaining the visibility value of the target area according to the video data includes:
acquiring a binary image corresponding to the target area according to the plurality of video frames included in the video data, wherein the binary image includes a first area and a second area, the first area corresponds to a sky area, the second area corresponds to a sea area, pixel values of all pixel points in the first area are first values, pixel values of all pixel points in the second area are second values, and the first values and the second values are different;
obtaining an area ratio of the first region and the second region, the area ratio being used to characterize a clarity of a sea-sky boundary between the sky region and the sea-surface region;
and searching the visibility value of the target area corresponding to the area ratio in a preset visibility mapping table according to the area ratio.
In one embodiment, the obtaining a binary image corresponding to the target region according to the plurality of video frames included in the video data includes:
performing matrix operation on the plurality of video frames to obtain a target frame;
comparing the original pixel value of each pixel point of the target frame with a preset pixel threshold value;
and according to the comparison result, carrying out binarization processing on the original pixel value of each pixel point of the target frame to obtain the binary image.
In one embodiment, the obtaining the area ratio of the first region and the second region includes:
traversing pixel values of all pixel points in the binary image, and determining the position coordinates of the first region and the second region in the binary image;
calculating the area of the first region and the area of the second region according to each of the position coordinates;
and calculating the ratio of the area of the first region to the area of the second region to obtain the area ratio.
In one embodiment, the detecting whether the target image corresponding to the target area includes advection fog includes:
inputting the target image into an advection fog recognition model to obtain a recognition result, wherein the recognition result is used for representing that the target image comprises advection fog or the target image does not comprise advection fog, and the advection fog recognition model is obtained by training a plurality of advection fog sample images at different historical moments.
In one embodiment, the environmental data includes at least one of wind speed, wind direction, temperature, and air humidity corresponding to the target area; under the circumstance that the environmental data include the wind speed, the wind direction, the temperature and the air humidity corresponding to the target area, detecting whether the environmental data satisfy a preset condition includes:
detecting whether the wind speed is within a preset wind speed range, detecting whether the wind direction is a preset wind direction, detecting whether the temperature is smaller than a preset temperature threshold value and detecting whether the air humidity is larger than a preset humidity threshold value;
and if the wind speed is within the preset wind speed range, the wind direction is the preset wind direction, the temperature is smaller than the preset temperature threshold value, and the air humidity is larger than the preset humidity threshold value, determining that the environmental data meets the preset condition.
In one embodiment, the video data is specifically acquired by a high-sensitivity imaging component; the method further comprises the following steps:
measuring a target brightness value corresponding to the target area by using a luminance meter;
searching a preset parameter adjustment table according to the target brightness value to obtain a target imaging parameter corresponding to the target brightness value, wherein the parameter adjustment table comprises a mapping relation between each brightness value and each imaging parameter;
and setting the target imaging parameters as the working parameters of the high-sensitivity imaging assembly.
In a second aspect, an embodiment of the present application provides an advection fog detection apparatus, including:
the processing module is used for acquiring environmental data and video data of a target area and detecting whether the environmental data meet preset conditions, wherein the preset conditions are related to meteorological factors formed by advection fog;
a detection module, configured to detect whether a target image corresponding to the target area includes advection fog if the environmental data meets the preset condition, where the target image is any one of a plurality of video frames included in the video data;
and the obtaining module is used for obtaining the visibility value of the target area according to the video data if the target image comprises advection fog.
In a third aspect, an embodiment of the present application provides a computer device, including a memory and a processor, where the memory stores a computer program, and the processor implements the steps of the method according to the first aspect when executing the computer program.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium, on which a computer program is stored, which, when executed by a processor, implements the steps of the method according to the first aspect as described above.
The technical solutions provided in the embodiments of the present application bring at least the following beneficial effects:
Environmental data and video data of a target area are acquired, and whether the environmental data satisfies a preset condition is detected, where the preset condition is related to the meteorological factors under which advection fog forms; if the environmental data satisfies the preset condition, whether a target image corresponding to the target area includes advection fog is detected, where the target image is any one of a plurality of video frames included in the video data; and if the target image includes advection fog, a visibility value of the target area is acquired according to the video data. Because the preset condition is related to the meteorological factors under which advection fog forms, environmental data that satisfies the preset condition indicates that the current environment of the target area is conducive to the formation of advection fog, and detecting whether the target image includes advection fog only in that case improves the accuracy of advection fog detection. Moreover, the method detects advection fog conveniently using only the easily obtained environmental data and video data of the target area, without relying on a complex implementation environment such as a remote-sensing satellite, which reduces the difficulty of advection fog detection.
Drawings
FIG. 1 is a schematic flow diagram of a method for advection fog detection in one embodiment;
FIG. 2 is a diagram illustrating a detailed step of step S300 in another embodiment;
FIG. 3 is a diagram illustrating a detailed step of step S310 in another embodiment;
FIG. 4 is a diagram illustrating a detailed step of step S320 in another embodiment;
FIG. 5 is a schematic diagram of a partial refinement of step S100 in another embodiment;
FIG. 6 is a schematic flow chart of a method for advection fog detection in another embodiment;
FIG. 7 is a block diagram showing the structure of an advection fog detecting apparatus in one embodiment;
FIG. 8 is a diagram illustrating an internal structure of a computer device according to an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
It should be noted that the execution subject of the advection fog detection method provided in the embodiments of the present application may be an advection fog detection apparatus, which may be implemented as part or all of a computer device in software, hardware, or a combination of the two; the computer device may be a server. The following method embodiments take a computer device as the execution subject by way of example. It can be understood that the advection fog detection method provided in the following method embodiments may also be applied to a terminal, or to a system including a terminal and a server and implemented through interaction between the terminal and the server.
In one embodiment, as shown in fig. 1, there is provided a method of advection fog detection, comprising the steps of:
step S100, acquiring the environmental data and the video data of the target area, and detecting whether the environmental data meets a preset condition.
In the embodiments of the present application, the target area is an area where advection fog detection is required, typically an area where advection fog is likely to form, such as an airport built by the sea. The computer device acquires the environmental data and the video data of the target area. In one embodiment, the video data may be collected by the computer device through an imaging component, and the environmental data may be obtained through associated sensors: for example, the computer device may collect the wind direction and wind speed of the target area through an anemometer, and collect the temperature and air humidity of the target area through a hygrothermograph.
In the embodiments of the present application, the preset condition is related to the meteorological factors under which advection fog forms, and may therefore encode the conditions under which advection fog is likely. For example, because advection fog forms when warm, humid air flows over a cooler underlying surface and its lower layer cools, advection fog is likely to form when the air humidity is high and the air temperature is low; its formation also requires a suitable wind speed, generally 2-7 m/s. The computer device may therefore set the preset condition as the wind speed range, temperature threshold, humidity threshold, and so on under which advection fog is likely to form.
The computer device detects whether the environmental data satisfies a preset condition, for example, may detect whether a wind speed included in the environmental data is within a wind speed range in which advection fog is easily formed, or the like. Therefore, the computer equipment can determine whether the current environment of the target area represented by the environment data is easy to form advection fog or not by detecting whether the environment data meets the preset condition or not.
Step S200, if the environmental data meet the preset conditions, whether the target image corresponding to the target area comprises advection fog is detected.
If the computer equipment detects that the environmental data meet the preset conditions, representing that the current environment of the target area is easy to form advection fog; under the condition that the environmental data meet the preset conditions, the computer equipment detects whether the target image corresponding to the target area comprises the advection fog or not, and therefore accuracy of the advection fog detection can be improved.
In the embodiments of the present application, the target image is any one of a plurality of video frames included in the video data. The computer device may acquire video data of the target area over a preset duration through the imaging component; for example, it may acquire 1 s of video data, which at a frame rate of 30 frames per second comprises 30 video frames. The computer device then randomly extracts one frame from the video frames to obtain the target image.
As an embodiment, the computer device detects whether the target image corresponding to the target area includes advection fog, and may perform the following step a 1:
and step A1, inputting the target image into the advection fog recognition model to obtain a recognition result.
The advection fog recognition model is obtained by training a plurality of advection fog sample images at different historical moments, and the recognition result is used for representing that the target image comprises advection fog or the target image does not comprise advection fog.
In the embodiments of the present application, the computer device may obtain a plurality of advection fog sample images at different historical moments, including positive samples and negative samples: the positive samples include advection fog, the negative samples do not, and both carry corresponding category labels. The computer device trains an initial advection fog recognition model with these sample images and obtains the trained advection fog recognition model. The initial model may be a classification network such as a ResNet residual network or a VGG network, which is not specifically limited here.
And inputting the target image into the trained advection fog recognition model by the computer equipment, and obtaining a recognition result that the target image comprises the advection fog or the target image does not comprise the advection fog.
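The inference step above can be sketched as a thin wrapper around any trained classifier. The stand-in `fake_model` below is purely illustrative (the patent specifies only that the model is a trained classification network such as a ResNet or VGG); a real system would substitute the trained model's forward pass.

```python
def detect_advection_fog(target_image, model):
    """Run a trained recognition model on one video frame.

    `model` is assumed to be any callable returning a pair of class
    probabilities (p_no_fog, p_fog) for the input image; this is a
    sketch of the interface, not the patent's concrete network.
    """
    p_no_fog, p_fog = model(target_image)
    return p_fog >= p_no_fog  # recognition result: fog / no fog

# Stand-in "model" for illustration only: always reports fog.
fake_model = lambda img: (0.2, 0.8)
```

In practice the callable would wrap the trained ResNet/VGG forward pass and a softmax over its two output logits.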
Step S300, if the target image comprises advection fog, the visibility value of the target area is obtained according to the video data.
The visibility is an index reflecting the atmospheric transparency, and if the computer equipment detects that the target image comprises advection fog, the computer equipment acquires the visibility value of the target area according to the video data corresponding to the target image.
In one possible embodiment, since advection fog generally occurs at sea, the imaging component may be deployed facing the sea, and the computer device may obtain the visibility value of the target area from the clarity of the sea-sky boundary in the plurality of video frames included in the video data. In one embodiment, the imaging component is fixed in position when deployed so that the sea-sky boundary stays at the middle of each video frame. It can be understood that, in the absence of advection fog, the sea-sky boundary is clearly visible and the areas of the sky region and the sea-surface region in a video frame are equal; in this case the visibility value is high. If advection fog forms, the sea-sky boundary becomes blurred, the area of the sky region in the video frame increases while the area of the sea-surface region decreases correspondingly, and the visibility value is accordingly lower.
In another possible embodiment, the computer device may further determine the visibility value of the target area by the degree of clarity with which the target object in the target area is imaged in a plurality of video frames comprised in the video data. It is understood that the clearer the target object in the target area is imaged in the video frame, the higher the visibility value of the target area is, and conversely, the lower the visibility value of the target area is.
Therefore, the computer equipment can output a reliable visibility result for reference, and is favorable for reducing the adverse effect of advection fog on life and travel of people. For example, the output visibility value can be used as a reference for aviation safety, so that people can adjust the flight time and the flight path of the aircraft by combining the visibility value of the target area, and the flight safety of the aircraft is improved.
The method comprises the steps of obtaining environmental data and video data of a target area, and detecting whether the environmental data meet preset conditions, wherein the preset conditions are related to meteorological factors formed by advection fog; if the environmental data meet the preset conditions, detecting whether a target image corresponding to the target area comprises advection fog, wherein the target image is any one of a plurality of video frames included in the video data; if the target image comprises advection fog, acquiring a visibility value of a target area according to the video data; therefore, the preset condition is related to meteorological factors formed by the advection fog, if the environmental data of the target area meets the preset condition, the advection fog is easily formed in the current environment representing the target area, and whether the target image corresponding to the target area comprises the advection fog or not is detected under the condition, so that the accuracy of detecting the advection fog can be improved; according to the advection fog detection method, the detection of the advection fog can be conveniently realized only according to the easily acquired environment data and video data of the target area without depending on complex implementation environments such as a remote sensing radar satellite, and the detection difficulty of the advection fog is reduced.
In one embodiment, on the basis of the embodiment shown in fig. 1, referring to fig. 2, this embodiment relates to a process of how a computer device obtains visibility values of a target area according to video data. As shown in fig. 2, the process includes step S310, step S320, and step S330:
step S310, obtaining a binary image corresponding to the target area according to a plurality of video frames included in the video data.
The binary image comprises a first area and a second area, the first area corresponds to a sky area, the second area corresponds to a sea area, pixel values of all pixel points in the first area are all first values, pixel values of all pixel points in the second area are all second values, and the first values are different from the second values.
In this embodiment, the imaging assembly may be fixedly deployed toward the sea, the first region may be a sky region, the second region may be a sea surface region, and the gray value of the sky is higher than the gray value of the sea surface, the sky region belongs to a bright region, and the sea surface region belongs to a dark region, so that the pixel values of the pixels in the first region in the binary image are all 255, that is, the first value is 255, and the pixel values of the pixels in the second region in the binary image are all 0, that is, the second value is 0.
In one possible implementation of step S310, referring to fig. 3, step S310 includes step S3101, step S3102 and step S3103:
step S3101, a matrix operation is performed on the plurality of video frames to obtain a target frame.
Taking video data that includes 30 video frames as an example, the computer device divides the 30 frames into 15 pairs. For each pair, the computer device performs a matrix subtraction on the two frames: the pixel values of corresponding pixel points are subtracted and the absolute value of the difference is taken, yielding a difference frame in which the value of each pixel is the absolute difference at the corresponding pixel position. This produces 15 difference frames in total.
And the computer equipment sums the pixel values of the corresponding pixel points in the 15 difference frames to obtain the target frame after summation.
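The pairwise differencing and summation of step S3101 can be sketched with NumPy as follows. The 30-frame count and the 4×4 frame size are illustrative assumptions; the patent does not fix a resolution.

```python
import numpy as np

def build_target_frame(frames):
    """Combine video frames into one target frame as described in S3101:
    pair the frames, take the absolute pixel-wise difference of each
    pair, then sum all the difference frames.  Assumes an even count."""
    frames = [np.asarray(f, dtype=np.int32) for f in frames]
    diffs = [np.abs(frames[i] - frames[i + 1])
             for i in range(0, len(frames), 2)]
    return np.sum(diffs, axis=0)

# e.g. 30 synthetic 4x4 grayscale frames -> 15 difference frames summed
rng = np.random.default_rng(0)
frames = [rng.integers(0, 256, size=(4, 4)) for _ in range(30)]
target = build_target_frame(frames)
```

Casting to a signed integer type before subtracting avoids the wrap-around that unsigned 8-bit pixel arithmetic would otherwise produce.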
Step S3102, the original pixel value of each pixel point of the target frame is compared with a preset pixel threshold.
The original pixel value of each pixel point of the target frame is obtained by summing the pixel values of the corresponding pixel points in each difference frame by the computer device in the step S3101. The computer device compares the original pixel value of each pixel of the target frame with a preset pixel threshold, which can be flexibly set in implementation, for example, 120.
Step S3103, according to the comparison result, the original pixel values of the pixel points of the target frame are binarized to obtain a binary image.
For the pixel points with the original pixel value larger than 120 in the target frame, the computer device sets the pixel values of the pixel points to be the first value, namely 255; for the pixel points with the original pixel values not greater than 120 in the target frame, the computer device sets the pixel values of the pixel points to be second values, namely 0, so that a binary image is obtained. Only the black sea surface region (i.e., the second region) and the white sky region (i.e., the first region) exist in the binary image.
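The thresholding of steps S3102 and S3103 amounts to a single element-wise comparison; the threshold of 120 is the example value given above, not a fixed parameter of the method.

```python
import numpy as np

def binarize(target_frame, threshold=120):
    """Binarize the target frame per S3102/S3103: pixels whose original
    value exceeds the threshold become 255 (first region, sky); all
    others become 0 (second region, sea surface)."""
    return np.where(np.asarray(target_frame) > threshold, 255, 0)
```

Usage: `binarize(target)` on the summed target frame yields the binary image containing only the white sky region and the black sea-surface region.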
Step S320, obtaining an area ratio of the first region and the second region.
The computer device obtains the area ratio of the first region to the second region, that is, the area ratio of the sky region to the sea-surface region, which characterizes the clarity of the sea-sky boundary between the two regions. In the embodiments of the present application, the imaging component is fixed in position when deployed so that the sea-sky boundary stays at the middle of each video frame. In the absence of advection fog, the sea-sky boundary is clearly visible and the areas of the sky region and the sea-surface region in a video frame are equal, i.e., the area ratio of the first region to the second region equals 1; in this case the visibility value of the target area is high. If advection fog forms, the sea-sky boundary becomes blurred, the fog enlarges the sky region and correspondingly shrinks the sea-surface region, i.e., the area ratio becomes greater than 1; in this case the visibility value of the target area is low. Thus, the larger the area ratio of the first region to the second region, the less clear the sea-sky boundary and the lower the visibility value of the target area; conversely, the smaller the area ratio, the clearer the sea-sky boundary and the higher the visibility value.
In one possible implementation of step S320, referring to fig. 4, step S320 includes step S3201, step S3202, and step S3203:
step S3201, traversing pixel values of each pixel point in the binary image, and determining a position coordinate of the first region and a position coordinate of the second region in the binary image.
Aiming at the binary image, the computer equipment traverses the pixel values of all the pixel points in the binary image, searches the pixel points of which the first pixel value is not 0 in each line and records the vertical coordinates of the searched pixel points; and searching the pixel point with the first pixel value not being 0 in each row, and recording the abscissa of each searched pixel point, thereby obtaining the position coordinate of the first area.
The computer equipment traverses the pixel values of all the pixel points in the binary image, searches the pixel point with the first pixel value of 0 in each line, and records the vertical coordinate of each searched pixel point; and searching the pixel point with the first pixel value of 0 in each row, and recording the abscissa of each searched pixel point, thereby obtaining the position coordinate of the second area.
Step 3202, the area of the first region and the area of the second region are calculated from the respective position coordinates.
The computer equipment can determine the height of the first region along the longitudinal axis according to the vertical coordinate of the pixel point of which the first pixel value of each row in the binary image is not 0, and can determine the length of the first region along the horizontal axis according to the horizontal coordinate of the pixel point of which the first pixel value of each column in the binary image is not 0, so that the area of the first region can be calculated.
The computer device calculates the area of the second region in a manner similar to the area calculation of the first region, and is not described herein again.
Step S3203, a ratio of the area of the first region to the area of the second region is calculated to obtain an area ratio.
And the computer equipment divides the area of the first area by the area of the second area to obtain the area ratio of the first area to the second area.
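Since the two regions partition the binary image, counting the pixels of each value is a simpler equivalent of the coordinate-based area calculation described in steps S3201-S3203; this pixel-counting shortcut is an assumption of the sketch, not the patent's exact procedure.

```python
import numpy as np

def area_ratio(binary_image, first_value=255, second_value=0):
    """Ratio of the first-region (sky) area to the second-region
    (sea-surface) area, computed by counting pixels of each value.
    Assumes the sea-surface region is non-empty."""
    img = np.asarray(binary_image)
    first_area = np.count_nonzero(img == first_value)
    second_area = np.count_nonzero(img == second_value)
    return first_area / second_area
```

A ratio near 1 indicates a clear sea-sky boundary (high visibility); a ratio above 1 indicates fog has enlarged the apparent sky region (low visibility).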
Step S330, according to the area ratio, finding the visibility value of the target area corresponding to the area ratio in a preset visibility mapping table.
In the embodiment of the application, the computer device can fit the mapping relation between each area ratio and each visibility value according to the area ratio between the first area and the second area at a plurality of historical moments and the visibility value manually observed at the corresponding moment to obtain the visibility mapping table, wherein the visibility mapping table comprises the mapping relation between each area ratio and each visibility value at the historical moments.
And the computer equipment searches the visibility value of the target area corresponding to the area ratio in the visibility mapping table according to the area ratio corresponding to the binary image.
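The table lookup of step S330 can be sketched as a nearest-bin search over the fitted mapping. The table contents below are illustrative placeholders; the patent's mapping is fitted from historical area ratios and manually observed visibility values.

```python
def lookup_visibility(ratio, table):
    """Return the visibility value whose tabulated area ratio is
    closest to the measured one.  `table` maps area ratio -> visibility
    in meters; the entries used here are illustrative, not from the
    patent."""
    nearest = min(table, key=lambda r: abs(r - ratio))
    return table[nearest]

# Hypothetical fitted mapping: area ratio -> visibility (m)
visibility_table = {1.0: 10000, 1.5: 2000, 2.5: 500, 4.0: 100}
```

Usage: `lookup_visibility(area_ratio(binary_image), visibility_table)` yields the visibility value reported for the target area.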
Therefore, the computer device calculates the visibility value of the target area through the steps, and compared with the visibility obtained through manual visual inspection, the data reliability of the visibility value of the target area is improved, so that the reliable visibility value of the target area can be output to serve as a reference basis for safety control.
In one embodiment, based on the embodiment shown in fig. 1, the environmental data includes at least one of the wind speed, wind direction, temperature, and air humidity corresponding to the target area. Referring to fig. 5, this embodiment relates to how the computer device detects whether the environmental data satisfies the preset condition. As shown in fig. 5, in the case where the environmental data includes the wind speed, wind direction, temperature, and air humidity corresponding to the target area, the process includes:
step S110, detecting whether the wind speed is within a preset wind speed range, detecting whether the wind direction is a preset wind direction, and detecting whether the temperature is smaller than a preset temperature threshold value and detecting whether the air humidity is larger than a preset humidity threshold value.
In the embodiment of the present application, the environmental data includes a wind speed, a wind direction, a temperature, and an air humidity corresponding to the target area. According to the meteorological factors under which advection fog readily forms, the computer device determines the preset conditions: the wind speed included in the environmental data is within a preset wind speed range, the wind direction is a preset wind direction, the temperature is smaller than a preset temperature threshold value, and the air humidity is larger than a preset humidity threshold value. For example, the preset wind speed range may be 2-7 m/s; if the imaging component is deployed on an eastern coastline, the preset wind direction may be a southeast wind; the preset temperature threshold may be the dew point temperature; and the preset humidity threshold may be set according to manual experience, and the like.
The computer device detects whether the wind speed is within the preset wind speed range, whether the wind direction is the preset wind direction, whether the temperature is smaller than the preset temperature threshold value, and whether the air humidity is larger than the preset humidity threshold value. Through step S110, the computer device can therefore detect whether the current environment of the target area matches the meteorological factors that readily form advection fog.
Step S120, if the wind speed is within a preset wind speed range, the wind direction is a preset wind direction, the temperature is smaller than a preset temperature threshold value, and the air humidity is larger than a preset humidity threshold value, it is determined that the environmental data meet a preset condition.
And if the computer equipment detects that the wind speed is within a preset wind speed range, the wind direction is a preset wind direction, the temperature is smaller than a preset temperature threshold value, and the air humidity is larger than a preset humidity threshold value, determining that the environmental data meets a preset condition.
Under the condition that the environmental data meets the preset conditions, the computer device detects whether the target image corresponding to the target area includes advection fog, thereby improving the accuracy of advection fog detection.
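The combined check of steps S110-S120 can be sketched as a single predicate. The patent gives 2-7 m/s as an example wind speed range and the dew point temperature as the temperature threshold; the direction and humidity defaults below are illustrative assumptions, not values from the application:

```python
def meets_preset_condition(wind_speed, wind_direction, temperature, humidity,
                           speed_range=(2.0, 7.0), preset_direction="SE",
                           temp_threshold=15.0, humidity_threshold=90.0):
    """Return True when all four advection-fog meteorological preconditions hold.

    speed_range follows the patent's 2-7 m/s example; preset_direction,
    temp_threshold (stand-in for the dew point) and humidity_threshold are
    placeholder values chosen for illustration.
    """
    return (speed_range[0] <= wind_speed <= speed_range[1]
            and wind_direction == preset_direction
            and temperature < temp_threshold
            and humidity > humidity_threshold)
```

Only when this predicate holds does the pipeline proceed to image-based fog detection, which avoids running the recognition model under weather that cannot produce advection fog.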
In an embodiment, on the basis of the embodiment shown in fig. 1, referring to fig. 6, the video data of this embodiment is specifically acquired by a high-sensitivity imaging component deployed in the target area, and this embodiment relates to a process of how the computer device automatically adjusts the operating parameters of the high-sensitivity imaging component. As shown in fig. 6, the advection fog detection method of the present embodiment further includes:
and step S410, measuring a target brightness value corresponding to the target area through a luminance meter of the light instrument.
The computer device measures a target brightness value corresponding to the target area through a luminance meter of the light meter, the light intensity of the target area represented by different target brightness values is different, and the higher the target brightness value is, the higher the light intensity of the target area represented by the target area is.
Step S420, searching a preset parameter adjustment table according to the target brightness value to obtain a target imaging parameter corresponding to the target brightness value.
The parameter adjustment table includes a mapping relationship between each brightness value and each imaging parameter.
In this embodiment of the application, the mapping relationship between each brightness value and each imaging parameter may be obtained by fitting, by the computer device, the brightness value of the luminance meter of the light meter and the imaging parameter of the corresponding highly photosensitive imaging component at a plurality of historical times, where the imaging parameter includes exposure, gain, noise reduction, and the like.
And the computer equipment searches a preset parameter adjustment table according to the target brightness value to obtain a target imaging parameter corresponding to the target brightness value.
Step S430, setting the target imaging parameters as the operating parameters of the high-sensitivity imaging component.
In one possible embodiment, the parameter adjustment table is shown in table 1:
Luminance | agc | mshutter | denoise
---|---|---|---
0 | 0x00 | 0x4f | 0x07
(0,750] | 0x0c | 0x49 | 0x05
(750,2700] | 0x00 | 0x40 | 0x00
(2700,∞) | 0x00 | 0x4a | 0x05
TABLE 1
Wherein, Luminance is the brightness value measured by the luminance meter of the light meter, agc is the gain, mshutter is the exposure, and denoise is the noise reduction.
As shown in Table 1, when the target brightness value is 0, the computer device adjusts the agc of the high-sensitivity imaging component to 0x00, mshutter to 0x4f, and denoise to 0x07; when the target brightness value is in (0, 750], the computer device adjusts agc to 0x0c, mshutter to 0x49, and denoise to 0x05; when the target brightness value is in (750, 2700], the computer device adjusts agc to 0x00, mshutter to 0x40, and denoise to 0x00; when the target brightness value is greater than 2700, the computer device adjusts agc to 0x00, mshutter to 0x4a, and denoise to 0x05.
Thus, based on the target brightness value of the target area measured by the luminance meter of the light meter, the computer device can adaptively adjust the operating parameters of the high-sensitivity imaging component without manual adjustment. This improves the timeliness of adjusting the operating parameters of the high-sensitivity imaging component, ensures the imaging quality of the video data collected by it, and in turn improves the accuracy of advection fog detection.
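The lookup in Table 1 is a simple range match on the measured luminance. A minimal sketch, using exactly the register values listed in Table 1:

```python
def lookup_imaging_params(luminance: float) -> dict:
    """Map a measured luminance value to imaging parameters per Table 1."""
    if luminance == 0:
        return {"agc": 0x00, "mshutter": 0x4F, "denoise": 0x07}
    if luminance <= 750:                       # (0, 750]
        return {"agc": 0x0C, "mshutter": 0x49, "denoise": 0x05}
    if luminance <= 2700:                      # (750, 2700]
        return {"agc": 0x00, "mshutter": 0x40, "denoise": 0x00}
    return {"agc": 0x00, "mshutter": 0x4A, "denoise": 0x05}   # (2700, ∞)
```

How the returned register values are written to the imaging component is hardware-specific and not covered by the patent text.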
In the embodiment of the application, the high-sensitivity imaging component may be a high-sensitivity imager. Owing to the performance limitations of a conventional camera, the computer device cannot identify the content of pictures captured by such a camera at night, so night-time detection of advection fog based on those pictures is impossible. In the embodiment of the application, by contrast, the video data is acquired by the high-sensitivity imaging component and includes a plurality of video frames; by adjusting the operating parameters of the component, the computer device enables it to capture video frames with imaging quality similar to daytime even at night or in poor light. All-weather detection of advection fog thus becomes possible, which enlarges the detection range of advection fog and in turn improves its detection reliability.
It should be understood that although the various steps in the flow charts of figs. 1-6 are shown in an order indicated by the arrows, the steps are not necessarily performed in that order. Unless explicitly stated otherwise, the steps are not strictly limited to the order shown and may be performed in other orders. Moreover, at least some of the steps in figs. 1-6 may include multiple sub-steps or multiple stages, which are not necessarily performed at the same time but may be performed at different times, and which are not necessarily performed in sequence but may be performed in turn or alternately with other steps or with at least some of the sub-steps or stages of other steps.
In one embodiment, as shown in fig. 7, there is provided an advection fog detecting apparatus, including:
the processing module 100 is configured to acquire environmental data and video data of a target area, and detect whether the environmental data meets a preset condition, where the preset condition is related to a meteorological factor formed by advection fog;
a detecting module 200, configured to detect whether a target image corresponding to the target area includes advection fog if the environment data meets the preset condition, where the target image is any one of a plurality of video frames included in the video data;
an obtaining module 300, configured to obtain, according to the video data, a visibility value of the target area if the target image includes advection fog.
In one embodiment, the obtaining module 300 includes:
a binarization unit, configured to obtain a binary image corresponding to the target region according to the plurality of video frames included in the video data, where the binary image includes a first region and a second region, the first region corresponds to a sky region, the second region corresponds to a sea surface region, pixel values of pixels in the first region are all first values, pixel values of pixels in the second region are all second values, and the first values and the second values are different;
a ratio obtaining unit, configured to obtain an area ratio of the first region and the second region, where the area ratio is used to represent a degree of definition of a sea-sky boundary between the sky region and the sea surface region;
and the searching unit is used for searching the visibility value of the target area corresponding to the area ratio in a preset visibility mapping table according to the area ratio.
In an embodiment, the binarization unit is specifically configured to perform matrix operation on the plurality of video frames to obtain a target frame; comparing the original pixel value of each pixel point of the target frame with a preset pixel threshold value; and according to the comparison result, carrying out binarization processing on the original pixel value of each pixel point of the target frame to obtain the binary image.
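The patent does not specify which "matrix operation" produces the target frame; one plausible reading is a pixel-wise average over the video frames, followed by thresholding each pixel against the preset pixel threshold. The sketch below encodes that assumption:

```python
import numpy as np

def binarize_frames(frames, threshold=128):
    """Fuse grayscale frames into a target frame and binarize it.

    Averaging the stacked frames is an assumed interpretation of the
    patent's 'matrix operation'; threshold=128 is a placeholder for the
    preset pixel threshold.
    """
    target = np.mean(np.stack(frames), axis=0)          # target frame
    # Pixels above the threshold become 255 (e.g. sky), the rest 0 (e.g. sea).
    return np.where(target > threshold, 255, 0).astype(np.uint8)
```

Averaging several frames suppresses transient noise (waves, birds) before the threshold splits the image into the first (sky) and second (sea surface) regions.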
In one embodiment, the ratio obtaining unit is specifically configured to traverse pixel values of pixels in the binary image, and determine position coordinates of the first region and position coordinates of the second region in the binary image; calculating the area of the first region and the area of the second region according to each of the position coordinates; and calculating the ratio of the area of the first region to the area of the second region to obtain the area ratio.
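Because every pixel in the binary image belongs to exactly one of the two regions, the coordinate traversal described above reduces to counting pixels of each value. A simplified sketch (counting pixels directly rather than accumulating position coordinates, which gives the same areas):

```python
import numpy as np

def area_ratio(binary_image, sky_value=255):
    """Ratio of the sky-region area to the sea-surface-region area.

    Assumes a two-valued image where sky pixels equal sky_value and all
    remaining pixels belong to the sea-surface region.
    """
    sky_area = int(np.count_nonzero(binary_image == sky_value))
    sea_area = binary_image.size - sky_area
    if sea_area == 0:
        raise ValueError("no sea-surface pixels found in the binary image")
    return sky_area / sea_area
```

A sharper sea-sky boundary yields a more stable ratio between the two regions, which is why the ratio can serve as a proxy for visibility.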
In one embodiment, the detection module 200 includes:
the identification unit is used for inputting the target image into a advection fog identification model to obtain an identification result, the identification result is used for representing that the target image comprises advection fog or the target image does not comprise advection fog, and the advection fog identification model is obtained by training a plurality of advection fog sample images at different historical moments.
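The patent treats the advection fog recognition model as a black box trained on historical fog sample images. A thin, hypothetical wrapper around such a model (the `model` callable and the 0.5 threshold are assumptions, not disclosed details) might look like:

```python
def detect_advection_fog(image, model, threshold=0.5):
    """Return True when the (hypothetical) trained classifier flags fog.

    `model` is assumed to map an image to a fog probability in [0, 1];
    the decision threshold is an illustrative placeholder.
    """
    score = model(image)
    return score >= threshold
```

In the pipeline, this check runs only on frames whose environmental data already met the meteorological preconditions.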
In one embodiment, the environmental data includes at least one of wind speed, wind direction, temperature, and air humidity corresponding to the target area; in a case that the environmental data includes the wind speed, the wind direction, the temperature, and the air humidity corresponding to the target area, the processing module 100 includes:
the detection unit is used for detecting whether the wind speed is within a preset wind speed range, detecting whether the wind direction is a preset wind direction, detecting whether the temperature is smaller than a preset temperature threshold value and detecting whether the air humidity is larger than a preset humidity threshold value;
the determining unit is used for determining that the environmental data meet the preset condition if the wind speed is within the preset wind speed range, the wind direction is the preset wind direction, the temperature is smaller than the preset temperature threshold value, and the air humidity is larger than the preset humidity threshold value.
In one embodiment, the video data is specifically acquired by a highly sensitive imaging component; the device further comprises:
the measuring module is used for measuring a target brightness value corresponding to the target area through a luminance meter of a light instrument;
the searching module is used for searching a preset parameter adjusting table according to the target brightness value to obtain a target imaging parameter corresponding to the target brightness value, and the parameter adjusting table comprises a mapping relation between each brightness value and each imaging parameter;
and the parameter setting module is used for setting the target imaging parameters as the working parameters of the high-light-sensitivity imaging component.
For specific limitations of the advection fog detection device, reference may be made to the above limitations of the advection fog detection method, which are not repeated here. All or part of the modules in the advection fog detection device can be implemented by software, by hardware, or by a combination thereof. Each module may be embedded in hardware form in, or independent of, a processor in the computer device, or stored in software form in a memory of the computer device, so that the processor can invoke and execute the operations corresponding to each module.
In one embodiment, a computer device is provided, which may be a server, and its internal structure diagram may be as shown in fig. 8. The computer device includes a processor, a memory, and a network interface connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The database of the computer device is used for storing data of the advection fog detection method. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement a method of advection fog detection.
Those skilled in the art will appreciate that the architecture shown in fig. 8 is merely a block diagram of some of the structures associated with the disclosed aspects and does not limit the computer devices to which the disclosed aspects apply; a particular computer device may include more or fewer components than shown, or combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is provided, comprising a memory and a processor, the memory having a computer program stored therein, the processor implementing the following steps when executing the computer program:
acquiring environmental data and video data of a target area, and detecting whether the environmental data meets a preset condition, wherein the preset condition is related to meteorological factors formed by advection fog;
if the environmental data meet the preset conditions, detecting whether a target image corresponding to the target area comprises advection fog, wherein the target image is any one of a plurality of video frames included in the video data;
and if the target image comprises advection fog, acquiring the visibility value of the target area according to the video data.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
acquiring a binary image corresponding to the target area according to the plurality of video frames included in the video data, wherein the binary image includes a first area and a second area, the first area corresponds to a sky area, the second area corresponds to a sea area, pixel values of all pixel points in the first area are first values, pixel values of all pixel points in the second area are second values, and the first values and the second values are different;
obtaining an area ratio of the first region and the second region, the area ratio being used to characterize a clarity of a sea-sky boundary between the sky region and the sea-surface region;
and searching the visibility value of the target area corresponding to the area ratio in a preset visibility mapping table according to the area ratio.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
performing matrix operation on the plurality of video frames to obtain a target frame;
comparing the original pixel value of each pixel point of the target frame with a preset pixel threshold value;
and according to the comparison result, carrying out binarization processing on the original pixel value of each pixel point of the target frame to obtain the binary image.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
traversing pixel values of all pixel points in the binary image, and determining the position coordinates of the first region and the second region in the binary image;
calculating the area of the first region and the area of the second region according to each of the position coordinates;
and calculating the ratio of the area of the first region to the area of the second region to obtain the area ratio.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
inputting the target image into an advection fog recognition model to obtain a recognition result, wherein the recognition result is used for representing that the target image comprises advection fog or the target image does not comprise advection fog, and the advection fog recognition model is obtained by training a plurality of advection fog sample images at different historical moments.
In one embodiment, the environmental data includes at least one of wind speed, wind direction, temperature, and air humidity corresponding to the target area; under the condition that the environment data comprises the wind speed, the wind direction, the temperature and the air humidity corresponding to the target area, the processor executes a computer program to further realize the following steps:
detecting whether the wind speed is within a preset wind speed range, detecting whether the wind direction is a preset wind direction, detecting whether the temperature is smaller than a preset temperature threshold value and detecting whether the air humidity is larger than a preset humidity threshold value;
and if the wind speed is within the preset wind speed range, the wind direction is the preset wind direction, the temperature is smaller than the preset temperature threshold value, and the air humidity is larger than the preset humidity threshold value, determining that the environmental data meets the preset condition.
In one embodiment, the video data is specifically acquired by a highly sensitive imaging component; the processor, when executing the computer program, further performs the steps of:
measuring a target brightness value corresponding to the target area through a luminance meter of a light instrument;
searching a preset parameter adjustment table according to the target brightness value to obtain a target imaging parameter corresponding to the target brightness value, wherein the parameter adjustment table comprises a mapping relation between each brightness value and each imaging parameter;
and setting the target imaging parameters as the working parameters of the high-sensitivity imaging assembly.
In one embodiment, a computer-readable storage medium is provided, having a computer program stored thereon, which when executed by a processor, performs the steps of:
acquiring environmental data and video data of a target area, and detecting whether the environmental data meets a preset condition, wherein the preset condition is related to meteorological factors formed by advection fog;
if the environmental data meet the preset conditions, detecting whether a target image corresponding to the target area comprises advection fog, wherein the target image is any one of a plurality of video frames included in the video data;
and if the target image comprises advection fog, acquiring the visibility value of the target area according to the video data.
In one embodiment, the computer program when executed by the processor further performs the steps of:
acquiring a binary image corresponding to the target area according to the plurality of video frames included in the video data, wherein the binary image includes a first area and a second area, the first area corresponds to a sky area, the second area corresponds to a sea area, pixel values of all pixel points in the first area are first values, pixel values of all pixel points in the second area are second values, and the first values and the second values are different;
obtaining an area ratio of the first region and the second region, the area ratio being used to characterize a clarity of a sea-sky boundary between the sky region and the sea-surface region;
and searching the visibility value of the target area corresponding to the area ratio in a preset visibility mapping table according to the area ratio.
In one embodiment, the computer program when executed by the processor further performs the steps of:
performing matrix operation on the plurality of video frames to obtain a target frame;
comparing the original pixel value of each pixel point of the target frame with a preset pixel threshold value;
and according to the comparison result, carrying out binarization processing on the original pixel value of each pixel point of the target frame to obtain the binary image.
In one embodiment, the computer program when executed by the processor further performs the steps of:
traversing pixel values of all pixel points in the binary image, and determining the position coordinates of the first region and the second region in the binary image;
calculating the area of the first region and the area of the second region according to each of the position coordinates;
and calculating the ratio of the area of the first region to the area of the second region to obtain the area ratio.
In one embodiment, the computer program when executed by the processor further performs the steps of:
inputting the target image into an advection fog recognition model to obtain a recognition result, wherein the recognition result is used for representing that the target image comprises advection fog or the target image does not comprise advection fog, and the advection fog recognition model is obtained by training a plurality of advection fog sample images at different historical moments.
In one embodiment, the environmental data includes at least one of wind speed, wind direction, temperature, and air humidity corresponding to the target area; in case the environmental data comprises the wind speed, the wind direction, the temperature and the air humidity corresponding to the target area, the computer program when being executed by a processor further realizes the steps of:
detecting whether the wind speed is within a preset wind speed range, detecting whether the wind direction is a preset wind direction, detecting whether the temperature is smaller than a preset temperature threshold value and detecting whether the air humidity is larger than a preset humidity threshold value;
and if the wind speed is within the preset wind speed range, the wind direction is the preset wind direction, the temperature is smaller than the preset temperature threshold value, and the air humidity is larger than the preset humidity threshold value, determining that the environmental data meets the preset condition.
In one embodiment, the video data is specifically acquired by a highly sensitive imaging component; the computer program when executed by the processor further realizes the steps of:
measuring a target brightness value corresponding to the target area through a luminance meter of a light instrument;
searching a preset parameter adjustment table according to the target brightness value to obtain a target imaging parameter corresponding to the target brightness value, wherein the parameter adjustment table comprises a mapping relation between each brightness value and each imaging parameter;
and setting the target imaging parameters as the working parameters of the high-sensitivity imaging assembly.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program instructing relevant hardware; the computer program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, database or other medium used in the embodiments provided herein can include at least one of non-volatile and volatile memory. Non-volatile memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash memory, optical storage, or the like. Volatile memory can include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM can take many forms, such as Static Random Access Memory (SRAM) or Dynamic Random Access Memory (DRAM), among others.
The technical features of the above embodiments can be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the above embodiments are not described, but should be considered as the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
The above-mentioned embodiments only express several embodiments of the present application, and the description thereof is more specific and detailed, but not construed as limiting the scope of the invention. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the concept of the present application, which falls within the scope of protection of the present application. Therefore, the protection scope of the present patent shall be subject to the appended claims.
Claims (10)
1. A method of advection fog detection, the method comprising:
acquiring environmental data and video data of a target area, and detecting whether the environmental data meets a preset condition, wherein the preset condition is related to meteorological factors formed by advection fog;
if the environmental data meet the preset conditions, detecting whether a target image corresponding to the target area comprises advection fog, wherein the target image is any one of a plurality of video frames included in the video data;
and if the target image comprises advection fog, acquiring the visibility value of the target area according to the video data.
2. The method of claim 1, wherein said obtaining visibility values of said target area from said video data comprises:
acquiring a binary image corresponding to the target area according to the plurality of video frames included in the video data, wherein the binary image includes a first area and a second area, the first area corresponds to a sky area, the second area corresponds to a sea area, pixel values of all pixel points in the first area are first values, pixel values of all pixel points in the second area are second values, and the first values and the second values are different;
obtaining an area ratio of the first region and the second region, the area ratio being used to characterize a clarity of a sea-sky boundary between the sky region and the sea-surface region;
and searching the visibility value of the target area corresponding to the area ratio in a preset visibility mapping table according to the area ratio.
3. The method according to claim 2, wherein the obtaining a binary image corresponding to the target region according to the plurality of video frames included in the video data comprises:
performing matrix operation on the plurality of video frames to obtain a target frame;
comparing the original pixel value of each pixel point of the target frame with a preset pixel threshold value;
and according to the comparison result, carrying out binarization processing on the original pixel value of each pixel point of the target frame to obtain the binary image.
4. The method of claim 2, wherein obtaining the area ratio of the first region and the second region comprises:
traversing pixel values of all pixel points in the binary image, and determining the position coordinates of the first region and the second region in the binary image;
calculating the area of the first region and the area of the second region according to each of the position coordinates;
and calculating the ratio of the area of the first region to the area of the second region to obtain the area ratio.
5. The method of claim 1, wherein the detecting whether the target image corresponding to the target area includes advection fog comprises:
inputting the target image into an advection fog recognition model to obtain a recognition result, wherein the recognition result is used for representing that the target image comprises advection fog or the target image does not comprise advection fog, and the advection fog recognition model is obtained by training a plurality of advection fog sample images at different historical moments.
6. The method of claim 1, wherein the environmental data includes at least one of wind speed, wind direction, temperature, and air humidity corresponding to the target area; under the circumstance that the environmental data include the wind speed, the wind direction, the temperature and the air humidity corresponding to the target area, detecting whether the environmental data satisfy a preset condition includes:
detecting whether the wind speed is within a preset wind speed range, detecting whether the wind direction is a preset wind direction, detecting whether the temperature is smaller than a preset temperature threshold value and detecting whether the air humidity is larger than a preset humidity threshold value;
and if the wind speed is within the preset wind speed range, the wind direction is the preset wind direction, the temperature is smaller than the preset temperature threshold value, and the air humidity is larger than the preset humidity threshold value, determining that the environmental data meets the preset condition.
7. The method according to claim 1, characterized in that the video data are acquired in particular by a highly sensitive imaging component; the method further comprises the following steps:
measuring a target brightness value corresponding to the target area through a luminance meter of a light instrument;
searching a preset parameter adjustment table according to the target brightness value to obtain a target imaging parameter corresponding to the target brightness value, wherein the parameter adjustment table comprises a mapping relation between each brightness value and each imaging parameter;
and setting the target imaging parameters as the working parameters of the high-sensitivity imaging assembly.
8. An advection fog detection apparatus, the apparatus comprising:
the processing module is used for acquiring environmental data and video data of a target area and detecting whether the environmental data meet preset conditions, wherein the preset conditions are related to meteorological factors formed by advection fog;
a detection module, configured to detect whether a target image corresponding to the target area includes advection fog if the environmental data meets the preset condition, where the target image is any one of a plurality of video frames included in the video data;
and the obtaining module is used for obtaining the visibility value of the target area according to the video data if the target image comprises advection fog.
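The three modules of claim 8 form a short-circuiting pipeline: environment gate, then image-level fog detection, then visibility estimation. A minimal structural sketch, with the class name and the three callables as placeholder assumptions:

```python
class AdvectionFogDetector:
    """Sketch of the apparatus of claim 8 as three cooperating modules."""

    def __init__(self, check_environment, detect_fog_in_frame, estimate_visibility):
        self.check_environment = check_environment      # processing module
        self.detect_fog_in_frame = detect_fog_in_frame  # detection module
        self.estimate_visibility = estimate_visibility  # obtaining module

    def run(self, environmental_data, video_frames):
        """Return a visibility value, or None if either gate fails."""
        if not self.check_environment(environmental_data):
            return None
        target_image = video_frames[0]  # any one frame of the video data
        if not self.detect_fog_in_frame(target_image):
            return None
        return self.estimate_visibility(video_frames)
```

Because the cheap environmental check runs first, the more expensive image analysis and visibility estimation are skipped whenever the weather rules out advection fog.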
9. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor, when executing the computer program, implements the steps of the method of any of claims 1 to 7.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010759990.2A CN111785094B (en) | 2020-07-31 | 2020-07-31 | Advection fog detection method and device, computer equipment and readable storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111785094A true CN111785094A (en) | 2020-10-16 |
CN111785094B CN111785094B (en) | 2021-12-07 |
Family
ID=72765612
Family Applications (1)
Application Number | Title | Priority Date | Filing Date
---|---|---|---
CN202010759990.2A (CN111785094B, Expired - Fee Related) | Advection fog detection method and device, computer equipment and readable storage medium | 2020-07-31 | 2020-07-31
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111785094B (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11372133B2 (en) * | 2019-03-28 | 2022-06-28 | Xiamen Kirincore Iot Technology Ltd. | Advection fog forecasting system and forecasting method |
Citations (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6721644B2 (en) * | 2000-08-02 | 2004-04-13 | Alfred B. Levine | Vehicle drive override subsystem |
CN101281142A (en) * | 2007-12-28 | 2008-10-08 | 深圳先进技术研究院 | Method for measuring atmosphere visibility |
CN101419749A (en) * | 2008-11-20 | 2009-04-29 | 陈伟 | Low-visibility road traffic guiding method |
CN102426398A (en) * | 2011-08-25 | 2012-04-25 | 南通航运职业技术学院 | Course weather forecast method |
CN103186906A (en) * | 2011-12-28 | 2013-07-03 | 中国科学院沈阳自动化研究所 | Real-time infrared dynamic scene simulation method for multiple objects in sea and sky background |
CN103295401A (en) * | 2013-06-10 | 2013-09-11 | 中山市拓维电子科技有限公司 | Road surface weather condition monitoring system |
US20130342692A1 (en) * | 2011-01-26 | 2013-12-26 | Nanjing University | Ptz video visibility detection method based on luminance characteristic |
CN104634740A (en) * | 2013-11-12 | 2015-05-20 | 中国电信股份有限公司 | Monitoring method and monitoring device of haze visibility |
CN106254827A (en) * | 2016-08-05 | 2016-12-21 | 安徽金赛弗信息技术有限公司 | A kind of group's mist Intelligent Recognition method for early warning and device thereof |
KR101919814B1 (en) * | 2017-09-29 | 2018-11-19 | 주식회사 미래기후 | Fog forecasting system and fog forecasting method using the same |
CN109073779A (en) * | 2013-07-31 | 2018-12-21 | 气象预报公司 | Industry analysis system based on weather |
CN109409336A (en) * | 2018-11-30 | 2019-03-01 | 安徽继远软件有限公司 | A kind of dense fog early warning system and method based on image recognition |
CN109932758A (en) * | 2019-03-28 | 2019-06-25 | 厦门龙辉芯物联网科技有限公司 | A kind of advection fog forecast system and forecasting procedure |
US20190347778A1 (en) * | 2018-05-10 | 2019-11-14 | Eagle Technology, Llc | Method and system for a measure of visibility from a single daytime image |
CN111145120A (en) * | 2019-12-26 | 2020-05-12 | 上海眼控科技股份有限公司 | Visibility detection method and device, computer equipment and storage medium |
US20200195847A1 (en) * | 2017-08-31 | 2020-06-18 | SZ DJI Technology Co., Ltd. | Image processing method, and unmanned aerial vehicle and system |
Non-Patent Citations (2)
Title |
---|
Fu Gang et al.: "Comparative analysis of numerical simulation and satellite retrieval of atmospheric visibility", Periodical of Ocean University of China *
Song Xiaojiang et al.: "Simulation of horizontal visibility over the Bohai Sea region with a non-hydrostatic meso-β-scale model", Marine Forecasts *
Similar Documents
Publication | Title
---|---
CN111179334A | Sea surface small-area oil spilling area detection system and detection method based on multi-sensor fusion
CN106570863A | Detection method and device for power transmission line
CN106709903B | PM2.5 concentration prediction method based on image quality
CN111476785B | Night infrared light-reflecting water gauge detection method based on position recording
CN111932519A | Weather prediction method and device, computer equipment and storage medium
CN109916415B | Road type determination method, device, equipment and storage medium
CN116612103B | Intelligent detection method and system for building structure cracks based on machine vision
CN115240089A | Vehicle detection method of aerial remote sensing image
Varjo et al. | Image based visibility estimation during day and night
CN111785094B | Advection fog detection method and device, computer equipment and readable storage medium
CN117690096B | Contact net safety inspection system adapting to different scenes
KR102688780B1 | Diagnostic method for facilities of power transmission using unmanned aerial vehicle
CN116778696B | Vision-based intelligent urban waterlogging early-warning method and system
CN116228756B | Method and system for detecting bad points of camera in automatic driving
CN110765900B | Automatic detection of illegal buildings method and system based on DSSD
CN117495954A | Icing thickness detection method and device, electronic equipment and storage medium
CN111914933A | Snowfall detection method and device, computer equipment and readable storage medium
CN116563774B | Automatic identification method and system for illegal mining of cultivated land in China
CN114782561B | Smart agriculture cloud platform monitoring system based on big data
CN116794650A | Millimeter wave radar and camera data fusion target detection method and device
KR20210044127A | Visual range measurement and alarm system based on video analysis and method thereof
CN111736237A | Radiation fog detection method and device, computer equipment and readable storage medium
CN114821035A | Distance parameter identification method for infrared temperature measurement equipment of power equipment
CN113963230A | Parking space detection method based on deep learning
TWI662509B | Development of a disdrometer and particle tracking process thereof
Legal Events
Code | Title | Description
---|---|---
PB01 | Publication |
SE01 | Entry into force of request for substantive examination |
GR01 | Patent grant |
PE01 | Entry into force of the registration of the contract for pledge of patent right | Denomination of invention: Advection fog detection method, device, computer equipment and readable storage medium; Effective date of registration: 2022-02-11; Granted publication date: 2021-12-07; Pledgee: Shanghai Bianwei Network Technology Co., Ltd.; Pledgor: SHANGHAI EYE CONTROL TECHNOLOGY Co., Ltd.; Registration number: Y2022310000023
CF01 | Termination of patent right due to non-payment of annual fee | Granted publication date: 2021-12-07