CN114202542A - Visibility inversion method and device, computer equipment and storage medium - Google Patents


Info

Publication number
CN114202542A
CN114202542A
Authority
CN
China
Prior art keywords
visibility
target
sequence
image
target object
Prior art date
Legal status
Granted
Application number
CN202210148393.5A
Other languages
Chinese (zh)
Other versions
CN114202542B (en)
Inventor
Pan Tao (潘涛)
Li Qiang (李强)
Zheng Xin (郑昕)
Current Assignee
Xiangji Technology Co ltd
Original Assignee
Xiangji Technology Wuhan Co ltd
Priority date
Filing date
Publication date
Application filed by Xiangji Technology Wuhan Co ltd
Priority to CN202210148393.5A
Publication of CN114202542A
Application granted
Publication of CN114202542B
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/0002: Inspection of images, e.g. flaw detection
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/22: Matching criteria, e.g. proximity measures

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a visibility inversion method and device, computer equipment and a storage medium, wherein the visibility inversion method comprises the following steps: acquiring a target image at the current moment; obtaining a first pixel position set corresponding to a first target object sub-image set and a second pixel position set corresponding to a second target object sub-image set; calculating a first standard deviation corresponding to the first pixel position set and a second standard deviation corresponding to the second pixel position set, and respectively inputting the first standard deviation and the second standard deviation into a visibility regression model constructed in advance to obtain a first visibility sequence corresponding to the first target object sub-image set and a second visibility sequence corresponding to the second target object sub-image set; calculating a third visibility sequence of the target image based on a dark channel algorithm; and determining the target visibility of the target image according to the first visibility sequence, the second visibility sequence and the third visibility sequence. The method avoids the defects of the dark channel algorithm and improves the accuracy of visibility calculation.

Description

Visibility inversion method and device, computer equipment and storage medium
Technical Field
The invention relates to the field of meteorology, in particular to a visibility inversion method and device, computer equipment and a storage medium.
Background
Visibility is the maximum distance at which an observer with normal sight can identify an object against its background. That is, in the daytime, with the sky near the horizon as the background, it is the distance at which the outline of a dark ground target subtending a visual angle larger than 20 degrees can be clearly seen and the object identified; at night, it is the distance at which the luminous point of a target lamp can be clearly seen. Changes in visibility depend mainly on the transparency of the atmosphere; weather phenomena such as fog, smoke, sand, dust, heavy snow and drizzle make the atmosphere turbid and reduce its transparency.
Although the schemes provided by the prior art can realize visibility calculation, the dark channel algorithm is, owing to its own limitations, suitable only for low visibility conditions, so visibility calculation based on the dark channel algorithm is inappropriate in some situations. For example, the ambient light brightness required by the dark channel algorithm is prone to calculation error, so the calculated visibility is not accurate enough and carries a large error.
Disclosure of Invention
The technical problem to be solved by the invention is as follows: in the prior art, the dark channel algorithm is suitable only for low visibility conditions, so that visibility calculated with the dark channel algorithm is not accurate enough and carries a large error.
In order to solve the technical problem, the invention provides a visibility inversion method, which comprises the following steps:
acquiring a target image of visibility to be calculated at the current moment;
obtaining a first pixel position set corresponding to a first target object sub-image set and a second pixel position set corresponding to a second target object sub-image set, wherein the first target object sub-image set and the second target object sub-image set are image regions, corresponding to a plurality of targets, extracted from the target image based on different image processing algorithms;
calculating a first standard deviation corresponding to the first pixel position set and a second standard deviation corresponding to the second pixel position set, and respectively inputting the first standard deviation and the second standard deviation into a visibility regression model which is constructed in advance to obtain a first visibility sequence corresponding to the first target object sub-image set and a second visibility sequence corresponding to the second target object sub-image set;
calculating a third visibility sequence of the target image based on a dark channel algorithm;
and determining the target visibility of the target image according to the first visibility sequence, the second visibility sequence and the third visibility sequence.
Optionally, the obtaining a first pixel position set corresponding to the first target object sub-image set and a second pixel position set corresponding to the second target object sub-image set includes:
extracting the first target object sub-image set of the target image by using a Canny edge detection operator to obtain the first pixel position set; and
extracting the second target object sub-image set of the target image based on the gray values of the gray image to obtain the second pixel position set.
Optionally, the step of constructing the visibility regression model includes:
acquiring a reference image with clear visibility;
selecting a plurality of target objects from the reference image, wherein the plurality of target objects comprise short-distance target objects and long-distance target objects;
removing the target objects of which the pixel values of the image edges are smaller than a first set threshold value from the plurality of target objects;
acquiring the pixel positions of the plurality of target objects remaining after the removal and the actual distance between each target object and a camera;
adding white noise to the reference image to simulate fogging, and obtaining a group of simulated images;
calculating the visibility sequence of the group of simulated images by using a dark channel algorithm, and obtaining the standard deviations of the simulated images;
and fitting to obtain a regression coefficient of the visibility regression model based on the visibility sequence and the corresponding standard deviation, and constructing the visibility regression model based on the regression coefficient.
Optionally, after obtaining the first visibility sequence corresponding to the first target object sub-image set and the second visibility sequence corresponding to the second target object sub-image set, the method further includes:
determining a plurality of visibility combinations corresponding to the same target object from the first visibility sequence and the second visibility sequence, wherein the visibility combinations comprise a first visibility and a second visibility;
respectively calculating visibility difference values between the first visibility and the second visibility in each visibility combination;
if the visibility difference value is larger than a set value, determining that the first visibility and the second visibility in the visibility combination are not matched, removing the same target object from the first target object sub-image set and the second target object sub-image set respectively, and deleting the corresponding first visibility and second visibility in the visibility combination from the first visibility sequence and the second visibility sequence respectively.
Optionally, the determining the target visibility of the target image according to the first visibility sequence, the second visibility sequence and the third visibility sequence includes:
determining a visibility grade corresponding to the target image according to the first visibility sequence, the second visibility sequence and the third visibility sequence;
determining weighting coefficients corresponding to the first visibility sequence, the second visibility sequence and the third visibility sequence based on the visibility grade;
and performing weighting processing on the first visibility sequence, the second visibility sequence and the third visibility sequence according to the determined weighting coefficients to obtain the target visibility of the target image.
Optionally, after the weighting processing is performed on the first visibility sequence, the second visibility sequence, and the third visibility sequence according to the determined weighting coefficients, and before the target visibility of the target image is obtained, the method further includes:
and performing edge extraction on the target objects in the first target object sub-image set and the second target object sub-image set after the unmatched target objects are removed, and removing the target objects of which the pixel values of the image edges are smaller than a set threshold value.
Optionally, the method further comprises:
and if no target object exists in the first target object sub-image set and the second target object sub-image set after the removal, determining the target visibility determined at the previous moment as the target visibility of the target image at the current moment.
Optionally, after determining the target visibility of the target image according to the first visibility sequence, the second visibility sequence and the third visibility sequence, the method further includes:
calculating a difference value between the target visibility of the target image at the current moment and the target visibility of the target image at the previous moment;
and correcting the target visibility at the current moment according to the difference value of the target visibility to obtain the corrected target visibility at the current moment.
In order to solve the above technical problem, the present invention provides a visibility inversion apparatus, including:
the target image acquisition module is used for acquiring a target image of the visibility to be calculated;
a target position obtaining module, configured to obtain a first set of pixel positions corresponding to a first target object sub-image set and a second set of pixel positions corresponding to a second target object sub-image set, where the first target object sub-image set and the second target object sub-image set are image regions corresponding to multiple targets extracted from the target image based on different image processing algorithms;
the first visibility calculation module is used for calculating a first standard deviation corresponding to the first pixel position set and a second standard deviation corresponding to the second pixel position set, and inputting the first standard deviation and the second standard deviation respectively into a visibility regression model which is constructed in advance to obtain a first visibility sequence corresponding to the first target object sub-image set and a second visibility sequence corresponding to the second target object sub-image set;
the second visibility calculation module is used for calculating a third visibility sequence of the target image based on a dark channel algorithm;
and the target visibility determining module is used for determining the target visibility of the target image according to the first visibility sequence, the second visibility sequence and the third visibility sequence.
Optionally, the target position obtaining module is configured to:
extracting the first target object sub-image set of the target image by using a Canny edge detection operator to obtain the first pixel position set; and
extracting the second target object sub-image set of the target image based on the gray values of the gray image to obtain the second pixel position set.
Optionally, the first visibility calculation module is further configured to:
after a first visibility sequence corresponding to the first target object sub-image set and a second visibility sequence corresponding to the second target object sub-image set are obtained, determining a plurality of visibility combinations corresponding to the same target object from the first visibility sequence and the second visibility sequence, wherein the visibility combinations comprise a first visibility and a second visibility;
respectively calculating visibility difference values between the first visibility and the second visibility in each visibility combination;
if the visibility difference value is larger than a set value, determining that the first visibility and the second visibility in the visibility combination are not matched, removing the same target object from the first target object sub-image set and the second target object sub-image set respectively, and deleting the corresponding first visibility and second visibility in the visibility combination from the first visibility sequence and the second visibility sequence respectively.
Optionally, the target visibility determining module is configured to:
determining a visibility grade corresponding to the target image according to the first visibility sequence, the second visibility sequence and the third visibility sequence;
determining weighting coefficients corresponding to the first visibility sequence, the second visibility sequence and the third visibility sequence based on the visibility grade;
and performing weighting processing on the first visibility sequence, the second visibility sequence and the third visibility sequence according to the determined weighting coefficients to obtain the target visibility of the target image.
Optionally, the target visibility determination module is further configured to:
after weighting processing is carried out on the first visibility sequence, the second visibility sequence and the third visibility sequence according to the determined weighting coefficients, edge extraction is carried out on the target objects in the first target object sub-image set and the second target object sub-image set from which unmatched target objects have been removed, the target objects whose image-edge pixel values are smaller than a set threshold value are removed, and the target visibility of the target image is obtained based on the first target object sub-image set and the second target object sub-image set after the removal.
Optionally, the target visibility determination module is further configured to:
and if no target object exists in the first target object sub-image set and the second target object sub-image set after the removal, determining the target visibility determined at the previous moment as the target visibility of the target image at the current moment.
Optionally, the target visibility determination module is further configured to:
after determining the target visibility of the target image according to the first visibility sequence, the second visibility sequence and the third visibility sequence, calculating a difference value between the target visibility of the target image at the current moment and the target visibility of the target image at the previous moment;
and correcting the target visibility at the current moment according to the difference value of the target visibility to obtain the corrected target visibility at the current moment.
In order to solve the above technical problem, the present invention provides a computer device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the above method when executing the computer program.
To solve the above technical problem, the present invention provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the above method.
Compared with the prior art, one or more embodiments in the above scheme can have the following advantages or beneficial effects:
by applying the visibility inversion method, the visibility inversion device, the computer equipment and the storage medium, a target image of visibility to be calculated at the current moment is obtained firstly in the visibility inversion process at the current moment; obtaining a first pixel position set corresponding to a first target object sub-image set and a second pixel position set corresponding to a second target object sub-image set, wherein the first target object sub-image set and the second target object sub-image set are image regions extracted from a target image based on different image processing algorithms and corresponding to a plurality of targets; calculating a first standard deviation corresponding to the first pixel position set and a second standard deviation corresponding to the second pixel position set, and respectively inputting the first standard deviation and the second standard deviation into a visibility regression model which is constructed in advance to obtain a first visibility sequence corresponding to the first target object subgraph set and a second visibility sequence corresponding to the second target object subgraph set; calculating a third visibility sequence of the target image based on a surging channel algorithm; and determining the target visibility of the target image according to the first visibility sequence, the second visibility sequence and the third visibility sequence.
Therefore, in the visibility inversion process, two different extraction algorithms are used to select the target objects in the target image, and the visibility values corresponding to the same target object sub-image obtained by the different extraction algorithms are verified against each other, which avoids the large errors a single algorithm is prone to. Furthermore, the visibility calculated by the regression model and the visibility calculated by the dark channel algorithm are combined by weighting to determine the final target visibility, so that the target visibility of a target image can be calculated accurately under different visibility levels. This overcomes the defect that the dark channel algorithm is suitable only for low-visibility scenes: the advantages of the dark channel algorithm are fully utilized in low-visibility scenes, its shortcomings are compensated to a certain extent, and an accurate visibility calculation result is still obtained in scenes with high visibility.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly described below. It is obvious that the drawings in the following description are only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a flow chart of a visibility inversion method according to an embodiment of the present invention;
fig. 2 is an edge image extracted by using a Canny edge extraction operator according to an embodiment of the present invention;
FIG. 3 is another flow chart of a visibility inversion method according to an embodiment of the present invention;
FIG. 4 is a further flow chart of a visibility inversion method according to an embodiment of the present invention;
FIG. 5 is a block diagram of a visibility inversion apparatus according to an embodiment of the present invention;
fig. 6 is a block diagram of a computer device provided by the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In order to solve the problems in the prior art that the dark channel algorithm is suitable only for low visibility conditions, so that visibility calculated with the dark channel algorithm is not accurate enough and carries large errors, the embodiments of the invention provide a visibility inversion method and device, computer equipment and a storage medium.
The visibility inversion method provided by the embodiment of the invention is explained below.
Example one
As shown in fig. 1, the visibility inversion method provided by the present invention may include the following steps:
step S101: and acquiring a target image of the visibility to be calculated at the current moment.
Step S102: and obtaining a first pixel position set corresponding to the first target object sub-image set and a second pixel position set corresponding to the second target object sub-image set.
Wherein the first target object sub-image set and the second target object sub-image set are image regions, corresponding to a plurality of target objects, extracted from the target image based on different image processing algorithms.
In one case, the obtaining a first set of pixel positions corresponding to a first sub-image set of the target object and a second set of pixel positions corresponding to a second sub-image set of the target object includes: extracting a first target object sub-image set of the target image by using a Canny edge detection operator to obtain the first pixel position set, please refer to fig. 2; and extracting a second target object sub-image set of the target image based on the gray value of the gray image to obtain a second pixel position set.
It should be noted that the above-mentioned Canny edge detection operator extraction method and the gray value extraction method based on the gray image are only two specific forms provided by the embodiment of the present invention, and should not be construed as a limitation to the present invention, and those skilled in the art can reasonably set the operation according to specific situations in practical applications, such as specific visibility application scenarios.
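As an illustration of the two extraction routes above, the following Python sketch (using OpenCV and NumPy) derives both pixel position sets from a target image. The bounding boxes of the target objects, the Canny hysteresis thresholds and the gray threshold are all assumptions made for illustration, not values prescribed by the invention.

```python
import cv2
import numpy as np

def extract_pixel_position_sets(image_bgr, target_boxes, gray_threshold=128):
    """Return (first_set, second_set): one pixel-coordinate array per target box."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    first_set, second_set = [], []
    for (x, y, w, h) in target_boxes:  # target regions assumed known in advance
        patch = gray[y:y + h, x:x + w]
        # Route 1: Canny edge detection operator (hysteresis thresholds assumed)
        edges = cv2.Canny(patch, 50, 150)
        first_set.append(np.argwhere(edges > 0))
        # Route 2: segmentation by gray value of the grayscale image
        second_set.append(np.argwhere(patch > gray_threshold))
    return first_set, second_set
```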
Step S103: and calculating a first standard deviation corresponding to the first pixel position set and a second standard deviation corresponding to the second pixel position set, and respectively inputting the first standard deviation and the second standard deviation into a visibility regression model which is constructed in advance to obtain a first visibility sequence corresponding to the first target object sub-image set and a second visibility sequence corresponding to the second target object sub-image set.
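A minimal sketch of step S103 follows, under the assumption that the "standard deviation" is taken over the gray values at each target object's pixel positions (the text does not spell out the exact statistic). Here regression_model stands for the pre-constructed cubic model whose construction is described below.

```python
import numpy as np

def standard_deviations(gray, pixel_position_sets):
    """One standard deviation per target object, over its own pixel positions."""
    return [gray[p[:, 0], p[:, 1]].astype(float).std()
            for p in pixel_position_sets]

def visibility_sequence(gray, pixel_position_sets, regression_model):
    # regression_model: callable mapping a standard deviation to a visibility
    return [regression_model(x)
            for x in standard_deviations(gray, pixel_position_sets)]
```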
In one case, the visibility regression model may be constructed in the following manner, and the specific steps include:
(1) acquiring a reference image with clear visibility;
(2) selecting a plurality of target objects from the reference image, wherein the plurality of target objects comprise short-distance target objects and long-distance target objects;
(3) removing the target objects of which the pixel values of the image edges are smaller than a first set threshold value from the plurality of target objects;
(4) acquiring the pixel positions of the plurality of target objects remaining after the removal and the actual distance between each target object and a camera;
(5) adding white noise to the reference image to simulate fogging, and obtaining a group of simulated images;
(6) calculating the visibility sequence of the group of simulated images by using a dark channel algorithm, and obtaining the standard deviations of the simulated images;
(7) and fitting to obtain a regression coefficient of the visibility regression model based on the visibility sequence and the corresponding standard deviation, and constructing the visibility regression model based on the regression coefficient.
For example, a cubic polynomial regression model can be established based on the above method, as shown in the following expression:
$vis = a x^{3} + b x^{2} + c x + d$
wherein vis is visibility, a, b, c and d are regression coefficients, and x is standard deviation.
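The construction steps (5) to (7) can be sketched as follows, assuming grayscale reference data, NumPy's polyfit for the cubic fit, and an externally supplied dark-channel visibility routine (such as the one sketched under step S104 below). The noise levels are illustrative assumptions.

```python
import numpy as np

def build_visibility_regression(reference_gray, pixel_positions, distance,
                                dark_channel_visibility,
                                noise_sigmas=(5, 10, 20, 40)):
    """Fit vis = a*x^3 + b*x^2 + c*x + d from white-noise 'fogged' images."""
    rng = np.random.default_rng(0)
    vis, std = [], []
    for sigma in noise_sigmas:
        # Step (5): add white noise to the clear reference image to simulate fog
        noisy = np.clip(reference_gray
                        + rng.normal(0.0, sigma, reference_gray.shape), 0, 255)
        # Step (6): dark-channel visibility and standard deviation per image
        vis.append(dark_channel_visibility(noisy, distance))
        std.append(noisy[pixel_positions[:, 0], pixel_positions[:, 1]].std())
    a, b, c, d = np.polyfit(std, vis, deg=3)  # step (7): regression coefficients
    return lambda x: a * x**3 + b * x**2 + c * x + d
```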
Step S104: and calculating a third visibility sequence of the target image based on a dark channel algorithm.
The dark channel algorithm is introduced below. Assuming that the atmosphere is homogeneous, the relationship between visibility and the atmospheric extinction coefficient is shown in the following expression:
$V = \dfrac{-\ln \varepsilon}{\sigma}$
wherein $V$ is the visibility, $\sigma$ is the atmospheric extinction coefficient, and $\varepsilon$ is the contrast threshold.
For aviation work, $\varepsilon = 0.05$ is generally adopted.
From the above expression, it can be seen that the visibility can be obtained once the extinction coefficient along the path from the target object to the lens is known. The dark channel algorithm considers that, in the non-sky regions of an image, at least one color channel has pixel values close to 0 at some pixels, so the transmittance $T$ can be taken as:
$T = 1 - \min\left(\dfrac{img}{A}\right)$
wherein $img$ is the target object image and $A$ is the ambient background light brightness.
Further, the relationship between the extinction coefficient and the transmittance is:
$T = e^{-\sigma L}$, i.e. $\sigma = \dfrac{-\ln T}{L}$
wherein $T$ is the transmittance and $L$ is the distance from the target object to the camera.
Thus, the visibility inversion formula used by the dark channel algorithm in step S104 is obtained as follows:
$V = \dfrac{-\ln \varepsilon}{\sigma} = \dfrac{L \ln \varepsilon}{\ln T}$
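The derivation above can be sketched in Python as follows. The 15x15 minimum-filter patch, the estimate of $A$ as the mean of the brightest dark-channel pixels, and the use of the mean transmittance over the region are common dark-channel conventions assumed here, not details fixed by the text.

```python
import numpy as np
import cv2

def dark_channel(img, patch=15):
    """Per-pixel minimum over color channels, then over a local patch."""
    mins = img.min(axis=2) if img.ndim == 3 else img
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (patch, patch))
    return cv2.erode(mins.astype(np.float64), kernel)

def dark_channel_visibility(img, L, eps=0.05):
    """Visibility via T = 1 - min(img/A), sigma = -ln(T)/L, V = -ln(eps)/sigma."""
    dark = dark_channel(img.astype(np.float64))
    flat = np.sort(dark.ravel())
    A = flat[-max(1, flat.size // 1000):].mean()    # ambient light brightness A
    T = float(np.clip(1.0 - dark / A, 1e-6, 1.0 - 1e-6).mean())
    sigma = -np.log(T) / L                          # extinction coefficient
    return -np.log(eps) / sigma                     # visibility V
```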
step S105: and determining the target visibility of the target image according to the first visibility sequence, the second visibility sequence and the third visibility sequence.
In one implementation, the target visibility of the target image may be determined as follows: determining a visibility grade corresponding to the target image according to the first visibility sequence, the second visibility sequence and the third visibility sequence; determining weighting coefficients corresponding to the first visibility sequence, the second visibility sequence and the third visibility sequence based on the visibility grade; and performing weighting processing on the first visibility sequence, the second visibility sequence and the third visibility sequence according to the determined weighting coefficients to obtain the target visibility of the target image.
For example, if the visibility value at the previous moment is less than 3 km, the weights of the first visibility sequence, the second visibility sequence and the third visibility sequence may be set to 0.2, 0.2 and 0.6 in turn when the three sequences are weighted and averaged; if the visibility value at the previous moment is greater than 3 km and less than 5 km, the weights may be set to 0.33, 0.33 and 0.34 in turn; if the visibility value at the previous moment is greater than 5 km and less than 10 km, the weights may be set to 0.4, 0.4 and 0.2 in turn; and if the visibility value at the previous moment is greater than 10 km, the weights may be set to 0.5, 0.5 and 0 in turn.
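The grade-dependent weighting just described can be sketched as below; averaging each sequence before combining is an assumption, since the text does not fix how a sequence is reduced to a single value.

```python
def fuse_target_visibility(v1_seq, v2_seq, v3_seq, prev_visibility_m):
    """Weighted fusion of the three sequences using the example weight table."""
    if prev_visibility_m < 3000:
        w = (0.2, 0.2, 0.6)        # low visibility: favour the dark channel
    elif prev_visibility_m < 5000:
        w = (0.33, 0.33, 0.34)
    elif prev_visibility_m < 10000:
        w = (0.4, 0.4, 0.2)
    else:
        w = (0.5, 0.5, 0.0)        # high visibility: dark channel excluded
    mean = lambda seq: sum(seq) / len(seq)
    return w[0] * mean(v1_seq) + w[1] * mean(v2_seq) + w[2] * mean(v3_seq)
```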
Further, in step S105, after the first visibility sequence, the second visibility sequence and the third visibility sequence are weighted according to the determined weighting coefficients, and before the target visibility of the target image is obtained, the method may further include the following step: performing edge extraction on the target objects in the first target object sub-image set and the second target object sub-image set from which the unmatched target objects have been removed, and removing the target objects whose image-edge pixel values are smaller than a set threshold value.
In a preferred implementation manner of the present invention, if no target object exists in the first target object sub-image set and the second target object sub-image set after the removal, the target visibility determined at the previous moment is determined as the target visibility of the target image at the current moment, so as to ensure that the visibility inversion result at the current moment is not empty.
Therefore, in the visibility inversion process, two different extraction algorithms are used to select the target objects in the target image, and the visibility values corresponding to the same target object sub-image obtained by the different extraction algorithms are verified against each other, which avoids the large errors a single algorithm is prone to. Furthermore, the visibility calculated by the regression model and the visibility calculated by the dark channel algorithm are combined by weighting to determine the final target visibility, so that the target visibility of a target image can be calculated accurately under different visibility levels. This overcomes the defect that the dark channel algorithm is suitable only for low-visibility scenes: the advantages of the dark channel algorithm are fully utilized in low-visibility scenes, its shortcomings are compensated to a certain extent, and an accurate visibility calculation result is still obtained in scenes with high visibility.
Example two
As shown in fig. 3, another flowchart of the visibility inversion method provided in the present invention may include the following steps:
step S201: and acquiring a target image of the visibility to be calculated at the current moment.
Step S202: and obtaining a first pixel position set corresponding to the first target object sub-image set and a second pixel position set corresponding to the second target object sub-image set.
Wherein the first target object sub-image set and the second target object sub-image set are image regions, corresponding to a plurality of target objects, extracted from the target image based on different image processing algorithms.
Step S203: and calculating a first standard deviation corresponding to the first pixel position set and a second standard deviation corresponding to the second pixel position set, and respectively inputting the first standard deviation and the second standard deviation into a visibility regression model which is constructed in advance to obtain a first visibility sequence corresponding to the first target object sub-image set and a second visibility sequence corresponding to the second target object sub-image set.
Step S204: and determining a plurality of visibility combinations corresponding to the same target object from the first visibility sequence and the second visibility sequence, wherein the visibility combinations comprise a first visibility and a second visibility.
Step S205: and respectively calculating the visibility difference value between the first visibility and the second visibility in each visibility combination.
Step S206: if the visibility difference value is larger than a set value, determining that the first visibility and the second visibility in the visibility combination are not matched, removing the same target object from the first target object sub-image set and the second target object sub-image set respectively, and deleting the corresponding first visibility and second visibility in the visibility combination from the first visibility sequence and the second visibility sequence respectively.
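Steps S204 to S206 can be sketched as follows; pairing the two sequences by a shared target index and the numeric threshold are illustrative assumptions.

```python
def filter_mismatched_pairs(v1_seq, v2_seq, set_value=1000.0):
    """Drop targets whose first and second visibilities differ by > set_value.

    v1_seq[i] and v2_seq[i] are assumed to belong to the same target object;
    set_value (metres) is an assumed placeholder for the configured threshold.
    """
    kept1, kept2 = [], []
    for v1, v2 in zip(v1_seq, v2_seq):
        if abs(v1 - v2) <= set_value:      # matched: keep in both sequences
            kept1.append(v1)
            kept2.append(v2)
    return kept1, kept2
```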
Step S207: and calculating a third visibility sequence of the target image based on a dark channel algorithm.
Step S208: and determining the target visibility of the target image according to the first visibility sequence, the second visibility sequence and the third visibility sequence.
It should be noted that the method embodiment shown in fig. 3 has all the advantages of the method embodiment shown in fig. 1. In addition, the method embodiment shown in fig. 3 is further improved on the basis of the method embodiment shown in fig. 1: after the first visibility sequence and the second visibility sequence are obtained from the visibility regression model, the visibility values in the two sequences are not applied directly for mutual verification; instead, they are screened first, and those whose difference exceeds the set value are removed, which further improves the accuracy of the final target visibility.
Example three
As shown in fig. 4, another flowchart of the visibility inversion method provided in the present invention may include the following steps:
step S301: and acquiring a target image of the visibility to be calculated at the current moment.
Step S302: and obtaining a first pixel position set corresponding to the first target object sub-image set and a second pixel position set corresponding to the second target object sub-image set.
Wherein the first target object sub-image set and the second target object sub-image set are image regions, corresponding to a plurality of target objects, extracted from the target image based on different image processing algorithms.
Step S303: and calculating a first standard deviation corresponding to the first pixel position set and a second standard deviation corresponding to the second pixel position set, and respectively inputting the first standard deviation and the second standard deviation into a visibility regression model which is constructed in advance to obtain a first visibility sequence corresponding to the first target object sub-image set and a second visibility sequence corresponding to the second target object sub-image set.
Step S304: and calculating a third visibility sequence of the target image based on a dark channel algorithm.
Step S305: and determining the target visibility of the target image according to the first visibility sequence, the second visibility sequence and the third visibility sequence.
Step S306: and calculating the difference value between the target visibility of the target image at the current moment and the target visibility of the target image at the previous moment.
Step S307: and correcting the target visibility at the current moment according to the difference value of the target visibility to obtain the corrected target visibility at the current moment.
It can be understood that, when the target visibility at the current moment is compared with the target visibility at the previous moment, since the visibility may be trending either gradually clearer or gradually more blurred, the target visibility at the current moment may be either higher or lower than that at the previous moment. Therefore, in the process of calibrating the target visibility at the current moment with the target visibility at the previous moment, an adjustment proportion can be set, for example 10%: if the target visibility at the current moment exceeds 110% of the target visibility at the previous moment, the target visibility at the previous moment multiplied by 110% is taken as the target visibility at the current moment; conversely, if the target visibility at the current moment is below 90% of the target visibility at the previous moment, the target visibility at the previous moment multiplied by 90% is taken as the target visibility at the current moment; and if the ratio of the target visibility at the current moment to that at the previous moment lies between 90% and 110%, the target visibility at the current moment is output directly, indicating that the current visibility value is accurate and needs no calibration.
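A sketch of this correction with the 10% adjustment proportion from the example:

```python
def correct_current_visibility(current, previous, ratio=0.10):
    """Clamp the current visibility to within +/-10% of the previous value."""
    upper, lower = previous * (1 + ratio), previous * (1 - ratio)
    if current > upper:
        return upper     # rose too fast: cap at 110% of the previous value
    if current < lower:
        return lower     # fell too fast: cap at 90% of the previous value
    return current       # within band: the current value needs no calibration
```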
The visibility inversion apparatus provided in the embodiment of the present invention is explained below.
Example four
As shown in fig. 5, a block diagram of a visibility inversion apparatus provided in an embodiment of the present invention includes:
a target image obtaining module 410, configured to obtain a target image of visibility to be calculated;
a target position obtaining module 420, configured to obtain a first set of pixel positions corresponding to a first target object sub-image set and a second set of pixel positions corresponding to a second target object sub-image set, where the first target object sub-image set and the second target object sub-image set are image regions corresponding to multiple targets extracted from the target image based on different image processing algorithms;
a first visibility calculation module 430, configured to calculate a first standard deviation corresponding to the first pixel position set and a second standard deviation corresponding to the second pixel position set, and input the first standard deviation and the second standard deviation into a visibility regression model that is constructed in advance, to obtain a first visibility sequence corresponding to the first target object sub-image set and a second visibility sequence corresponding to the second target object sub-image set;
the second visibility calculation module 440 is configured to calculate a third visibility sequence of the target image based on a dark channel algorithm;
a target visibility determining module 450, configured to determine target visibility of the target image according to the first visibility sequence, the second visibility sequence, and the third visibility sequence.
In one case, the target position obtaining module 420 is configured to extract a first target object sub-image set of the target image by using a Canny edge detection operator to obtain the first pixel position set; and extracting a second target object sub-image set of the target image based on the gray value of the gray image to obtain a second pixel position set.
In one case, the first visibility calculating module 430 is further configured to determine, after obtaining a first visibility sequence corresponding to the first target object sub-image set and a second visibility sequence corresponding to the second target object sub-image set, multiple visibility combinations corresponding to the same target object from the first visibility sequence and the second visibility sequence, where the visibility combinations include a first visibility and a second visibility; respectively calculate visibility difference values between the first visibility and the second visibility in each visibility combination; and, if the visibility difference value is larger than a set value, determine that the first visibility and the second visibility in the visibility combination are not matched, remove the same target object from the first target object sub-image set and the second target object sub-image set respectively, and delete the corresponding first visibility and second visibility in the visibility combination from the first visibility sequence and the second visibility sequence respectively.
In another case, the target visibility determining module 450 is configured to determine, according to the first visibility sequence, the second visibility sequence, and the third visibility sequence, a visibility level corresponding to the target image; determining weighting coefficients corresponding to the first visibility sequence, the second visibility sequence and the third visibility sequence based on the visibility grade; and performing weighting processing on the first visibility sequence, the second visibility sequence and the third visibility sequence according to the determined weighting coefficients to obtain the target visibility of the target image.
In another case, the target visibility determining module 450 is further configured to, after performing weighting processing on the first visibility sequence, the second visibility sequence and the third visibility sequence according to the determined weighting coefficients, perform edge extraction on the target objects in the first target object sub-image set and the second target object sub-image set from which the unmatched target objects have been removed, remove the target objects whose image-edge pixel values are smaller than a set threshold, and obtain the target visibility of the target image based on the first target object sub-image set and the second target object sub-image set after the removal.
In another case, the target visibility determining module 450 is further configured to determine, if no target object exists in the first target object sub-image set and the second target object sub-image set after the removal, the target visibility determined at the previous moment as the target visibility of the target image at the current moment.
In another situation, the target visibility determining module 450 is further configured to calculate a difference between the target visibility of the target image at the current moment and the target visibility of the target image at the previous moment after determining the target visibility of the target image according to the first visibility sequence, the second visibility sequence and the third visibility sequence; and correcting the target visibility at the current moment according to the difference value of the target visibility to obtain the corrected target visibility at the current moment.
By applying the visibility inversion device provided by the invention, in the visibility inversion process at the current moment, a target image of which the visibility is to be calculated at the current moment is obtained; a first pixel position set corresponding to a first target object sub-image set and a second pixel position set corresponding to a second target object sub-image set are obtained, wherein the first target object sub-image set and the second target object sub-image set are image regions, corresponding to a plurality of targets, extracted from the target image based on different image processing algorithms; a first standard deviation corresponding to the first pixel position set and a second standard deviation corresponding to the second pixel position set are calculated and respectively input into a visibility regression model constructed in advance, to obtain a first visibility sequence corresponding to the first target object sub-image set and a second visibility sequence corresponding to the second target object sub-image set; a third visibility sequence of the target image is calculated based on a dark channel algorithm; and the target visibility of the target image is determined according to the first visibility sequence, the second visibility sequence and the third visibility sequence.
Therefore, in the visibility inversion process, two different extraction algorithms are used to select the target objects in the target image, and the visibility values corresponding to the same target object sub-image obtained by the different extraction algorithms are verified against each other, which avoids the large errors a single algorithm is prone to. Furthermore, the visibility calculated by the regression model and the visibility calculated by the dark channel algorithm are combined by weighting to determine the final target visibility, so that the target visibility of a target image can be calculated accurately under different visibility levels. This overcomes the defect that the dark channel algorithm is suitable only for low-visibility scenes: the advantages of the dark channel algorithm are fully utilized in low-visibility scenes, its shortcomings are compensated to a certain extent, and an accurate visibility calculation result is still obtained in scenes with high visibility.
Example five
To solve the above technical problem, the present invention provides a computer device, as shown in fig. 6, including a memory 510, a processor 520, and a computer program stored on the memory and executable on the processor, wherein the processor executes the computer program to implement the method as described above.
The computer device can be a desktop computer, a notebook, a palm computer, a cloud server and other computing devices. The computer device may include, but is not limited to, a processor 520, a memory 510. Those skilled in the art will appreciate that fig. 6 is merely an example of a computing device and is not intended to be limiting and may include more or fewer components than those shown, or some of the components may be combined, or different components, e.g., the computing device may also include input output devices, network access devices, buses, etc.
The Processor 520 may be a Central Processing Unit (CPU), other general purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other Programmable logic device, discrete Gate or transistor logic, discrete hardware components, etc. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The storage 510 may be an internal storage unit of the computer device, such as a hard disk or a memory of the computer device. The memory 510 may also be an external storage device of a computer device, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), etc. provided on the computer device. Further, the memory 510 may also include both internal storage units and external storage devices of the computer device. The memory 510 is used for storing the computer programs and other programs and data required by the computer device. The memory 510 may also be used to temporarily store data that has been output or is to be output.
Example six
The embodiment of the present application further provides a computer-readable storage medium, which may be a computer-readable storage medium contained in the memory in the foregoing embodiment; or it may be a computer-readable storage medium that exists separately and is not incorporated into a computer device. The computer-readable storage medium stores one or more computer programs which, when executed by a processor, implement the methods described above.
The integrated modules/units, if implemented in the form of software functional units and sold or used as separate products, may be stored in a computer readable storage medium. Based on such understanding, all or part of the flow in the method of the embodiments described above can be realized by a computer program, which can be stored in a computer-readable storage medium and can realize the steps of the embodiments of the methods described above when the computer program is executed by a processor. Wherein the computer program comprises computer program code, which may be in the form of source code, object code, an executable file or some intermediate form, etc. The computer-readable medium may include: any entity or device capable of carrying the computer program code, recording medium, USB disk, removable hard disk, magnetic disk, optical disk, computer memory, Read-Only Memory (ROM), Random Access Memory (RAM), electrical carrier wave signals, telecommunications signals, software distribution medium, and the like. It should be noted that the computer readable medium may contain content that is subject to appropriate increase or decrease as required by legislation and patent practice in jurisdictions; for example, in some jurisdictions, computer readable media do not include electrical carrier signals and telecommunications signals as required by legislation and patent practice.
For system or apparatus embodiments, since they are substantially similar to method embodiments, they are described in relative simplicity, and reference may be made to some descriptions of method embodiments for related points.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
It is to be understood that the terminology used in the description of the present application is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in the specification of the present application and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should also be understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when", "upon" or "in response to a determination" or "in response to a detection". Similarly, the phrase "if it is determined" or "if a described condition or event is detected" may be interpreted, depending on the context, to mean "upon determining" or "in response to determining" or "upon detecting a described condition or event" or "in response to detecting a described condition or event".
The above description is only for the preferred embodiment of the present invention, and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention shall fall within the protection scope of the present invention.

Claims (11)

1. A visibility inversion method, comprising:
acquiring a target image of visibility to be calculated at the current moment;
obtaining a first pixel position set corresponding to a first target object sub-image set and a second pixel position set corresponding to a second target object sub-image set, wherein the first target object sub-image set and the second target object sub-image set are image regions, corresponding to a plurality of targets, extracted from the target image based on different image processing algorithms;
calculating a first standard deviation corresponding to the first pixel position set and a second standard deviation corresponding to the second pixel position set, and respectively inputting the first standard deviation and the second standard deviation into a visibility regression model which is constructed in advance to obtain a first visibility sequence corresponding to the first target object sub-image set and a second visibility sequence corresponding to the second target object sub-image set;
calculating a third visibility sequence of the target image based on a dark channel algorithm;
and determining the target visibility of the target image according to the first visibility sequence, the second visibility sequence and the third visibility sequence.
2. The visibility inversion method of claim 1, wherein the obtaining a first set of pixel locations corresponding to a first target object sub-image set and a second set of pixel locations corresponding to a second target object sub-image set comprises:
extracting the first target object sub-image set of the target image by using a Canny edge detection operator to obtain the first pixel position set; and
extracting the second target object sub-image set of the target image based on the gray values of the gray image to obtain the second pixel position set.
3. The visibility inversion method as defined in claim 1, wherein the step of constructing the visibility regression model comprises:
acquiring a reference image captured under clear visibility;
selecting a plurality of target objects from the reference image, wherein the plurality of target objects comprise short-distance target objects and long-distance target objects;
removing, from the plurality of target objects, target objects whose image-edge pixel values are smaller than a first set threshold;
acquiring pixel positions of the target objects remaining after the removal and an actual distance between each target object and the camera;
adding white noise to the reference image to simulate fogging, so as to obtain a group of simulated images;
calculating visibility sequences of the group of simulated images by using a dark channel algorithm, and obtaining standard deviations of the simulated images;
and fitting regression coefficients based on the visibility sequences and the corresponding standard deviations, and constructing the visibility regression model based on the fitted regression coefficients.
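Claim 3's construction loop could be pictured as follows. The linear model form visibility ≈ a·std + b, the noise levels, and the `label_visibility` callable (standing in for the dark-channel labelling of the simulated images) are assumptions made for this sketch.

```python
import numpy as np

def build_visibility_regression(target_patches, label_visibility,
                                noise_sigmas=(5.0, 10.0, 20.0, 40.0), seed=0):
    """Fit a regression from patch standard deviation to visibility.
    target_patches: grayscale target-object patches cut from a clear
    reference image. label_visibility: callable returning a visibility
    label for a simulated foggy patch (e.g. a dark-channel estimate)."""
    rng = np.random.default_rng(seed)
    feats, labels = [], []
    for patch in target_patches:
        for sigma in noise_sigmas:  # white noise simulates fogging (claim 3)
            foggy = np.clip(patch.astype(np.float64)
                            + rng.normal(0.0, sigma, patch.shape), 0, 255)
            feats.append(foggy.std())             # standard-deviation feature
            labels.append(label_visibility(foggy))
    a, b = np.polyfit(feats, labels, deg=1)       # visibility ≈ a * std + b
    return a, b
```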
4. The visibility inversion method according to claim 1, wherein after obtaining the first visibility sequence corresponding to the first target object sub-image set and the second visibility sequence corresponding to the second target object sub-image set, the visibility inversion method further comprises:
determining a plurality of visibility combinations corresponding to the same target object from the first visibility sequence and the second visibility sequence, wherein the visibility combinations comprise a first visibility and a second visibility;
respectively calculating visibility difference values between the first visibility and the second visibility in each visibility combination;
and if the visibility difference value is larger than a set value, determining that the first visibility and the second visibility in the visibility combination do not match, removing the same target object from the first target object sub-image set and the second target object sub-image set respectively, and deleting the corresponding first visibility and second visibility from the first visibility sequence and the second visibility sequence respectively.
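Read compactly, claim 4 is a per-target consistency filter between the two regression-based estimates. A minimal sketch, in which the 500 m tolerance is an arbitrary stand-in for the claim's "set value":

```python
def prune_mismatched_targets(vis1, vis2, tolerance=500.0):
    """Drop every target whose first and second visibility estimates
    differ by more than `tolerance` metres (claim 4). Returns the pruned
    sequences plus the kept indices, so the caller can prune the two
    target object sub-image sets in step."""
    kept = [i for i, (a, b) in enumerate(zip(vis1, vis2))
            if abs(a - b) <= tolerance]
    return [vis1[i] for i in kept], [vis2[i] for i in kept], kept
```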
5. The visibility inversion method according to claim 4, wherein determining a target visibility of the target image from the first visibility sequence, the second visibility sequence, and the third visibility sequence comprises:
determining a visibility grade corresponding to the target image according to the first visibility sequence, the second visibility sequence and the third visibility sequence;
determining weighting coefficients corresponding to the first visibility sequence, the second visibility sequence and the third visibility sequence based on the visibility grade;
and performing weighting processing on the first visibility sequence, the second visibility sequence and the third visibility sequence according to the determined weighting coefficients to obtain the target visibility of the target image.
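Claim 5's grade-driven fusion might look like the following; the grade thresholds and weight triples are invented for illustration, since the claims do not publish them.

```python
import numpy as np

def fuse_target_visibility(v1, v2, v3):
    """Weighted fusion of the three visibility sequences (claim 5).
    A provisional grade is taken from the pooled median; each grade
    selects a weight triple for (v1, v2, v3). All numbers illustrative."""
    rough = float(np.median(np.concatenate([v1, v2, v3])))
    if rough < 1000.0:        # low visibility: lean on the dark channel
        w = (0.25, 0.25, 0.50)
    elif rough < 10000.0:     # medium visibility
        w = (0.35, 0.35, 0.30)
    else:                     # high visibility: lean on the regressions
        w = (0.45, 0.45, 0.10)
    return w[0] * np.mean(v1) + w[1] * np.mean(v2) + w[2] * np.mean(v3)
```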
6. The visibility inversion method according to claim 5, wherein after the weighting processing is performed on the first visibility sequence, the second visibility sequence, and the third visibility sequence according to the determined weighting coefficients, and before the target visibility of the target image is obtained, the visibility inversion method further includes:
and performing edge extraction on the target objects in the first target object sub-image set and the second target object sub-image set from which the unmatched target objects have been removed, and removing target objects whose image-edge pixel values are smaller than a set threshold.
7. The visibility inversion method as defined in claim 6, further comprising:
and if no target object remains in the first target object sub-image set and the second target object sub-image set after the removal, determining the target visibility determined at the previous moment as the target visibility of the target image at the current moment.
8. The visibility inversion method according to claim 1, wherein, after determining the target visibility of the target image according to the first visibility sequence, the second visibility sequence, and the third visibility sequence, the visibility inversion method further comprises:
calculating a difference value between the target visibility of the target image at the current moment and the target visibility of the target image at the previous moment;
and correcting the target visibility at the current moment according to the difference value of the target visibility to obtain the corrected target visibility at the current moment.
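One plausible reading of claim 8 is a clamp on the frame-to-frame change in visibility; the maximum step below is an assumed parameter, as the claim only states that the difference value drives the correction.

```python
def correct_with_previous(current: float, previous: float,
                          max_step: float = 2000.0) -> float:
    """Limit the jump between consecutive target visibilities (claim 8).
    If the new estimate moves more than `max_step` metres from the last
    one, advance only `max_step` in that direction."""
    diff = current - previous
    if abs(diff) <= max_step:
        return current
    return previous + (max_step if diff > 0 else -max_step)
```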
9. A visibility inversion apparatus, comprising:
a target image acquisition module, configured to acquire a target image for which visibility is to be calculated;
a target position acquisition module, configured to obtain a first pixel position set corresponding to a first target object sub-image set and a second pixel position set corresponding to a second target object sub-image set, wherein the first target object sub-image set and the second target object sub-image set are image regions, corresponding to a plurality of target objects, extracted from the target image by different image processing algorithms;
a first visibility calculation module, configured to calculate a first standard deviation corresponding to the first pixel position set and a second standard deviation corresponding to the second pixel position set, and to input the first standard deviation and the second standard deviation respectively into a pre-constructed visibility regression model to obtain a first visibility sequence corresponding to the first target object sub-image set and a second visibility sequence corresponding to the second target object sub-image set;
a second visibility calculation module, configured to calculate a third visibility sequence of the target image based on a dark channel algorithm;
and a target visibility determination module, configured to determine the target visibility of the target image according to the first visibility sequence, the second visibility sequence, and the third visibility sequence.
10. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the method according to any of claims 1 to 8 when executing the computer program.
11. A computer-readable storage medium, on which a computer program is stored, which program, when being executed by a processor, carries out the method according to any one of claims 1 to 8.
CN202210148393.5A 2022-02-18 2022-02-18 Visibility inversion method and device, computer equipment and storage medium Active CN114202542B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210148393.5A CN114202542B (en) 2022-02-18 2022-02-18 Visibility inversion method and device, computer equipment and storage medium

Publications (2)

Publication Number Publication Date
CN114202542A (en) 2022-03-18
CN114202542B (en) 2022-04-19

Family

ID=80645655

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210148393.5A Active CN114202542B (en) 2022-02-18 2022-02-18 Visibility inversion method and device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114202542B (en)

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080107314A1 (en) * 2006-09-28 2008-05-08 Siemens Corporate Research, Inc. System and Method For Simultaneously Subsampling Fluoroscopic Images and Enhancing Guidewire Visibility
CN101382497A (en) * 2008-10-06 2009-03-11 南京大学 Visibility detecting method based on monitoring video of traffic condition
CN105931220A (en) * 2016-04-13 2016-09-07 南京邮电大学 Dark channel experience and minimal image entropy based traffic smog visibility detection method
CN109214470A (en) * 2018-10-25 2019-01-15 中国人民解放军国防科技大学 Image visibility detection method based on coding network fine adjustment
US20200393845A1 (en) * 2019-06-14 2020-12-17 Tusimple, Inc. Image fusion for autonomous vehicle operation
CN111191629A (en) * 2020-01-07 2020-05-22 中国人民解放军国防科技大学 Multi-target-based image visibility detection method
CN112017243A (en) * 2020-08-26 2020-12-01 大连信维科技有限公司 Medium visibility identification method
CN112180472A (en) * 2020-09-28 2021-01-05 南京北极光智能科技有限公司 Atmospheric visibility integrated forecasting method based on deep learning
CN112419231A (en) * 2020-10-15 2021-02-26 上海眼控科技股份有限公司 Visibility determination method and device, computer equipment and storage medium
CN112649900A (en) * 2020-11-27 2021-04-13 上海眼控科技股份有限公司 Visibility monitoring method, device, equipment, system and medium
CN113723199A (en) * 2021-08-03 2021-11-30 南京邮电大学 Airport low visibility detection method, device and system

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
SONG Y et al.: "An Atmospheric Visibility Grading Method Based on Ensemble Learning and Stochastic Weight Average", Atmosphere *
TANG Shaoen et al.: "A Visibility Detection Method Based on Transfer Learning", Computer Engineering *

Also Published As

Publication number Publication date
CN114202542B (en) 2022-04-19

Similar Documents

Publication Publication Date Title
CN109002795B (en) Lane line detection method and device and electronic equipment
CN109886997B (en) Identification frame determining method and device based on target detection and terminal equipment
Negru et al. Exponential contrast restoration in fog conditions for driving assistance
CN110598541B (en) Method and equipment for extracting road edge information
CN108416784B (en) Method and device for rapidly extracting boundary of urban built-up area and terminal equipment
CN110781756A (en) Urban road extraction method and device based on remote sensing image
CN111724430A (en) Image processing method and device and computer readable storage medium
CN111080526A (en) Method, device, equipment and medium for measuring and calculating farmland area of aerial image
CN109461133B (en) Bridge bolt falling detection method and terminal equipment
CN110443242B (en) Reading frame detection method, target recognition model training method and related device
CN110926475A (en) Unmanned aerial vehicle waypoint generation method and device and electronic equipment
CN110852207A (en) Blue roof building extraction method based on object-oriented image classification technology
CN114898321B (en) Road drivable area detection method, device, equipment, medium and system
CN109978903B (en) Identification point identification method and device, electronic equipment and storage medium
CN113962877B (en) Pixel distortion correction method, correction device and terminal
CN113343945B (en) Water body identification method and device, electronic equipment and storage medium
CN117935063A (en) Method, device and equipment for inverting optical thickness of aerosol based on RSSDM nights
CN113284066B (en) Automatic cloud detection method and device for remote sensing image
US10621430B2 (en) Determining image forensics using an estimated camera response function
CN114202542B (en) Visibility inversion method and device, computer equipment and storage medium
CN117197068A (en) Mist concentration estimation method, device, equipment and storage medium
CN110765875B (en) Method, equipment and device for detecting boundary of traffic target
CN111539975A (en) Method, device and equipment for detecting moving target and storage medium
CN111104965A (en) Vehicle target identification method and device
CN113628145B (en) Image sharpening method, system, device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP01 Change in the name or title of a patent holder

Address after: 430223 Room 302, floor 3, building C, Jinglun Park, No. 70, Guanggu Avenue, Donghu New Technology Development Zone, Wuhan, Hubei Province

Patentee after: Xiangji Technology Co.,Ltd.

Address before: 430223 Room 302, floor 3, building C, Jinglun Park, No. 70, Guanggu Avenue, Donghu New Technology Development Zone, Wuhan, Hubei Province

Patentee before: Xiangji Technology (Wuhan) Co.,Ltd.

PE01 Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: A visibility inversion method, device, computer equipment, and storage medium

Granted publication date: 20220419

Pledgee: Guanggu Branch of Wuhan Rural Commercial Bank Co.,Ltd.

Pledgor: Xiangji Technology Co.,Ltd.

Registration number: Y2024980009731