CN112017243B - Medium visibility recognition method - Google Patents

Medium visibility recognition method Download PDF

Info

Publication number
CN112017243B
CN112017243B · Application CN202010869849.8A
Authority
CN
China
Prior art keywords
visibility
target
algorithm
image
binocular
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010869849.8A
Other languages
Chinese (zh)
Other versions
CN112017243A (en)
Inventor
王锡纲
李杨
赵育慧
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dalian Xinwei Technology Co ltd
Original Assignee
Dalian Xinwei Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dalian Xinwei Technology Co ltd filed Critical Dalian Xinwei Technology Co ltd
Priority to CN202010869849.8A priority Critical patent/CN112017243B/en
Publication of CN112017243A publication Critical patent/CN112017243A/en
Application granted granted Critical
Publication of CN112017243B publication Critical patent/CN112017243B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to the technical field of visibility recognition and provides a medium visibility recognition method comprising the following steps: collecting video data of a target object through a binocular camera and collecting visibility data through a visibility tester; extracting the position of the target object from each of the two video signals acquired by the binocular camera using a target segmentation algorithm; performing feature matching on the resulting target extractions; obtaining distance information of the target object using a binocular ranging algorithm; predicting the image quality visibility of each frame of the two video signals using an image quality prediction visibility algorithm; predicting the target visual effect visibility of each frame of the two video signals using a target visual effect prediction visibility algorithm; and performing the final visibility prediction using a visibility balance algorithm. The invention improves the accuracy of medium visibility recognition and adapts to a variety of environments.

Description

Medium visibility recognition method
Technical Field
The invention relates to the technical field of visibility recognition, in particular to a medium visibility recognition method.
Background
Visibility recognition is of great significance to navigation, transportation and similar fields. Severe weather and marine environments can create a variety of safety hazards that endanger people's lives and property, and if the relevant authorities can accurately publish the corresponding visibility conditions, the management quality of many industries can be improved.
Common visibility recognition methods include manual visual observation and instrument-based measurement. The visual observation method judges visibility by staffing dedicated observation stations at each site; because it relies solely on human eyesight and subjective judgment, it is poor in standardization and objectivity. The instrument-based method calculates visibility by measuring quantities such as transmittance and extinction coefficient with devices such as transmission visibility meters and lidar visibility meters; these devices are expensive, demanding in site requirements and limited in applicability, so they cannot be widely used.
Disclosure of Invention
The invention mainly addresses the technical problems of medium visibility recognition in the prior art, such as high cost, narrow application range and low recognition accuracy, and provides a medium visibility recognition method that improves the accuracy of medium visibility recognition and adapts to a variety of environments.
The invention provides a medium visibility recognition method, which comprises the following steps:
step 100, acquiring video data of a target object through a binocular camera, and acquiring visibility data through a visibility tester to obtain two paths of video signals and a visibility signal;
Step 200, respectively extracting the positions of the target objects from two paths of video signals acquired by the binocular camera by using a target segmentation algorithm to obtain extraction results of the target objects;
Step 300, performing feature matching on the obtained extraction result of the target object;
Step 400, obtaining the distance information of the target object by using a binocular distance measuring algorithm, and further obtaining the deviation between the detection distance of the target object and the actual distance;
Step 500, predicting the image quality visibility of each frame image in the two video signals acquired by the binocular camera by using an image quality prediction visibility algorithm, obtaining a predicted visibility interval;
Step 600, predicting the target visual effect visibility of each frame image in the two video signals acquired by the binocular camera by using a target visual effect prediction visibility algorithm, obtaining a predicted visibility interval;
at step 700, final visibility prediction is performed using a visibility balance algorithm.
Further, step 200 includes the following process:
Step 201, extracting features with a convolutional neural network from each frame image in the two video signals;
Step 202, performing preliminary classification and regression by using a region extraction network;
step 203, performing alignment operation on the candidate frame feature map;
Step 204, classifying, regressing and segmenting the target by using the convolutional neural network to obtain the extraction result of the target object.
Further, step 300 includes the following process:
Step 301, extracting key points from two target contours;
step 302, positioning the key points;
Step 303, determining feature vectors of the key points according to the positioned key points;
step 304, the key points are matched through the feature vectors of the key points.
Further, step 400 includes the following process:
step 401, calibrating a binocular camera;
Step 402, performing binocular correction on a binocular camera;
Step 403, performing binocular matching on the image acquired by the binocular camera;
step 404, calculating depth information of the binocular-matched image to obtain distance information of the target object in the image.
Further, step 500 includes the following process:
Step 501, dividing an image to realize identification and positioning of a target;
step 502, predicting the image visibility according to the target identification and positioning results to obtain an image classification result.
Further, step 600 includes the following process:
Step 601, constructing a target visual effect prediction visibility algorithm network structure;
step 602, inputting the extraction result of the target object obtained in the step 200 into a network structure of a target visual effect prediction visibility algorithm to obtain a multi-scale feature map;
And 603, classifying the images through a target visual effect prediction visibility algorithm network structure to obtain a target image classification result, and realizing a predicted visibility interval.
Further, the target visual effect prediction visibility algorithm network structure includes: the system comprises an input layer, a convolution layer, a first extraction feature module, a merging channel, a second extraction feature module, a merging channel, a full connection layer and a classification structure output layer; wherein each extracted feature module includes 5 convolution kernels.
Further, step 700 includes the following process:
Step 701, constructing a visibility balance algorithm network structure, wherein the visibility balance algorithm network structure comprises an input layer, a circulating neural network, a full-connection layer and a visibility interval output layer;
Step 702, sequentially inputting the visibility into a cyclic neural network to obtain a result considering a time sequence;
And step 703, connecting the output of the cyclic neural network with a full-connection layer to obtain a visibility interval value corresponding to the time sequence.
According to the medium visibility recognition method provided by the invention, a binocular camera captures video images around the clock; a ranging algorithm yields the deviation between the detected distance of the target object and its actual distance; an image quality prediction visibility algorithm yields one visibility interval and a target visual effect prediction visibility algorithm yields another; and a visibility balance algorithm performs the final visibility prediction from these results. The method can identify the current medium visibility with high accuracy and stability, adapts well to a wide range of common conditions, and does not depend on specific video acquisition equipment. Each point location uses a binocular camera to acquire video data, which serves multiple purposes: the two lenses can be used independently, each lens can serve as an independent video signal source, the two signals can be cross-validated, and combining the two signals increases sensitivity to distance.
The invention can be applied to underwater visibility recognition, harbor atmospheric visibility recognition and other scenarios requiring medium visibility recognition. For harbor atmospheric visibility recognition, analysis of the harbor application scenario shows that the harbor area is large and the operation areas are widely distributed, so recognition points must be deployed at multiple locations according to the operation areas. Construction in the harbor district is relatively mature, and the terrain features and building appearances are relatively stable, which makes it convenient to set detection reference points at each location and further improves the stability and accuracy of recognition. With binocular cameras deployed at multiple points in the harbor area, video data of all points at the same moment can be obtained through the system's timestamp control. Meanwhile, since the video data is an image sequence in the time dimension, the method can provide atmospheric visibility data for different periods and different places for port business personnel to use.
Drawings
FIG. 1 is a flow chart of an implementation of a method for identifying visibility of a medium provided by the present invention;
FIG. 2 is a schematic diagram of a feature pyramid network architecture;
FIG. 3 is a schematic view of a bottom-up configuration;
FIGS. 4a-4e are schematic illustrations of the generation of a feature map at each stage in a bottom-up configuration;
FIG. 5 is a schematic diagram of a regional extraction network architecture;
FIG. 6 is an effect diagram of a feature map alignment operation;
FIG. 7 is a schematic diagram of a classification, regression, segmentation network architecture;
FIG. 8 is a schematic diagram of a binocular distance algorithm;
FIG. 9 is a schematic illustration of the basic principle of binocular range;
FIG. 10 is a schematic diagram of an image segmentation network architecture;
FIG. 11 is a schematic diagram of an image visibility prediction network architecture;
FIG. 12 is a schematic diagram of a network architecture of a target visual effect prediction visibility algorithm;
FIG. 13 is a schematic diagram of a visibility balance algorithm network architecture;
fig. 14a-14b are schematic diagrams of the structure of a recurrent neural network.
Detailed Description
In order to make the technical problems solved by the invention, the technical scheme adopted and the technical effects achieved clearer, the invention is further described in detail below with reference to the accompanying drawings and the embodiments. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting thereof. It should be further noted that, for convenience of description, only some, but not all of the matters related to the present invention are shown in the accompanying drawings.
Fig. 1 is a flowchart of an implementation of a medium visibility recognition method provided by the present invention. As shown in fig. 1, a medium visibility identifying method provided by an embodiment of the present invention includes:
And 100, acquiring video data of a target object through a binocular camera, and acquiring visibility data through a visibility tester to obtain two paths of video signals and a visibility signal.
The output of the visibility recognition of the present invention is a discrete value, i.e. a numerical range such as "500 meters or less" or "500 meters to 1000 meters". Therefore, to improve detection accuracy, the discrete values produced by the other algorithms are corrected using the continuous values obtained by the ranging algorithm, i.e. detection distances such as "245.87 meters" or "1835.64 meters". A target reference object is therefore required. The selection principles for the target reference object are: it is a stationary object with a fixed position; under good visibility it can be clearly identified both by day and at night; there is no occlusion between the binocular camera and the target object; and the distances between the target objects and the binocular camera match the distribution of the visibility intervals and are distributed evenly, with distance differences of preferably about 100 meters.
And 200, respectively extracting the positions of the target objects from two paths of video signals acquired by the binocular camera by using a target segmentation algorithm to obtain the extraction results of the target objects.
The position of the target object is extracted separately from each of the two video signals of the binocular camera in order to prevent subsequent calculations from failing because of an error in a single detection path. A precise target segmentation algorithm is used to obtain the accurate contour of the target object, so that its accurate position can be extracted in preparation for subsequent processing.
Since the field of view of the binocular camera (angle of view, focal length, etc.) does not change easily, the position at which the target object appears in the image is in theory fixed. In practice, however, the camera may shake under wind, waves or other external forces, causing slight changes in the field of view, and interfering objects such as birds or fish shoals may enter the field of view. To increase detection accuracy, the target segmentation algorithm may therefore define a hot-spot area in the field of view according to prior conditions and increase the weight of target objects detected inside that hot-spot area.
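A minimal sketch of such hot-spot weighting is given below; the hot-spot rectangle and the 1.2 boost factor are illustrative assumptions, not values taken from the patent.

```python
# Boost the confidence of detections whose box center lies inside a predefined
# hot-spot rectangle where the target reference object is expected to appear.
def weight_detections(detections, hotspot, boost=1.2):
    """detections: list of dicts {'box': (x1, y1, x2, y2), 'score': float}.
    hotspot: (x1, y1, x2, y2) rectangle defined from prior conditions."""
    hx1, hy1, hx2, hy2 = hotspot
    for det in detections:
        x1, y1, x2, y2 = det['box']
        cx, cy = (x1 + x2) / 2.0, (y1 + y2) / 2.0
        if hx1 <= cx <= hx2 and hy1 <= cy <= hy2:
            det['score'] = min(1.0, det['score'] * boost)
    return detections
```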
Through the target segmentation algorithm, the accurate contours of the target object can in theory be obtained in both video frame signals of the binocular camera. The "accurate contour" referred to here may be disturbed by different conditions in the medium and may be detected differently under different visibility conditions. This interference is tolerated at this stage because it is precisely the information that carries visibility. If two "accurate contours" are not obtained, the frame data has been misidentified or recognition cannot proceed normally for some reason, for example a video signal was acquired abnormally or one of the lenses is blocked.
If this condition is not satisfied, the frame data is discarded and recognition is retried on the next frame of input data. In practical applications, if this situation persists over several consecutive frames, an alarm should be raised and the video signal stored for inspection by service personnel.
The target segmentation in this step is the first step of image analysis; it is the foundation of computer vision, an important component of image understanding, and one of the most difficult problems in image processing. Image segmentation divides an image into a number of mutually disjoint regions according to characteristics such as gray level, color, spatial texture and geometric shape, so that these characteristics are consistent or similar within each region and clearly different between regions. Put simply, the target is separated from the background within an image. Segmentation greatly reduces the amount of data to be processed in subsequent image analysis and in advanced processing stages such as object recognition, while retaining information about the structural features of the image.
Target segmentation algorithms are mainly divided into threshold-based, region-based, edge-based and deep-learning-based segmentation methods, among others. The main process of the target segmentation algorithm adopted in this step comprises the following steps:
Step 201, extracting features with a convolutional neural network from each frame image in the two video signals.
In this step, because the sharpness of the image changes with the camera parameters, a multi-scale feature extraction scheme, namely a feature pyramid network, is adopted. The feature pyramid network architecture is shown in fig. 2.
The feature pyramid network consists of two parts. The structure on the left is the bottom-up pathway, which produces feature maps of different scales, C1 through C5 in the figure. From bottom to top the feature maps become progressively smaller, meaning that the extracted features become progressively more abstract; the overall shape resembles a pyramid, hence the name feature pyramid network. The structure on the right is the top-down pathway; it corresponds level by level to the features of the pyramid, and lateral connections (the horizontal arrows) link feature maps of the same level between the two pathways.
The reason for this design is that the smaller high-level feature maps carry more semantic information, while the larger low-level feature maps carry less semantic information but more positional information. Through these connections, the feature maps of each layer fuse features of different resolutions and different semantic strengths, which improves detection of objects at different resolutions.
The bottom-up structure is shown in fig. 3. The network comprises five stages, each of which computes a feature map of a different size, with a scaling step of 2. The principle of generating the feature map at each stage is shown in figs. 4a-4e. The C1, C2, C3, C4 and C5 feature maps output by the stages are used to build the feature pyramid network structure.
The top-down structure is shown on the right side of the feature pyramid network architecture in fig. 2. The high-level feature map, which carries stronger semantic information, is first up-sampled to the same size as the low-level feature map. Feature maps of the same size in the bottom-up and top-down structures are then laterally connected, and the two feature maps are combined by element-wise addition. Finally, to reduce the aliasing effect caused by up-sampling, a convolution layer is applied to each combined feature map to obtain the final feature maps P2, P3, P4 and P5.
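The following is a minimal sketch of this top-down merging, assuming a PyTorch implementation; the channel widths, layer names and nearest-neighbor up-sampling are illustrative assumptions and are not specified in the patent.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FPNTopDown(nn.Module):
    def __init__(self, in_channels=(256, 512, 1024, 2048), out_channels=256):
        super().__init__()
        # 1x1 lateral convolutions bring C2..C5 to a common channel width.
        self.lateral = nn.ModuleList(nn.Conv2d(c, out_channels, 1) for c in in_channels)
        # 3x3 convolutions smooth each merged map to reduce up-sampling aliasing.
        self.smooth = nn.ModuleList(nn.Conv2d(out_channels, out_channels, 3, padding=1)
                                    for _ in in_channels)

    def forward(self, c2, c3, c4, c5):
        laterals = [l(c) for l, c in zip(self.lateral, (c2, c3, c4, c5))]
        # Top-down pathway: up-sample the higher level and add it element-wise.
        for i in range(len(laterals) - 2, -1, -1):
            laterals[i] = laterals[i] + F.interpolate(
                laterals[i + 1], size=laterals[i].shape[-2:], mode="nearest")
        p2, p3, p4, p5 = (s(x) for s, x in zip(self.smooth, laterals))
        return p2, p3, p4, p5
```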
Step 202, performing preliminary classification and regression by using the region extraction network.
The region extraction network structure is shown in fig. 5. Based on the feature maps P2, P3, P4 and P5 obtained from the feature pyramid network, anchor boxes in the original image are first generated for each point on the feature maps according to the anchor generation rules. The P2-P5 feature maps are then fed into the region extraction network, which consists of a convolution layer and a fully connected layer, and classification and regression results are obtained for every anchor box, namely a foreground/background classification score and bounding box coordinate corrections for each anchor. Finally, the anchors whose foreground score meets a threshold are selected and their bounding boxes corrected; the corrected anchors are called candidate boxes.
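A minimal sketch of applying the bounding box corrections to the anchors is given below; the (dx, dy, dw, dh) parameterization is the one commonly used by region proposal networks and is an assumption here, since the patent does not state the exact correction formula.

```python
import numpy as np

def apply_box_deltas(anchors, deltas):
    """anchors, deltas: arrays of shape (N, 4); boxes given as (x1, y1, x2, y2)."""
    w = anchors[:, 2] - anchors[:, 0]
    h = anchors[:, 3] - anchors[:, 1]
    cx = anchors[:, 0] + 0.5 * w
    cy = anchors[:, 1] + 0.5 * h
    # Shift the center and rescale the size according to the predicted deltas.
    cx = cx + deltas[:, 0] * w
    cy = cy + deltas[:, 1] * h
    w = w * np.exp(deltas[:, 2])
    h = h * np.exp(deltas[:, 3])
    return np.stack([cx - 0.5 * w, cy - 0.5 * h, cx + 0.5 * w, cy + 0.5 * h], axis=1)
```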
Step 203, performing the alignment operation on the candidate box feature maps.
The candidate boxes meeting the score requirement are obtained from the region extraction network and mapped back onto the feature maps. The feature pyramid level corresponding to a candidate box is obtained according to the following formula:
k = ⌊k0 + log2(√(w·h) / 224)⌋
where w is the width of the candidate box, h is its height, k is the feature pyramid level assigned to the candidate box, and k0 is the level assigned when w = h = 224; k0 is generally taken as 4, i.e. the P4 layer. The feature map patch corresponding to each candidate box is then obtained by bilinear interpolation, and the resulting feature maps all have a consistent size. The effect of the alignment operation on the feature map is shown in fig. 6.
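As a small check of the level-assignment formula above (a sketch assuming the standard mapping onto levels P2-P5):

```python
import math

def fpn_level(w, h, k0=4, k_min=2, k_max=5):
    k = math.floor(k0 + math.log2(math.sqrt(w * h) / 224.0))
    return max(k_min, min(k, k_max))  # clamp to the available pyramid levels

# Example: a 112x112 candidate box maps to level P3.
assert fpn_level(112, 112) == 3
```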
Step 204, classifying, regressing and segmenting the target by using the convolutional neural network to obtain the extraction result of the target object.
The classification, regression and segmentation network structure is shown in fig. 7. Based on the fixed-size candidate box feature maps obtained above, the classification and regression networks compute the classification score and coordinate offsets of each candidate box and correct its bounding box, and the segmentation network segments the targets within the candidate boxes. The target segmentation algorithm thus finally yields the classification, bounding box regression and segmentation results of the targets in the image, i.e. the extraction result of the target object.
And 300, performing feature matching on the obtained extraction result of the target object.
The target segmentation algorithm of step 200 yields two target contours, but their positions and angles differ between the two video frames, so feature matching must be performed on them. The feature matching algorithm compares features of the two target contours to find the same point of the same object at different positions in the two images. Because the subsequent ranging algorithm must start its calculation from a well-defined pixel, this step samples and averages multiple times to ensure as far as possible that the same point is extracted, and records the pixel position of that point in each image. The step specifically comprises the following:
Step 301, extracting key points from the two object outlines.
The key points are some very prominent points which cannot disappear due to factors such as illumination, scale, rotation and the like, such as corner points, edge points, bright points of dark areas and dark points of bright areas. This step is to search for image locations on all scale spaces. Potential points of interest with scale and rotation invariance are identified by gaussian derivative functions.
Step 302, locating the key points.
At each candidate location, a fine model is fitted to determine the position and scale. Key points are selected according to their degree of stability.
And step 303, determining the feature vector of the key point according to the positioned key point.
One or more orientations are assigned to each key point location based on the local image gradient directions. All subsequent operations on the image data are performed relative to the orientation, scale and position of the key points, which provides invariance to these transformations.
Step 304, the key points are matched through the feature vectors of the key points.
The feature vectors of the key points are compared pairwise to find pairs of mutually matching feature points, thereby establishing the feature correspondences between the objects. Finally, the distances between key points can be calculated from these correspondences.
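A minimal sketch of such key-point extraction and matching is shown below, assuming OpenCV's SIFT detector and a ratio test; the patent does not name a specific key-point detector, so SIFT and the 0.75 ratio threshold are illustrative assumptions.

```python
import cv2

def match_keypoints(img_left, img_right):
    sift = cv2.SIFT_create()
    kp_l, des_l = sift.detectAndCompute(img_left, None)
    kp_r, des_r = sift.detectAndCompute(img_right, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = []
    # Keep a match only if it is clearly better than the second-best candidate.
    for pair in matcher.knnMatch(des_l, des_r, k=2):
        if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance:
            good.append(pair[0])
    # Return matched pixel coordinates in the left and right images.
    return [(kp_l[m.queryIdx].pt, kp_r[m.trainIdx].pt) for m in good]
```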
Step 400, obtaining the distance information of the target object by using a binocular distance measuring algorithm, and further obtaining the deviation between the detection distance and the actual distance of the target object.
A schematic diagram of the binocular ranging algorithm is shown in fig. 8. As can be seen from fig. 8, the error of the ranging algorithm is affected by the measurement error of the distance between the left and right cameras, the measurement error of the camera focal length, the measurement error of the vertical height difference between the cameras and the target object, and so on. These errors are unavoidable. However, the purpose of this step is not to measure the precise distance of the target object but to establish the relationship between the actual distance and the detected distance under different visibility conditions. Moreover, the influence of the error produced in this step can be reduced by the subsequent neural network. The output of the ranging algorithm is a detected distance value (a continuous value). The basic principle of binocular ranging is shown in fig. 9. The step specifically comprises the following:
and step 401, calibrating the binocular camera.
Because of the characteristics of optical lenses, a camera exhibits radial distortion, which can be described by three parameters k1, k2 and k3. The radial distortion formula is:
X_dr = X(1 + k1·r² + k2·r⁴ + k3·r⁶), Y_dr = Y(1 + k1·r² + k2·r⁴ + k3·r⁶), with r² = X² + Y²,
where (X, Y) are the coordinates of an undistorted image pixel and (X_dr, Y_dr) the coordinates of the corresponding distorted pixel. Because of assembly errors, the camera sensor is not perfectly parallel to the optical lens, so the imaging also exhibits tangential distortion, which can be described by two parameters p1 and p2. The tangential distortion formula is:
X_dt = X + [2p1·X·Y + p2(r² + 2X²)], Y_dt = Y + [p1(r² + 2Y²) + 2p2·X·Y],
where (X, Y) are again the undistorted pixel coordinates and (X_dt, Y_dt) the distorted pixel coordinates. Calibration of a single camera mainly computes its intrinsic parameters (the focal length f, the imaging origin cx, cy, and the five distortion parameters, of which usually only k1, k2, p1 and p2 need to be calculated; k3 is needed only for lenses such as fisheye lenses whose radial distortion is particularly large) and its extrinsic parameters (the world coordinates of the calibration object). Calibration of a binocular camera not only obtains the intrinsic parameters of each camera but also measures, through calibration, the relative position of the two cameras (i.e. the rotation matrix R and translation vector t of the right camera relative to the left camera).
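The distortion model above can be written directly as a short sketch (pure NumPy); this only restates the formulas and is not the calibration procedure itself.

```python
import numpy as np

def distort(X, Y, k1, k2, k3, p1, p2):
    """Apply the radial and tangential distortion model to undistorted pixel
    coordinates (X, Y); returns the radially and tangentially distorted points."""
    r2 = X**2 + Y**2
    radial = 1.0 + k1 * r2 + k2 * r2**2 + k3 * r2**3
    # Radially distorted coordinates.
    Xdr, Ydr = X * radial, Y * radial
    # Tangentially distorted coordinates.
    Xdt = X + (2 * p1 * X * Y + p2 * (r2 + 2 * X**2))
    Ydt = Y + (p1 * (r2 + 2 * Y**2) + 2 * p2 * X * Y)
    return (Xdr, Ydr), (Xdt, Ydt)
```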
Step 402, binocular correction is performed on the binocular camera.
Binocular correction uses the monocular intrinsic data (focal length, imaging origin, distortion coefficients) and the relative position of the two cameras (rotation matrix and translation vector) obtained from calibration to eliminate distortion and align the rows of the left and right views, so that the imaging origins of the two views coincide, the optical axes of the two cameras are parallel, the left and right imaging planes are coplanar, and the epipolar lines are row-aligned. Any point in one image then has the same row number as its corresponding point in the other image, so the corresponding point can be found by a one-dimensional search within that row.
Step 403, performing binocular matching on the image acquired by the binocular camera.
The purpose of binocular matching is to match corresponding pixels of the same scene on left and right views, which is done in order to obtain parallax data.
Step 404, calculating depth information of the binocular-matched image to obtain distance information of the target object in the image.
P is a point on the object to be measured, L and R are the optical centers of the left and right cameras respectively, and p and p' are the imaging points of P on the photoreceptors of the two cameras (the imaging planes are drawn in front of the lenses after rotation). Let f denote the camera focal length, b the distance between the two camera centers, and z the distance to be measured. If the distance between p and p' is dis, then
dis = b − (X_R − X_L)
According to the principle of similar triangles,
(b − (X_R − X_L)) / b = (z − f) / z
which gives
z = f·b / (X_R − X_L) = f·b / d
In the formula, the focal length f and the camera center distance b are obtained through calibration, so the depth information can be obtained once the value of X_R − X_L (i.e. the parallax d) is known. The disparity value can be calculated from the matched key points of the feature matching algorithm in step 300. Finally, the distance information of the target object in the image is obtained through the binocular ranging algorithm, and from it the deviation between the detected distance of the target object and the actual distance.
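A minimal sketch of this depth computation, z = f·b / d, is shown below; the numeric values in the example are assumptions for illustration only.

```python
def depth_from_disparity(f_pixels, baseline_m, disparity_pixels):
    """Depth from the binocular ranging formula z = f * b / d."""
    if disparity_pixels <= 0:
        raise ValueError("disparity must be positive")
    return f_pixels * baseline_m / disparity_pixels

# Example with assumed values: f = 1200 px, b = 0.5 m, d = 3 px -> z = 200 m.
z = depth_from_disparity(1200, 0.5, 3)
deviation = z - 245.87  # deviation from an assumed actual reference distance
```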
And 500, predicting the image quality visibility of each frame image in the two paths of video signals acquired by the binocular camera by using an image quality prediction visibility algorithm, and predicting the obtained visibility interval.
The image quality prediction visibility algorithm is a method for predicting visibility from macroscopic image information; it mainly predicts visibility based on the sharpness and contrast of objects in the medium background of the image. This stage cannot work directly on the raw video frame signals: it must first filter out the near-field high-frequency information in the image and extract the low-frequency information of the medium background, and then analyse and predict on that basis. In order to make the invention suitable for both day and night and to improve prediction accuracy in different situations, a large amount of video data, together with the measurements of the visibility tester at the same time stamps, must be provided during training of the algorithm. The output of this step is a visibility interval value (a discrete value). The step mainly comprises the following two sub-steps:
Step 501, segmenting an image to realize identification and positioning of a target.
The image segmentation network structure is shown in fig. 10. Features are extracted from the image by three convolution blocks, and two fully connected layers then produce the classification scores and bounding box positions of the targets in the image. Finally, the highest score is selected as the output and the bounding box most likely to contain a target is extracted, thereby identifying and locating the target.
Step 502, predicting the image visibility according to the target identification and positioning results to obtain an image classification result.
The image visibility prediction network structure is shown in fig. 11. A visibility prediction can be obtained on the basis of the image segmentation result. Because the image scene is complex, the network structure comprises three modules, each of which uses four different convolution kernels to extract features of the image at different scales, increasing feature diversity and improving classification accuracy. At the output of each module the extracted features are concatenated along the channel dimension to obtain a multi-scale feature map. Finally, the image is classified through the fully connected layer to obtain the image classification result, realizing the predicted visibility interval.
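An illustrative sketch of one such multi-scale feature extraction module is given below, assuming a PyTorch implementation: four parallel convolution branches whose outputs are concatenated along the channel dimension. The kernel sizes and channel counts are assumptions; the patent only states that four different convolution kernels are used per module.

```python
import torch
import torch.nn as nn

class MultiScaleModule(nn.Module):
    def __init__(self, in_ch, branch_ch=32):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, branch_ch, 1),
            nn.Conv2d(in_ch, branch_ch, 3, padding=1),
            nn.Conv2d(in_ch, branch_ch, 5, padding=2),
            nn.Conv2d(in_ch, branch_ch, 7, padding=3),
        ])
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        # "Merge channel": concatenate the branch outputs into one multi-scale map.
        return torch.cat([self.act(b(x)) for b in self.branches], dim=1)
```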
And 600, predicting the visibility of the target visual effect on each frame image in the two paths of video signals acquired by the binocular camera by using a target visual effect prediction visibility algorithm, and predicting the obtained visibility interval.
The target visual effect prediction visibility algorithm is a method for predicting visibility from microscopic image information; it mainly predicts visibility based on the contour gradient, contour integrity and colour saturation of the target object. The input of this stage is the output of the target segmentation algorithm of step 200. In order to make the invention suitable for both day and night and to improve prediction accuracy in different situations, a large amount of video data, together with the measurements of the visibility tester at the same time stamps, must be provided during training of the algorithm. The output of this step is a visibility interval value (a discrete value). Step 600 specifically includes the following steps:
And step 601, constructing a target visual effect prediction visibility algorithm network structure.
The network structure of the target visual effect prediction visibility algorithm is shown in fig. 12. It comprises an input layer, a convolution layer, a first feature extraction module, a channel merge, a second feature extraction module, another channel merge, a fully connected layer and a classification output layer; each feature extraction module includes 5 convolution kernels. An image containing the target object is obtained from the target segmentation algorithm of step 200. Because environmental noise interferes less in the target image, the network constructed in this step contains two feature extraction modules, each of which uses three different convolution kernels to extract features of the image at different scales, increasing feature diversity and improving classification accuracy.
Step 602, inputting the extraction result of the target object obtained in step 200 into a network structure of a target visual effect prediction visibility algorithm to obtain a multi-scale feature map.
Specifically, the extracted various features are spliced and combined on the channel dimension at the output end of each module, and a multi-scale feature map is obtained.
And 603, classifying the images through a target visual effect prediction visibility algorithm network structure to obtain a target image classification result, and realizing a predicted visibility interval.
Specifically, the image is classified through the full connection layer, and a target image classification result is obtained.
At step 700, final visibility prediction is performed using a visibility balance algorithm.
Through the processing of steps 200-600, three visibility-related results are obtained for a frame of video data: (1) the deviation (a continuous value) between the detected distance of the target object obtained by the ranging algorithm in step 400 and the actual distance; the occurrence of this deviation is strongly and directly linked to visibility. (2) The visibility interval (a discrete value) obtained by the image quality prediction visibility algorithm in step 500. (3) The visibility interval (a discrete value) obtained by the target visual effect prediction visibility algorithm in step 600.
For balancing multiple calculation results, the conventional approach is to take the average directly or to take the average after removing outliers. To further improve detection accuracy, this step adopts cyclic verification of multi-frame results. Within a period that is short compared with the speed at which visibility changes (e.g. 1 minute), multiple frames are acquired and processed at a fixed time interval (e.g. 5 seconds), and the detection result of each frame is fed into the visibility balance algorithm in time order to obtain the final visibility interval value. Step 700 includes the following process:
step 701, constructing a visibility balance algorithm network structure, wherein the visibility balance algorithm network structure comprises an input layer, a circulating neural network, a full connection layer and a visibility interval output layer.
The visibility balance algorithm network structure is shown in fig. 13. As shown in fig. 13, the visibility balance algorithm network structure includes an input layer, a recurrent neural network, a full connection layer, and a visibility interval output layer. The visibility balance algorithm network inputs the visibility according to the time sequence, and the characteristic length of the visibility input by each time node is 3, which are respectively the deviation of the detection distance and the actual distance of the target object obtained by the ranging algorithm in step 400, the visibility interval obtained by the image quality prediction visibility algorithm in step 500 and the visibility interval obtained by the target visual effect prediction visibility algorithm in step 600.
And step 702, sequentially inputting the visibility into the recurrent neural network to obtain a result considering the time sequence.
Using the visibility balance algorithm, the multiple calculation results can be balanced in the time dimension, reducing the influence of single-frame calculation errors. Moreover, the results obtained at different time stamps are correlated with one another, i.e. visibility does not change drastically within a short time, so the time dimension can be used to correct the multiple detection values. The visibility balance is first handled by a recurrent neural network, whose characteristic is that each calculation takes the result of the previous calculation as a prior input, thereby correcting the subsequent calculation. After correction, the calculation results for the different time stamps are obtained; a fully connected neural network is then attached, which integrates the multiple calculation results into the final result. The recurrent neural network architecture is shown in figs. 14a-14b. The recurrent neural network structure comprises an input layer, a recurrent layer and an output layer.
The recurrent neural network learns recursively in the order of the input data and can therefore be used to process sequence-related data. As the network structure shows, the recurrent neural network memorizes previous information and uses it to influence the output of later nodes. In other words, the nodes of the hidden layer of a recurrent neural network are connected to each other, and the input of the hidden layer includes not only the output of the input layer but also the output of the hidden layer at the previous time step.
Given input data X = {X_1, X_2, …, X_t} which is fed in sequentially, where the feature length of X is c and the unrolled length is t, the output h_t of the recurrent neural network is calculated as follows:
h_t = tanh(W·X_t + W·h_{t-1})
where W is the hidden layer parameter and tanh is the activation function. As the formula shows, the output at time t depends not only on the current input X_t but also on the output h_{t-1} at the previous time step.
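A minimal sketch of the visibility balance network of step 700 is shown below, assuming a PyTorch implementation: each time step receives the 3 features named above (the distance deviation and the two predicted visibility intervals), a simple recurrent layer applies h_t = tanh(W·x_t + W·h_{t-1}), and a fully connected layer maps the final hidden state to visibility-interval classes. The hidden size and the number of intervals are assumptions.

```python
import torch
import torch.nn as nn

class VisibilityBalance(nn.Module):
    def __init__(self, num_intervals=6, hidden=32):
        super().__init__()
        self.rnn = nn.RNN(input_size=3, hidden_size=hidden,
                          nonlinearity="tanh", batch_first=True)
        self.fc = nn.Linear(hidden, num_intervals)

    def forward(self, x):           # x: (batch, time_steps, 3)
        out, _ = self.rnn(x)        # out: (batch, time_steps, hidden)
        return self.fc(out[:, -1])  # classify from the last time step

# Example: 12 frames sampled over one minute at 5-second intervals.
scores = VisibilityBalance()(torch.randn(1, 12, 3))
```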
And step 703, connecting the output of the cyclic neural network with a full-connection layer to obtain a visibility interval value corresponding to the time sequence.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present invention, and not for limiting the same; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments is modified or some or all of the technical features are replaced equivalently, so that the essence of the corresponding technical scheme does not deviate from the scope of the technical scheme of the embodiments of the present invention.

Claims (3)

1. A method for identifying visibility of a medium, comprising the steps of:
step 100, acquiring video data of a target object through a binocular camera, and acquiring visibility data through a visibility tester to obtain two paths of video signals and a visibility signal;
Step 200, respectively extracting the positions of the target objects from two paths of video signals acquired by the binocular camera by using a target segmentation algorithm to obtain extraction results of the target objects;
Step 300, performing feature matching on the obtained extraction result of the target object;
Step 400, obtaining the distance information of the target object by using a binocular distance measuring algorithm, and further obtaining the deviation between the detection distance of the target object and the actual distance; step 400 includes the following steps 401 to 404:
step 401, calibrating a binocular camera;
Step 402, performing binocular correction on a binocular camera;
Step 403, performing binocular matching on the image acquired by the binocular camera;
Step 404, calculating depth information of the binocular-matched image to obtain distance information of a target object in the image;
step 500, predicting the image quality visibility of each frame image in two paths of video signals acquired by a binocular camera by using an image quality prediction visibility algorithm, and predicting the obtained visibility interval; step 500 includes the following steps 501 to 502:
Step 501, dividing an image to realize identification and positioning of a target;
step 502, predicting the visibility of the image according to the identification and positioning results of the target to obtain an image classification result;
step 600, predicting the visibility of the target visual effect for each frame image in two paths of video signals acquired by the binocular camera by using a target visual effect prediction visibility algorithm, and predicting the obtained visibility interval; step 600 includes the following steps 601 to 603:
Step 601, constructing a target visual effect prediction visibility algorithm network structure; the target visual effect prediction visibility algorithm network structure comprises: the system comprises an input layer, a convolution layer, a first extraction feature module, a merging channel, a second extraction feature module, a merging channel, a full connection layer and a classification structure output layer; wherein each extracted feature module comprises 5 convolution kernels;
step 602, inputting the extraction result of the target object obtained in the step 200 into a network structure of a target visual effect prediction visibility algorithm to obtain a multi-scale feature map;
step 603, classifying images through a target visual effect prediction visibility algorithm network structure to obtain a target image classification result, and realizing a predicted visibility interval;
Step 700, performing final visibility prediction by using a visibility balance algorithm; step 700 includes the following steps 701 through 703:
Step 701, constructing a visibility balance algorithm network structure, wherein the visibility balance algorithm network structure comprises an input layer, a circulating neural network, a full-connection layer and a visibility interval output layer;
Step 702, sequentially inputting the visibility into a cyclic neural network to obtain a result considering a time sequence;
And step 703, connecting the output of the cyclic neural network with a full-connection layer to obtain a visibility interval value corresponding to the time sequence.
2. The method of medium visibility identification according to claim 1, wherein step 200 comprises the following process:
Step 201, convolutional neural network extraction features are performed on each frame of image in two paths of video signals;
Step 202, performing preliminary classification and regression by using a region extraction network;
step 203, performing alignment operation on the candidate frame feature map;
And 204, classifying, regressing and dividing the target by using the convolutional neural network to obtain an extraction result of the target object.
3. The medium visibility identification method according to claim 1, wherein step 300 comprises the following procedure:
Step 301, extracting key points from two target contours;
step 302, positioning the key points;
Step 303, determining feature vectors of the key points according to the positioned key points;
step 304, the key points are matched through the feature vectors of the key points.
CN202010869849.8A 2020-08-26 2020-08-26 Medium visibility recognition method Active CN112017243B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010869849.8A CN112017243B (en) 2020-08-26 2020-08-26 Medium visibility recognition method


Publications (2)

Publication Number Publication Date
CN112017243A CN112017243A (en) 2020-12-01
CN112017243B true CN112017243B (en) 2024-05-03

Family

ID=73502310

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010869849.8A Active CN112017243B (en) 2020-08-26 2020-08-26 Medium visibility recognition method

Country Status (1)

Country Link
CN (1) CN112017243B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112818748A (en) * 2020-12-31 2021-05-18 北京字节跳动网络技术有限公司 Method and device for determining plane in video, storage medium and electronic equipment
CN113658275A (en) * 2021-08-23 2021-11-16 深圳市商汤科技有限公司 Visibility value detection method, device, equipment and storage medium
CN115220029A (en) * 2021-11-25 2022-10-21 广州汽车集团股份有限公司 Visibility identification method and device and automobile
CN114202542B (en) * 2022-02-18 2022-04-19 象辑科技(武汉)股份有限公司 Visibility inversion method and device, computer equipment and storage medium

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101382497A (en) * 2008-10-06 2009-03-11 南京大学 Visibility detecting method based on monitoring video of traffic condition
KR20110037180A (en) * 2009-10-06 2011-04-13 충주대학교 산학협력단 System and method for road visibility measurement using camera
WO2012100522A1 (en) * 2011-01-26 2012-08-02 南京大学 Ptz video visibility detection method based on luminance characteristic
CN104677330A (en) * 2013-11-29 2015-06-03 哈尔滨智晟天诚科技开发有限公司 Small binocular stereoscopic vision ranging system
CN108875794A (en) * 2018-05-25 2018-11-23 中国人民解放军国防科技大学 Image visibility detection method based on transfer learning
CN110020642A (en) * 2019-05-14 2019-07-16 江苏省气象服务中心 A kind of visibility recognition methods based on vehicle detection
CN110849807A (en) * 2019-11-22 2020-02-28 山东交通学院 Monitoring method and system suitable for road visibility based on deep learning
CN111028295A (en) * 2019-10-23 2020-04-17 武汉纺织大学 3D imaging method based on coded structured light and dual purposes
CN111191629A (en) * 2020-01-07 2020-05-22 中国人民解放军国防科技大学 Multi-target-based image visibility detection method
CN111428752A (en) * 2020-02-25 2020-07-17 南通大学 Visibility detection method based on infrared image


Also Published As

Publication number Publication date
CN112017243A (en) 2020-12-01

Similar Documents

Publication Publication Date Title
CN112017243B (en) Medium visibility recognition method
Koch et al. Evaluation of cnn-based single-image depth estimation methods
CN104574393B (en) A kind of three-dimensional pavement crack pattern picture generates system and method
CN107560592B (en) Precise distance measurement method for photoelectric tracker linkage target
CN111611874B (en) Face mask wearing detection method based on ResNet and Canny
CN111462128B (en) Pixel-level image segmentation system and method based on multi-mode spectrum image
CN108981672A (en) Hatch door real-time location method based on monocular robot in conjunction with distance measuring sensor
CN109253722B (en) Monocular distance measuring system, method, equipment and storage medium fusing semantic segmentation
CN110288659B (en) Depth imaging and information acquisition method based on binocular vision
JP6858415B2 (en) Sea level measurement system, sea level measurement method and sea level measurement program
CN108491810A (en) Vehicle limit for height method and system based on background modeling and binocular vision
CN109657717A (en) A kind of heterologous image matching method based on multiple dimensioned close packed structure feature extraction
CN105787870A (en) Graphic image splicing fusion system
CN110929782A (en) River channel abnormity detection method based on orthophoto map comparison
Dhiman et al. A multi-frame stereo vision-based road profiling technique for distress analysis
CN114415173A (en) Fog-penetrating target identification method for high-robustness laser-vision fusion
Krotosky et al. Multimodal stereo image registration for pedestrian detection
CN109784229B (en) Composite identification method for ground building data fusion
CN113219472A (en) Distance measuring system and method
CN103688289A (en) Method and system for estimating a similarity between two binary images
CN116978009A (en) Dynamic object filtering method based on 4D millimeter wave radar
CN112016558B (en) Medium visibility recognition method based on image quality
CN112014393B (en) Medium visibility recognition method based on target visual effect
CN116862832A (en) Three-dimensional live-action model-based operator positioning method
CN114821035A (en) Distance parameter identification method for infrared temperature measurement equipment of power equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant