CN117367544A - Water level monitoring method, system, equipment and medium

Water level monitoring method, system, equipment and medium

Info

Publication number
CN117367544A
Authority
CN
China
Prior art keywords
image
point cloud
water level
cloud data
weight
Prior art date
Legal status
Granted
Application number
CN202311139248.1A
Other languages
Chinese (zh)
Other versions
CN117367544B (en)
Inventor
孙秀峰
刘智
陈亮雄
杨静学
张力澜
郭恒睿
涂强
钟小阳
高仁强
刘洁
Current Assignee
Guangdong Research Institute of Water Resources and Hydropower
Original Assignee
Guangdong Research Institute of Water Resources and Hydropower
Priority date
Filing date
Publication date
Application filed by Guangdong Research Institute of Water Resources and Hydropower filed Critical Guangdong Research Institute of Water Resources and Hydropower
Priority to CN202311139248.1A
Publication of CN117367544A
Application granted
Publication of CN117367544B
Legal status: Active


Classifications

    • G01F23/80 Indicating or measuring liquid level: arrangements for signal processing
    • G01F23/292 Indicating or measuring liquid level by variations of electromagnetic waves applied directly to the liquid: light, e.g. infrared or ultraviolet
    • G06N3/0455 Auto-encoder networks; encoder-decoder networks
    • G06V10/26 Segmentation of patterns in the image field
    • G06V10/28 Quantising the image, e.g. histogram thresholding for discrimination between background and foreground patterns
    • G06V10/30 Noise filtering
    • G06V10/36 Applying a local operator, e.g. median filtering
    • G06V10/42 Global feature extraction by analysis of the whole pattern, e.g. frequency domain transformations or autocorrelation
    • G06V10/54 Extraction of image or video features relating to texture
    • G06V10/761 Proximity, similarity or dissimilarity measures
    • G06V10/806 Fusion of extracted features at the feature extraction level
    • G06V10/82 Image or video recognition using neural networks
    • G06V20/10 Terrestrial scenes
    • Y02A90/30 Assessment of water resources


Abstract

The invention discloses a water level monitoring method, system, equipment and medium. The method comprises: obtaining an initial water level elevation value and a reference plane; acquiring a first image and a second image through a binocular camera, wherein the first image is captured by the left-eye camera and the second image by the right-eye camera of the binocular camera; performing point cloud extraction processing on the first image and the second image to obtain point cloud data; performing weight analysis on the point cloud data to obtain weight data of the point cloud data; and performing weighted summation processing according to the initial water level elevation value, the point cloud data, the weight data and the reference plane to obtain a current water level elevation value. The method can effectively reduce the cost and power consumption required for water level monitoring, reduce the scene limitations of water level monitoring, and effectively improve the accuracy of water level measurement. The invention relates to the technical field of machine vision.

Description

Water level monitoring method, system, equipment and medium
Technical Field
The invention relates to the technical field of machine vision, and in particular to a water level monitoring method, system, equipment and medium.
Background
In recent years, with the continuous development of technology, monitoring the water level of a water area based on machine vision has become increasingly common. Traditional machine-vision water level monitoring mainly takes two forms: staff-gauge water level monitoring and gaugeless water level monitoring.
Currently, gaugeless water level monitoring has two existing implementations:
The first is the manual water level line calibration method: the elevations of several marker points on the bank are calibrated manually using RTK or other surveying equipment, the water-bank boundary line is then identified through machine vision, and by comparison with the pre-calibrated values the change in the pixel position of the water level line in the image is converted into the change of water level height in real physical space, from which the water level is calculated. This method requires a large amount of field calibration work and cannot be used where the river bank environment is complex and difficult to calibrate.
The second is the structured light projection method: a preset pattern (such as a light spot or stripe of a specific shape) is projected onto the water surface by a structured light emitting device, and the water level value is obtained by detecting how this pattern changes at different water levels (or by ranging the pattern with multiple cameras). However, this method has drawbacks: the added structured light generating device increases cost and power consumption, and under conditions such as a high water surface refractive index, low reflectivity, or strong interference from natural light such as sunlight, it cannot measure the water level accurately and its accuracy is low.
Accordingly, the above problems in the prior art need to be addressed and optimized.
Disclosure of Invention
The present invention aims to solve at least one of the technical problems existing in the related art to a certain extent.
Therefore, a first object of the embodiments of the present invention is to provide a water level monitoring method that can effectively reduce the cost and power consumption required for water level monitoring, reduce the scene limitations of water level monitoring, and effectively improve the accuracy of water level measurement.
A second object of embodiments of the present application is to provide a water level monitoring system.
In order to achieve the technical purpose, the technical scheme adopted by the embodiment of the application comprises the following steps:
in a first aspect, an embodiment of the present application provides a water level monitoring method, including:
acquiring an initial water level elevation value and a reference plane;
acquiring a first image and a second image through a binocular camera, wherein the first image is an image shot by a left-eye camera in the binocular camera, and the second image is an image shot by a right-eye camera in the binocular camera;
performing point cloud extraction processing on the first image and the second image to obtain point cloud data;
performing weight analysis on the point cloud data to obtain weight data of the point cloud data;
and performing weighted summation processing according to the initial water level elevation value, the point cloud data, the weight data and the reference plane to obtain a current water level elevation value.
In addition, the water level monitoring method according to the above embodiment of the present application may further have the following additional technical features:
further, in an embodiment of the present application, the acquiring the reference plane includes:
acquiring a third image and a fourth image through the binocular camera, wherein the third image is an image shot by a left-eye camera in the binocular camera, and the fourth image is an image shot by a right-eye camera in the binocular camera;
acquiring a pixel point pair according to the third image and the fourth image, wherein the pixel point pair represents a pair of mutually matched pixel points between the third image and the fourth image and is recorded as the set of their pixel coordinates in the two images;
performing distance processing on the pixel point pairs to obtain depth distances of the pixel points, and generating point cloud coordinates of the pixel points according to the depth distances and the pixel point pairs;
and determining the reference plane according to the point cloud coordinates of the pixel points.
Further, in an embodiment of the present application, the performing a point cloud extraction process on the first image and the second image to obtain point cloud data includes:
performing feature enhancement processing on the first image to obtain a first feature image, and performing feature enhancement processing on the second image to obtain a second feature image;
and performing similarity registration processing on a first image block in the first feature image according to a second image block in the second feature image, and if the similarity between the first image block and the second image block is smaller than a preset first threshold, obtaining the point cloud data according to the coordinates of the first image block and the coordinates of the second image block, wherein the first image block and the second image block have the same size.
Further, in an embodiment of the present application, the performing feature enhancement processing on the first image to obtain a first feature image includes:
cropping the first image to obtain a plurality of cropped image blocks of equal size;
performing residual attention encoding on all the cropped image blocks to obtain the feature vectors of all the cropped image blocks;
performing residual attention decoding on the feature vectors of all the cropped image blocks to obtain a plurality of feature image blocks;
and stitching all the feature image blocks to obtain the first feature image.
Further, in an embodiment of the present application, the obtaining the point cloud data according to the coordinates of the first image block and the coordinates of the second image block includes:
determining the monitoring range of the binocular camera according to a preset monitoring threshold value and the reference plane;
and screening the coordinates of the first image block and the coordinates of the second image block according to the monitoring range to obtain the point cloud data.
Further, in an embodiment of the present application, the performing weight analysis on the point cloud data to obtain weight data of the point cloud data includes:
sequentially performing coordinate conversion processing and normalization processing on the point cloud data to obtain first point cloud data;
inputting the first point cloud data into a point cloud weight model to obtain an intermediate weight of the first point cloud data;
obtaining a first distance corresponding to the point cloud data according to the point cloud data and the reference plane;
and generating weight data of the point cloud data according to the first distance and the intermediate weight.
Further, in an embodiment of the present application, the performing weighted summation processing according to the initial water level elevation value, the point cloud data, the weight data and the reference plane to obtain a current water level elevation value includes:
obtaining a second distance corresponding to the point cloud data according to the point cloud data and the reference plane;
performing multiply-accumulate operation on the second distance and the weight data to obtain a water level deviation value;
and obtaining the current water level elevation value according to the water level deviation value and the initial water level elevation value.
In a second aspect, embodiments of the present application provide a water level monitoring system, comprising:
the first acquisition module is used for acquiring an initial water level elevation value and a reference plane;
the second acquisition module is used for acquiring a first image and a second image through the binocular camera, wherein the first image is an image shot by a left-eye camera in the binocular camera, and the second image is an image shot by a right-eye camera in the binocular camera;
the first processing module is used for carrying out point cloud extraction processing on the first image and the second image to obtain point cloud data;
the analysis module is used for carrying out weight analysis on the point cloud data to obtain weight data of the point cloud data;
and the second processing module is used for performing weighted summation processing according to the initial water level elevation value, the point cloud data, the weight data and the reference plane to obtain the current water level elevation value.
In a third aspect, embodiments of the present application further provide a computer device, including:
at least one processor;
at least one memory for storing at least one program;
when the at least one program is executed by the at least one processor, the at least one processor is caused to implement the water level monitoring method described above.
In a fourth aspect, embodiments of the present application further provide a computer readable storage medium having stored therein a processor executable program for implementing the water level monitoring method described above when executed by the processor.
The advantages and benefits of the present application will be set forth in part in the description which follows, and in part will be obvious from the description, or may be learned by practice of the present application.
According to the water level monitoring method disclosed by the embodiments of the present application, an initial water level elevation value and a reference plane are obtained; a first image and a second image are acquired through a binocular camera, wherein the first image is captured by the left-eye camera and the second image by the right-eye camera of the binocular camera; point cloud extraction processing is performed on the first image and the second image to obtain point cloud data; weight analysis is performed on the point cloud data to obtain weight data of the point cloud data; and weighted summation processing is performed according to the initial water level elevation value, the point cloud data, the weight data and the reference plane to obtain a current water level elevation value. The water level monitoring method can effectively reduce the cost and power consumption required for water level monitoring, reduce the scene limitations of water level monitoring, and effectively improve the accuracy of water level measurement.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the accompanying drawings of the embodiments are described below. It should be understood that the drawings described below show only some embodiments of the technical solutions of the present application, and that other drawings can be obtained from them by those skilled in the art without inventive labor.
Fig. 1 is a schematic flow chart of a water level monitoring method according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a framework of a feature enhancement process according to an embodiment of the present disclosure;
fig. 3 is a schematic diagram of a framework of a point cloud weight model according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of a water level monitoring system according to an embodiment of the present disclosure;
fig. 5 is a schematic structural diagram of a computer device according to an embodiment of the present application.
Detailed Description
Embodiments of the present application are described in detail below, examples of which are illustrated in the accompanying drawings, wherein the same or similar reference numerals refer to the same or similar elements or elements having the same or similar functions throughout. The embodiments described below by referring to the drawings are exemplary only for the purpose of explaining the present application and are not to be construed as limiting the present application. The step numbers in the following embodiments are set for convenience of illustration only, and the order between the steps is not limited in any way, and the execution order of the steps in the embodiments may be adaptively adjusted according to the understanding of those skilled in the art.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is for the purpose of describing embodiments of the present application only and is not intended to be limiting of the present application.
Currently, gaugeless water level monitoring has two existing implementations:
The first is the manual water level line calibration method: the elevations of several marker points on the bank are calibrated manually using RTK or other surveying equipment, the water-bank boundary line is then identified through machine vision, and by comparison with the pre-calibrated values the change in the pixel position of the water level line in the image is converted into the change of water level height in real physical space, from which the water level is calculated. This method requires a large amount of field calibration work and cannot be used where the river bank environment is complex and difficult to calibrate.
The second is the structured light projection method: a preset pattern (such as a light spot or stripe of a specific shape) is projected onto the water surface by a structured light emitting device, and the water level value is obtained by detecting how this pattern changes at different water levels (or by ranging the pattern with multiple cameras). However, this method has drawbacks: because a structured light generating device is added, cost and power consumption are higher, and when no mains power is available at the measuring site, or under conditions such as a high water surface refractive index, low reflectivity, or strong interference from natural light such as sunlight, accurate measurement of the water level cannot be achieved and accuracy is low.
In view of the above, an embodiment of the invention provides a water level monitoring method that effectively reduces the cost and power consumption required for water level monitoring, reduces the scene limitations of water level monitoring, and effectively improves the accuracy of water level measurement.
Referring to fig. 1, in an embodiment of the present application, a water level monitoring method includes:
step 110, acquiring an initial water level elevation value and a reference plane;
the step 110 of obtaining the reference plane includes:
step 111, acquiring a third image and a fourth image through the binocular camera, wherein the third image is an image shot by a left-eye camera in the binocular camera, and the fourth image is an image shot by a right-eye camera in the binocular camera;
step 112, acquiring a pixel point pair according to the third image and the fourth image, wherein the pixel point pair represents a pair of mutually matched pixel points between the third image and the fourth image and is recorded as the set of their pixel coordinates in the two images;
step 113, performing distance processing on the pixel point pair to obtain a depth distance of the pixel point, and generating a point cloud coordinate of the pixel point according to the depth distance and the pixel point pair;
and 114, determining the reference plane according to the point cloud coordinates of the pixel points.
In this embodiment of the present application, the initial water level elevation value may be the water level elevation value of the current water area obtained by instrument measurement or from reference data when the monitoring device is first installed. The reference plane is the reference plane for monitoring the water level of the water area. Specifically, a third image and a fourth image can be obtained through the binocular camera, mutually matched pixel points are selected from the third image and the fourth image to form pixel point pairs, the distance between each pixel point and the binocular camera (the depth distance) is calculated from its pixel point pair according to the binocular distance principle, the point cloud coordinates of the pixel point are generated from the depth distance and the corresponding pixel point pair, and the reference plane is then determined from the point cloud coordinates of three pixel points.
For example, suppose a point a in the actual environment is imaged at point P1 (u1, v1) in the third image and at point Q1 (J1, K1) in the fourth image; point P1 and point Q1 then form a pixel point pair. Distance processing is performed on the pixel point pair according to the binocular distance principle, with the following formula:
z = (b × f) / disparity
where z is the depth distance, b is the distance between the left-eye camera and the right-eye camera, f is the focal length of the binocular camera, and disparity is the difference in pixel positions between point P1 and point Q1.
It should be noted that the pixel position difference between point P1 and point Q1 can be obtained directly from the pixel coordinates of the two points, and the two parameters b and f can be found in the factory parameters of the binocular camera.
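For illustration, a minimal sketch of this computation follows, assuming b and f are read from the camera's factory parameters; the pinhole back-projection helper pixel_to_point and all numeric values are assumptions added for the example, not taken from the patent.

```python
def depth_from_disparity(b: float, f: float, u_left: float, u_right: float) -> float:
    disparity = u_left - u_right      # pixel position difference between point P1 and point Q1
    return b * f / disparity          # z = (b * f) / disparity

def pixel_to_point(u: float, v: float, z: float, f: float, cu: float, cv: float):
    # hypothetical pinhole back-projection: pixel coordinates plus depth -> point cloud coordinates
    return ((u - cu) * z / f, (v - cv) * z / f, z)

z = depth_from_disparity(b=0.12, f=800.0, u_left=640.0, u_right=600.0)   # 2.4
xyz = pixel_to_point(640.0, 360.0, z, f=800.0, cu=640.0, cv=360.0)
```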
It can be understood that, after the depth distance of a pixel point is obtained, the point cloud coordinates of the pixel point can be generated from the depth distance together with the pixel coordinates of point P1 or point Q1 in the pixel point pair. A reference plane can then be determined from the point cloud coordinates of three pixel points, and the space plane equation of the reference plane can be expressed as:
Ax + By + Cz + D = 0
where x is the abscissa of a pixel point, y is its ordinate, z is its vertical coordinate, and A, B, C and D are known constants with A, B and C not all zero.
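As an illustration, the plane coefficients can be recovered from three non-collinear point cloud coordinates with a cross product; this short sketch is an assumption about the computation, not text from the patent.

```python
import numpy as np

def plane_from_points(p1, p2, p3):
    p1, p2, p3 = map(np.asarray, (p1, p2, p3))
    n = np.cross(p2 - p1, p3 - p1)     # normal vector gives (A, B, C)
    A, B, C = n
    D = -float(np.dot(n, p1))          # chosen so that A*x + B*y + C*z + D = 0 holds at p1
    return float(A), float(B), float(C), D

print(plane_from_points((0, 0, 1), (1, 0, 1), (0, 1, 1)))   # the plane z = 1
```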
Step 120, acquiring a first image and a second image through a binocular camera, wherein the first image is an image shot by a left-eye camera in the binocular camera, and the second image is an image shot by a right-eye camera in the binocular camera;
step 130, performing point cloud extraction processing on the first image and the second image to obtain point cloud data;
In this step, the point cloud extraction processing can be divided into two stages: feature enhancement and similarity registration. The feature enhancement stage eliminates noise in the image that affects feature recognition while strengthening feature textures, thereby improving the accuracy of feature extraction and matching for the first image and/or the second image.
The step 130 of performing a point cloud extraction process on the first image and the second image to obtain point cloud data includes:
step 131, performing feature enhancement processing on the first image to obtain a first feature image, and performing feature enhancement processing on the second image to obtain a second feature image;
step 131, performing feature enhancement processing on the first image to obtain a first feature image, including:
step 1311, cropping the first image to obtain a plurality of cropped image blocks of equal size;
step 1312, performing residual attention encoding on all the cropped image blocks to obtain the feature vectors of all the cropped image blocks;
step 1313, performing residual attention decoding on the feature vectors of all the cropped image blocks to obtain a plurality of feature image blocks;
and step 1314, stitching all the feature image blocks to obtain the first feature image.
Referring to fig. 2, in an embodiment of the present application, the feature enhancement processing may be implemented by a feature enhancement model, the residual attention encoding may be implemented by a Transformer encoder, and the residual attention decoding may be implemented by a Transformer decoder. Specifically, the feature enhancement model first crops the first image into a plurality of cropped image blocks of fixed size, arranges them according to their mapping relation to the first image and inputs them into the Transformer encoder to obtain the feature vector of each cropped image block, then inputs the feature vectors of all the cropped image blocks into the Transformer decoder to obtain the feature image blocks, and finally stitches all the obtained feature image blocks according to the mapping relation to obtain the first feature image. It can be understood that the steps of performing feature enhancement processing on the second image are similar to those for the first image and can be inferred by analogy, so they are not repeated here.
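The sketch below illustrates this crop-encode-decode-stitch pipeline in PyTorch; the tile size, embedding width, head and layer counts are illustrative assumptions, not values disclosed in the patent.

```python
import torch
import torch.nn as nn

class FeatureEnhancer(nn.Module):
    def __init__(self, tile=16, dim=128, heads=4, layers=2):
        super().__init__()
        self.tile = tile
        self.embed = nn.Linear(tile * tile, dim)      # cropped image block -> feature vector
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(dim, heads, batch_first=True), layers)  # residual attention encoding
        self.decoder = nn.TransformerDecoder(
            nn.TransformerDecoderLayer(dim, heads, batch_first=True), layers)  # residual attention decoding
        self.unembed = nn.Linear(dim, tile * tile)    # feature vector -> feature image block

    def forward(self, img):                           # img: (H, W) grayscale, H and W divisible by tile
        t, (H, W) = self.tile, img.shape
        # crop into equal-size blocks, preserving their grid order (the mapping relation)
        blocks = img.unfold(0, t, t).unfold(1, t, t).reshape(-1, t * t)
        tokens = self.embed(blocks).unsqueeze(0)
        feats = self.decoder(tokens, self.encoder(tokens))
        out = self.unembed(feats).squeeze(0)
        # stitch the feature image blocks back with the same mapping relation
        return out.reshape(H // t, W // t, t, t).permute(0, 2, 1, 3).reshape(H, W)

enhanced = FeatureEnhancer()(torch.rand(64, 64))
```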
And 132, performing similarity registration processing on a first image block in the first feature image according to a second image block in the second feature image, and if the similarity between the first image block and the second image block is smaller than a preset first threshold, obtaining the point cloud data according to the coordinates of the first image block and the coordinates of the second image block, wherein the first image block and the second image block have the same size.
In this embodiment of the present application, a first image block in the first feature image may be taken as the block to be checked; a second image block is then selected from the second feature image, and similarity registration processing is performed on the first image block against the second image block. When the similarity between the first image block and the second image block is smaller than the preset first threshold, the two blocks are considered similar, and the coordinates of the first image block and the coordinates of the second image block are taken as point cloud data.
In this embodiment of the present application, the similarity registration processing may be implemented by a Transformer model. Specifically, grayscale conversion, binarization, filtering and normalization may be performed on the first image block and the second image block respectively; the token of the first image block and the token of the second image block are then extracted by the Transformer model, and the mean square error between the two tokens is calculated. The resulting mean square error value is taken as the similarity of the first image block and the second image block, so a smaller similarity value means the two blocks are more alike. The first threshold may be set according to actual requirements.
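A minimal sketch of this similarity test follows; the token vectors are stand-ins for the Transformer model's outputs and the threshold value is an illustrative assumption.

```python
import numpy as np

def tile_similarity(token_a: np.ndarray, token_b: np.ndarray) -> float:
    # mean square error between the two tokens: a smaller value means more similar blocks
    return float(np.mean((token_a - token_b) ** 2))

FIRST_THRESHOLD = 0.05   # hypothetical value; set according to actual requirements
a, b = np.random.rand(128), np.random.rand(128)
if tile_similarity(a, b) < FIRST_THRESHOLD:
    print("blocks match: keep their coordinates as point cloud data")
```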
It should be noted that when the first image block is taken as the block to be checked, the selected second image block does not necessarily satisfy the condition that the similarity is smaller than the first threshold. Therefore, in this embodiment, when the similarity between the first image block and the second image block is greater than or equal to the first threshold, a new second image block may be generated by adjusting the selection position in the second feature image, or the positions and sizes of the first image block and the second image block may be adjusted and the similarity registration processing continued.
The step 132 of obtaining the point cloud data according to the coordinates of the first image block and the coordinates of the second image block includes:
step 1321, determining a monitoring range of the binocular camera according to a preset monitoring threshold and the reference plane;
and 1322, screening the coordinates of the first image block and the coordinates of the second image block according to the monitoring range to obtain the point cloud data.
It can be understood that the monitoring threshold includes a width threshold and a height threshold, which may be set according to the shooting range of the binocular camera. Specifically, a center point O (x0, y0, z0) may be selected arbitrarily in the reference plane, where x0 is the abscissa of the center point O, y0 is the ordinate of the center point O, and z0 is the vertical coordinate of the center point O, and the monitoring range is determined by the width w and the height h in the monitoring threshold. Illustratively, in the embodiment of the present application, the monitoring range may be x ∈ [x0-w/2, x0+w/2], y ∈ [y0-h/2, y0+h/2], z ∈ [z0-w/2, z0+w/2], where w is the width, h is the height, and x, y and z are the abscissa, ordinate and vertical coordinate within the monitoring range. It should be noted that this example is merely illustrative and is not intended to limit the present application in any way.
It can be understood that a mapping relationship exists between the coordinates of the first image block, the coordinates of the second image block and the reference plane, so the coordinates of the first image block and the coordinates of the second image block can be screened by the monitoring range: when the mapped coordinates of the first image block and/or the second image block fall outside the monitoring range, the coordinate points beyond the monitoring range can be discarded, reducing the computation required for water level monitoring.
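The screening step can be sketched as a simple box test, assuming the point coordinates have already been mapped to the reference-plane frame; the center point and the w and h values here are illustrative.

```python
import numpy as np

def screen_points(points: np.ndarray, center, w: float, h: float) -> np.ndarray:
    """Keep only the points whose coordinates fall inside the monitoring range."""
    x0, y0, z0 = center
    inside = ((np.abs(points[:, 0] - x0) <= w / 2) &
              (np.abs(points[:, 1] - y0) <= h / 2) &
              (np.abs(points[:, 2] - z0) <= w / 2))
    return points[inside]

cloud = screen_points(np.random.rand(1000, 3), center=(0.5, 0.5, 0.5), w=0.4, h=0.4)
```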
Step 140, performing weight analysis on the point cloud data to obtain weight data of the point cloud data;
step 140, performing weight analysis on the point cloud data to obtain weight data of the point cloud data, including:
step 141, sequentially performing coordinate conversion processing and normalization processing on the point cloud data to obtain first point cloud data;
step 142, inputting the first point cloud data into a point cloud weight model to obtain an intermediate weight of the first point cloud data;
step 143, obtaining a first distance corresponding to the point cloud data according to the point cloud data and the reference plane;
and 144, generating weight data of the point cloud data according to the first distance and the intermediate weight.
In this embodiment of the present application, before the point cloud data is input into the point cloud weight model, coordinate conversion processing may first be performed to convert the initial reference coordinate system of the point cloud data into a reference coordinate system centered on the center point of the reference plane; normalization processing is then performed on the converted point cloud data to adjust the coordinate ranges of all points to [-1, 1], yielding the first point cloud data, which is then recognized by the point cloud weight model to obtain the intermediate weights of the first point cloud data. Referring to fig. 3, the point cloud weight model includes an input layer (INPUT), a multi-layer perceptron layer (MLP), self-attention layers (SA1, SA2, SA3 and SA4), a concatenation layer (CONCAT) and a fully connected layer (FC). The first point cloud data is fed to the input layer; in the multi-layer perceptron layer the feature of each point is expanded from 3 dimensions to 128 dimensions; features of the first point cloud data are then extracted through the four self-attention layers; the features extracted by the respective self-attention layers are fused in the concatenation layer; and finally the fully connected layer extracts the global features of the first point cloud data and outputs the intermediate weights.
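A minimal PyTorch sketch of this architecture follows; the 3-to-128 expansion and the four self-attention layers follow the description above, while the head count and fully connected sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class PointCloudWeightModel(nn.Module):
    def __init__(self, dim=128, heads=4):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(3, dim), nn.ReLU(), nn.Linear(dim, dim))  # 3 -> 128 dims
        self.sa = nn.ModuleList(nn.MultiheadAttention(dim, heads, batch_first=True)
                                for _ in range(4))                                   # SA1..SA4
        self.fc = nn.Sequential(nn.Linear(4 * dim, dim), nn.ReLU(), nn.Linear(dim, 1))

    def forward(self, pts):                         # pts: (N, 3), normalised to [-1, 1]
        x = self.mlp(pts).unsqueeze(0)
        feats = []
        for sa in self.sa:                          # four stacked self-attention layers
            x, _ = sa(x, x, x)
            feats.append(x)
        cat = torch.cat(feats, dim=-1)              # CONCAT: fuse the per-layer features
        return self.fc(cat).squeeze(-1).squeeze(0)  # FC: one intermediate weight per point

weights = PointCloudWeightModel()(torch.rand(256, 3) * 2 - 1)
```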
It should be noted that in the embodiment of the present application, the feature enhancement processing not only makes the texture of the water surface clearer, so that a good recognition effect is still obtained when the water surface has a low refractive index and high reflectivity, but the similarity registration processing also combines key point recognition (point cloud data recognition) of the natural water surface texture with the point cloud weights, so that water level monitoring can still be achieved by recognizing the natural texture when interference from sunlight and the like is strong. Note that when sunlight interference is strong, the refractive index of the water surface is high and its reflectivity is low: textures produced by a structured light generating device are severely disturbed by the strong natural light, whereas the natural texture is strengthened by it.
It can be understood that the first distance can be obtained from the distance formula between a point in the point cloud data and the reference plane, as follows:
d = |Ax + By + Cz + D| / √(A² + B² + C²)
where d is the first distance, A, B, C and D are the known constants in the space plane equation of the reference plane, x is the abscissa of the point in the point cloud data, y is its ordinate, and z is its vertical coordinate.
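In code, this point-to-plane distance is a one-liner; the plane coefficients here are illustrative.

```python
import numpy as np

def point_plane_distance(p, A: float, B: float, C: float, D: float) -> float:
    x, y, z = p
    return abs(A * x + B * y + C * z + D) / np.sqrt(A * A + B * B + C * C)

d = point_plane_distance((1.0, 2.0, 0.5), A=0.0, B=0.0, C=1.0, D=-1.0)   # 0.5
```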
In the embodiment of the present application, the weight data may be generated from the weights corresponding to all points in the point cloud data, where the weight w_i of the i-th point in the point cloud data is obtained by combining its intermediate weight with its first distance d_i normalized by Σ_{j=1}^{n} d_j, the sum of the first distances corresponding to the n points contained in the point cloud data.
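One plausible combination of the intermediate weights with this distance normalization is sketched below; the exact combination formula is an assumption for illustration, not taken from the patent text.

```python
import numpy as np

def point_weights(intermediate: np.ndarray, d1: np.ndarray) -> np.ndarray:
    # assumed combination: scale each intermediate weight by its normalised first distance
    w = intermediate * d1 / d1.sum()
    return w / w.sum()               # renormalise so the weights sum to 1

w = point_weights(np.array([0.9, 0.8, 0.5]), np.array([0.2, 0.1, 0.3]))
```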
And 150, performing weighted summation processing according to the initial water level elevation value, the point cloud data, the weight data and the reference plane to obtain a current water level elevation value.
Step 150, performing weighted summation processing according to the initial water level elevation value, the point cloud data, the weight data and the reference plane to obtain a current water level elevation value, includes:
step 151, obtaining a second distance corresponding to the point cloud data according to the point cloud data and the reference plane;
step 152, performing multiply-accumulate operation on the second distance and the weight data to obtain a water level deviation value;
and 153, obtaining the current water level elevation value according to the water level deviation value and the initial water level elevation value.
In this embodiment of the present application, the determination of the second distance may refer to the foregoing description of the first distance, which is not repeated here. A multiply-accumulate operation can then be performed on the second distance and the weight data of each point in the point cloud data to obtain a water level deviation value, and the current water level elevation value is then determined from the water level deviation value and the initial water level elevation value. Steps 152 and 153 are together equivalent to the following formulas:
Δh = Σ_{i=1}^{n} w_i · d′_i and h = h₀ + Δh
where h is the current water level elevation value, h₀ is the initial water level elevation value, Δh is the water level deviation value, w_i is the weight of the i-th point, and d′_i is the second distance corresponding to the i-th point.
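Steps 152 and 153 reduce to a dot product followed by an offset; this sketch assumes the per-point weights and second distances are already available as arrays.

```python
import numpy as np

def current_water_level(h0: float, w: np.ndarray, d2: np.ndarray) -> float:
    delta_h = float(np.dot(w, d2))   # multiply-accumulate of second distance and weight data
    return h0 + delta_h              # offset the initial water level elevation value

h = current_water_level(10.0, np.array([0.2, 0.3, 0.5]), np.array([0.11, 0.10, 0.09]))  # 10.097
```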
Referring to fig. 4, an embodiment of the present application further provides a water level monitoring system, comprising:
a first obtaining module 101, configured to obtain an initial water level elevation value and a reference plane;
a second obtaining module 102, configured to obtain a first image and a second image through a binocular camera, where the first image is an image captured by a left-eye camera in the binocular camera, and the second image is an image captured by a right-eye camera in the binocular camera;
a first processing module 103, configured to perform a point cloud extraction process on the first image and the second image, so as to obtain point cloud data;
the analysis module 104 is configured to perform weight analysis on the point cloud data to obtain weight data of the point cloud data;
and the second processing module 105 is configured to perform weighted summation processing according to the initial water level elevation value, the point cloud data, the weight data and the reference plane to obtain the current water level elevation value. It can be understood that the content of the above method embodiment is applicable to this system embodiment: the functions implemented by this system embodiment are the same as those of the above method embodiment, and the beneficial effects achieved are the same.
Referring to fig. 5, an embodiment of the present application further provides a computer device, including:
at least one processor 201;
at least one memory 202 for storing at least one program;
the at least one program, when executed by the at least one processor 201, causes the at least one processor 201 to implement the method embodiments described above.
Similarly, it can be understood that the content in the above method embodiment is applicable to the embodiment of the present apparatus, and the functions specifically implemented by the embodiment of the present apparatus are the same as those of the embodiment of the foregoing method, and the achieved beneficial effects are the same as those achieved by the embodiment of the foregoing method.
The present embodiment also provides a computer readable storage medium, in which a program executable by the processor 201 is stored, the program executable by the processor 201 being configured to implement the above-mentioned method embodiments when executed by the processor 201.
Similarly, the content of the above method embodiment is applicable to this computer-readable storage medium embodiment; the functions implemented by this embodiment are the same as those of the above method embodiment, and the beneficial effects achieved are the same.
In some alternative embodiments, the functions/acts noted in the block diagrams may occur out of the order noted in the operational illustrations. For example, two blocks shown in succession may in fact be executed substantially concurrently or the blocks may sometimes be executed in the reverse order, depending upon the functionality/acts involved. Furthermore, the embodiments presented and described in the flowcharts of this application are provided by way of example in order to provide a more thorough understanding of the technology. The disclosed methods are not limited to the operations and logic flows presented herein. Alternative embodiments are contemplated in which the order of various operations is changed, and in which sub-operations described as part of a larger operation are performed independently.
Furthermore, while the present application is described in the context of functional modules, it should be appreciated that, unless otherwise indicated, one or more of the functions and/or features may be integrated in a single physical device and/or software module or one or more of the functions and/or features may be implemented in separate physical devices or software modules. It will also be appreciated that a detailed discussion of the actual implementation of each module is not necessary to an understanding of the present application. Rather, the actual implementation of the various functional modules in the apparatus disclosed herein will be apparent to those skilled in the art from consideration of their attributes, functions and internal relationships. Thus, those of ordinary skill in the art will be able to implement the present application as set forth in the claims without undue experimentation. It is also to be understood that the specific concepts disclosed are merely illustrative and are not intended to be limiting upon the scope of the application, which is to be defined by the appended claims and their full scope of equivalents.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application may be embodied essentially or in a part contributing to the prior art or in a part of the technical solution, in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to perform all or part of the steps of the methods of the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a random access Memory (RAM, random Access Memory), a magnetic disk, or an optical disk, or other various media capable of storing program codes.
Logic and/or steps represented in the flowcharts or otherwise described herein, e.g., a ordered listing of executable instructions for implementing logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). Additionally, the computer-readable medium may even be paper or other suitable medium upon which the program is printed, as the program may be electronically captured, via, for instance, optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner, if necessary, and then stored in a computer memory.
It is to be understood that portions of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above-described embodiments, the various steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, may be implemented using any one or combination of the following techniques, as is well known in the art: discrete logic circuits having logic gates for implementing logic functions on data signals, application specific integrated circuits having suitable combinational logic gates, programmable Gate Arrays (PGAs), field Programmable Gate Arrays (FPGAs), and the like.
In the foregoing description of the present specification, descriptions of the terms "one embodiment/example", "another embodiment/example", "certain embodiments/examples", and the like, are intended to mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present application. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiments or examples. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
While embodiments of the present application have been shown and described, it will be understood by those of ordinary skill in the art that: many changes, modifications, substitutions and variations may be made to the embodiments without departing from the principles and spirit of the application, the scope of which is defined by the claims and their equivalents.
While the preferred embodiments of the present application have been described in detail, the present application is not limited to the embodiments, and various equivalent modifications and substitutions can be made by those skilled in the art without departing from the spirit of the present application, and these equivalent modifications and substitutions are intended to be included in the scope of the present application as defined in the appended claims.

Claims (10)

1. A water level monitoring method, characterized in that the water level monitoring method comprises:
acquiring an initial water level elevation value and a reference plane;
acquiring a first image and a second image through a binocular camera, wherein the first image is an image shot by a left-eye camera in the binocular camera, and the second image is an image shot by a right-eye camera in the binocular camera;
performing point cloud extraction processing on the first image and the second image to obtain point cloud data;
performing weight analysis on the point cloud data to obtain weight data of the point cloud data;
and performing weighted summation processing according to the initial water level elevation value, the point cloud data, the weight data and the reference plane to obtain a current water level elevation value.
2. The water level monitoring method of claim 1, wherein the acquiring the reference plane comprises:
acquiring a third image and a fourth image through the binocular camera, wherein the third image is an image shot by a left-eye camera in the binocular camera, and the fourth image is an image shot by a right-eye camera in the binocular camera;
acquiring a pixel point pair according to the third image and the fourth image, wherein the pixel point pair represents a pair of mutually matched pixel points between the third image and the fourth image and is recorded as the set of their pixel coordinates in the two images;
performing distance processing on the pixel point pairs to obtain depth distances of the pixel points, and generating point cloud coordinates of the pixel points according to the depth distances and the pixel point pairs;
and determining the reference plane according to the point cloud coordinates of the pixel points.
3. The water level monitoring method according to claim 1, wherein the performing a point cloud extraction process on the first image and the second image to obtain point cloud data includes:
performing feature enhancement processing on the first image to obtain a first feature image, and performing feature enhancement processing on the second image to obtain a second feature image;
and performing similarity registration processing on a first image block in the first feature image according to a second image block in the second feature image, and if the similarity between the first image block and the second image block is smaller than a preset first threshold, obtaining the point cloud data according to the coordinates of the first image block and the coordinates of the second image block, wherein the first image block and the second image block have the same size.
4. A water level monitoring method according to claim 3, wherein the performing feature enhancement processing on the first image to obtain a first feature image includes:
cropping the first image to obtain a plurality of cropped image blocks of equal size;
performing residual attention encoding on all the cropped image blocks to obtain the feature vectors of all the cropped image blocks;
performing residual attention decoding on the feature vectors of all the cropped image blocks to obtain a plurality of feature image blocks;
and stitching all the feature image blocks to obtain the first feature image.
5. A water level monitoring method according to claim 3, wherein the obtaining the point cloud data according to the coordinates of the first image block and the coordinates of the second image block comprises:
determining the monitoring range of the binocular camera according to a preset monitoring threshold value and the reference plane;
and screening the coordinates of the first image block and the coordinates of the second image block according to the monitoring range to obtain the point cloud data.
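
A minimal sketch of the screening step of claim 5, assuming the monitoring threshold is a maximum allowed distance (in metres) from the reference plane:

import numpy as np

def screen_points(points, plane, monitor_threshold=0.5):
    # Keep only points whose distance to the reference plane lies inside
    # the monitoring range defined by the preset monitoring threshold.
    a, b, c, d = plane
    n = np.array([a, b, c])
    dist = np.abs(points @ n + d) / np.linalg.norm(n)
    return points[dist <= monitor_threshold]
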
6. The water level monitoring method according to claim 5, wherein the performing weight analysis on the point cloud data to obtain weight data of the point cloud data includes:
sequentially performing coordinate conversion processing and normalization processing on the point cloud data to obtain first point cloud data;
inputting the first point cloud data into a point cloud weight model to obtain an intermediate weight of the first point cloud data;
obtaining a first distance corresponding to the point cloud data according to the point cloud data and the reference plane;
and generating weight data of the point cloud data according to the first distance and the intermediate weight.
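
For illustration, the weight analysis of claim 6 could look as follows; weight_model stands in for the learned point cloud weight model, and the distance-based damping rule combining the first distance with the intermediate weight is an assumption, since the claim does not specify the combination:

import numpy as np

def analyze_weights(points, plane, weight_model):
    # Coordinate conversion and normalisation: centre the cloud and scale
    # it into a unit box to obtain the first point cloud data.
    centred = points - points.mean(axis=0)
    normed = centred / (np.abs(centred).max() + 1e-9)
    w_mid = weight_model(normed)                 # intermediate weights, (N,)
    a, b, c, d = plane
    dist = np.abs(points @ np.array([a, b, c]) + d) / np.linalg.norm([a, b, c])
    w = w_mid / (1.0 + dist)                     # damp weights far from the plane
    return w / w.sum()                           # final weight data, sums to one
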
7. The water level monitoring method according to claim 1, wherein the performing weighted summation processing according to the initial water level elevation value, the point cloud data, the weight data and the reference plane to obtain a current water level elevation value comprises:
obtaining a second distance corresponding to the point cloud data according to the point cloud data and the reference plane;
performing a multiply-accumulate operation on the second distance and the weight data to obtain a water level deviation value;
and obtaining the current water level elevation value according to the water level deviation value and the initial water level elevation value.
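
Spelling out the final step of claim 7 (the same weighted summation used in the claim 1 sketch above), with the water level deviation value made explicit:

import numpy as np

def current_elevation(points, weights, plane, z0):
    a, b, c, d = plane
    # Second distance: signed distance of each point to the reference plane.
    second_dist = (points @ np.array([a, b, c]) + d) / np.linalg.norm([a, b, c])
    deviation = float(np.dot(second_dist, weights))   # multiply-accumulate
    return z0 + deviation                             # current water level elevation
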
8. A water level monitoring system, comprising:
the first acquisition module is used for acquiring an initial water level elevation value and a reference plane;
the second acquisition module is used for acquiring a first image and a second image through the binocular camera, wherein the first image is an image shot by a left-eye camera in the binocular camera, and the second image is an image shot by a right-eye camera in the binocular camera;
the first processing module is used for carrying out point cloud extraction processing on the first image and the second image to obtain point cloud data;
the analysis module is used for carrying out weight analysis on the point cloud data to obtain weight data of the point cloud data;
and the second processing module is used for performing weighted summation processing according to the initial water level elevation value, the point cloud data, the weight data and the reference plane to obtain the current water level elevation value.
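
Purely as an architectural sketch of claim 8 (all names are illustrative), the modules can be wired as callables whose bodies would wrap the functions sketched above:

class WaterLevelMonitoringSystem:
    def __init__(self, acquire_reference, capture_stereo,
                 extract_point_cloud, analyze_weights, weighted_sum):
        self.first_acquisition = acquire_reference    # initial elevation + plane
        self.second_acquisition = capture_stereo      # left/right images
        self.first_processing = extract_point_cloud   # claims 3-5
        self.analysis = analyze_weights               # claim 6
        self.second_processing = weighted_sum         # claim 7

    def run(self):
        z0, plane = self.first_acquisition()
        img_left, img_right = self.second_acquisition()
        points = self.first_processing(img_left, img_right)
        weights = self.analysis(points, plane)
        return self.second_processing(z0, points, weights, plane)
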
9. A computer device, comprising:
at least one processor;
at least one memory for storing at least one program;
when the at least one program is executed by the at least one processor, the at least one processor is caused to implement the water level monitoring method as claimed in any one of claims 1-7.
10. A computer-readable storage medium in which a processor-executable program is stored, characterized in that the processor-executable program, when executed by a processor, implements the water level monitoring method according to any one of claims 1-7.
CN202311139248.1A 2023-09-05 Water level monitoring method, system, equipment and medium Active CN117367544B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311139248.1A CN117367544B (en) 2023-09-05 Water level monitoring method, system, equipment and medium

Publications (2)

Publication Number Publication Date
CN117367544A 2024-01-09
CN117367544B 2024-06-25

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101602471B1 (en) * 2014-10-01 2016-03-25 공간정보기술(주) River water level measurement and warning system.
CN109919993A (en) * 2019-03-12 2019-06-21 腾讯科技(深圳)有限公司 Parallax picture capturing method, device and equipment and control system
CN110136114A (en) * 2019-05-15 2019-08-16 厦门理工学院 A kind of wave measurement method, terminal device and storage medium
CN110763189A (en) * 2019-10-15 2020-02-07 哈尔滨工程大学 Sea wave elevation measurement experimental device and method based on binocular vision
CN111950426A (en) * 2020-08-06 2020-11-17 东软睿驰汽车技术(沈阳)有限公司 Target detection method and device and delivery vehicle
CN113819974A (en) * 2021-09-17 2021-12-21 河海大学 River water level visual measurement method without water gauge
CN114299346A (en) * 2022-01-05 2022-04-08 重庆大学 Point cloud identification method and system based on channel attention
CN114812736A (en) * 2022-04-14 2022-07-29 山西长河科技股份有限公司 Water level monitoring method, device, terminal and storage medium
CN115060343A (en) * 2022-06-08 2022-09-16 山东智洋上水信息技术有限公司 Point cloud-based river water level detection system, detection method and program product
CN116202434A (en) * 2022-12-30 2023-06-02 东南大学 Airport pavement snow thickness measuring method based on binocular stereoscopic vision

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
SHI, Hanyao; TAO, Qingchuan: "Water Level Measurement Algorithm Based on Binocular Vision", Modern Computer (Professional Edition), no. 08, 15 March 2017 (2017-03-15), pages 55-58 *

Similar Documents

Publication Publication Date Title
Schilling et al. Trust your model: Light field depth estimation with inline occlusion handling
CN108846888B (en) Automatic extraction method for fine size information of ancient wood building components
Xu et al. Multi-scale geometric consistency guided and planar prior assisted multi-view stereo
CN108629812A (en) A kind of distance measuring method based on binocular camera
CN105517677A (en) Depth/disparity map post-processing method and apparatus
CN104867135A (en) High-precision stereo matching method based on guiding image guidance
CN111899353A (en) Three-dimensional scanning point cloud hole filling method based on generation countermeasure network
CN108596975A (en) A kind of Stereo Matching Algorithm for weak texture region
CN108550166B (en) Spatial target image matching method
CN116452644A (en) Three-dimensional point cloud registration method and device based on feature descriptors and storage medium
CN111126116A (en) Unmanned ship river channel garbage identification method and system
CN113887624A (en) Improved feature stereo matching method based on binocular vision
CN112365586A (en) 3D face modeling and stereo judging method and binocular 3D face modeling and stereo judging method of embedded platform
CN112258474A (en) Wall surface anomaly detection method and device
CN114022474A (en) Particle grading rapid detection method based on YOLO-V4
KR20230132686A (en) A method for damage identification and volume quantification of concrete pipes based on PointNet++ neural network
CN114842340A (en) Robot binocular stereoscopic vision obstacle sensing method and system
Guo et al. 2D to 3D conversion based on edge defocus and segmentation
CN111415305A (en) Method for recovering three-dimensional scene, computer-readable storage medium and unmanned aerial vehicle
CN117367544B (en) Water level monitoring method, system, equipment and medium
CN113128346B (en) Target identification method, system and device for crane construction site and storage medium
CN113744324A (en) Stereo matching method combining multiple similarity measures
CN117367544A (en) Water level monitoring method, system, equipment and medium
CN117292076A (en) Dynamic three-dimensional reconstruction method and system for local operation scene of engineering machinery
Zhu et al. Triangulation of well-defined points as a constraint for reliable image matching

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant