CN117496414A - Ship water gauge automatic measurement method based on deep learning - Google Patents

Ship water gauge automatic measurement method based on deep learning

Info

Publication number
CN117496414A
Authority
CN
China
Prior art keywords
water gauge
water
value
deep learning
coordinate
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311754004.4A
Other languages
Chinese (zh)
Inventor
孙宗康
叶华锋
毛奕升
李威
Current Assignee
Guangdong Electric Power Development Co ltd
Original Assignee
Guangdong Electric Power Development Co ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Electric Power Development Co ltd filed Critical Guangdong Electric Power Development Co ltd
Priority to CN202311754004.4A priority Critical patent/CN117496414A/en
Publication of CN117496414A publication Critical patent/CN117496414A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/46 Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N5/00 Computing arrangements using knowledge-based models
    • G06N5/04 Inference or reasoning models
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/13 Edge detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774 Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Medical Informatics (AREA)
  • Computational Linguistics (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Image Analysis (AREA)

Abstract

A ship water gauge automatic measurement method based on deep learning comprises the following steps: 1. loading the water gauge video and performing frame extraction; 2. training a segmentation model and a target detection model on the extracted pictures, then extracting frames from a newly captured water gauge video and running the segmentation model on them to obtain binarized grayscale images; 3. performing edge detection on each binarized grayscale image to obtain the waterline coordinates, drawing the line on the original picture, and saving the result as a new original picture; 4. running the detection model on the new original picture to obtain a detection result image; 5. calculating a graduated scale from the coordinates of the character bounding boxes in the detection result image; 6. calculating the water gauge value of the single picture from the scale and the waterline coordinates; 7. sorting the water gauge values of all frames and taking the mean of the middle third as the final water gauge value of the water gauge video captured in real time. The invention automates water gauge measurement, provides a basis for water gauge weighing, and is convenient to popularize in industries such as port trade.

Description

Ship water gauge automatic measurement method based on deep learning
Technical Field
The invention relates to the technical fields of water gauge weighing and deep learning, and in particular to a ship water gauge automatic measurement method based on deep learning.
Background
In the cargo transaction process of marine transport ships, measuring the load of the transport ship is a very important link for both parties to the transaction, and the result of the load measurement serves as the basis for cargo trade settlement, claim handling, customs duty calculation and the like. Among methods for measuring the cargo weight of a freight ship, the water gauge weighing method is the most widely used at home and abroad. Its principle is to measure the water gauge readings before and after loading or unloading to calculate the change in the ship's displacement, and then to calculate the weight of the loaded cargo after accounting for the weight of ship stores such as fresh water, ballast water and fuel oil. At present, the most common water gauge measurement approach is manual observation: a professional water gauge surveyor climbs a gangway ladder or approaches the water gauge marks by boat, and reads the ship's draft value by eye.
Therefore, to improve the safety and convenience of ship load metering, ensure that cargo transactions proceed efficiently, and overcome the interference of the complex marine environment, more scientific and intelligent means of ship load metering are needed. In recent years, with the rapid development and gradual maturation of unmanned aerial vehicle (UAV) technology, UAVs have been applied to fields such as maritime supervision, port navigation, and maritime search and rescue. In water gauge measurement, a UAV can rely on its high mobility to quickly collect ship water gauge images and synchronize them remotely to a mobile terminal, effectively avoiding the various safety risks of the manual observation approach. Meanwhile, image recognition and analysis, an important application of artificial intelligence, can accurately recognize the water gauge marks and read them intelligently from dynamic images of the ship's water gauge, ensuring that ship load measurement is convenient and efficient.
At present, most shipping operations still measure ship load mainly by manual water gauge observation; developing an intelligent algorithm can avoid the corresponding safety risks and, through technical innovation, reduce time and economic costs.
Disclosure of Invention
The invention provides a ship water gauge automatic measurement method based on deep learning, aiming to solve the safety risks and low efficiency of the existing manual measurement method.
In order to achieve the above purpose, the technical scheme of the invention is realized as follows:
The invention provides a ship water gauge automatic measurement method based on deep learning, which comprises the following steps:
S1, acquiring a water gauge video, loading it and performing frame extraction;
S2, training a deep learning segmentation model and a deep learning target detection model on the pictures formed by frame extraction, then extracting frames from a water gauge video captured in real time and running the trained segmentation model on the extracted pictures to obtain segmented binarized grayscale images;
S3, performing edge detection on each binarized grayscale image with the Canny operator to obtain the waterline coordinates, drawing the line on the original picture, and saving the result as a new original picture;
S4, running the deep learning target detection model trained in S2 on the new original picture to obtain a detection result image with a plurality of character bounding boxes;
S5, calculating a graduated scale from the coordinates of the character bounding boxes in the detection result image;
S6, calculating the water gauge value of the single picture from the scale and the waterline coordinates;
S7, looping S2 to S6 to calculate the water gauge values of all frames in turn, sorting them, and taking the mean of the middle third as the final water gauge value of the water gauge video captured in real time.
Further, S1 is implemented as follows:
first, a water gauge video of specified duration is captured by a UAV in autonomous flight and sent to a processing end; the processing end loads the video, extracts frames at a set interval, and saves the pictures with sequential numbers.
Further, S2 specifically comprises the following steps:
S21, annotating the water body in the pictures formed by frame extraction to produce a data set;
S22, training the deep learning segmentation model and the deep learning target detection model on the data set to obtain trained models, and deploying both models on the processing end;
S23, extracting frames from the water gauge video captured in real time and running inference on the extracted pictures to obtain segmented binarized grayscale images.
Further, S5 specifically comprises the following steps:
S51, removing incomplete character bounding boxes from the detection result image by sorting;
S52, taking the center points of the remaining character bounding boxes as scale points, subtracting their y-coordinates in sequence to obtain the number of pixels between each pair of vertically adjacent characters in the picture, and taking the mean as the pixel spacing between two adjacent characters; the y-coordinate is the pixel coordinate in the vertical direction;
S53, from the character spacing in the actual scene, solving the real height h represented by one pixel; this real height h is the graduated scale;
S54, saving the y-coordinate Y of the center point of the whole-meter character box and determining the real height H of that point.
Further, S52 is expressed by the following formula:
n = (1/(N−1)) × Σ_{i=1}^{N−1} (C_y(i+1) − C_yi)
where i and j = i+1 are the sequence numbers of vertically adjacent character bounding boxes, C_yj is the y-coordinate of the center point of the jth character bounding box, C_yi is the y-coordinate of the center point of the ith, N is the total number of character bounding boxes, and n is the number of pixels between two vertically adjacent characters.
Further, S53 is specifically as follows:
in the actual scene, the vertical spacing between two adjacent characters is 0.2 m, so the real height h represented by one pixel in the picture is obtained by the following formula:
h = 0.2/n.
Further, S6 specifically comprises the following steps:
S61, finding the lowest of the remaining character boxes, determining the x-coordinates x_1 and x_2 of its lower-left and lower-right corner points, collecting all waterline coordinate points between x_1 and x_2, and taking the mean of their y-coordinates as the in-image waterline height W_y; the x-coordinate is the horizontal pixel coordinate, perpendicular to the y-axis;
S62, subtracting the in-image waterline height W_y from the y-coordinate Y of the character center point to obtain the pixel-height difference between the waterline and the center of the whole-meter character box, multiplying this difference by the real height h represented by one pixel to obtain the real height distance, and finally subtracting this distance from the real height H of that point to obtain the final real height of the waterline, i.e., the water gauge value v.
Further, S61 is expressed by the following formula:
W_y = (1/M) × Σ_{k=1}^{M} y_k
where M is the number of waterline coordinate points between x_1 and x_2, k is the sequence number of a coordinate point, y_k is the y-coordinate of the kth point, and W_y is the in-image waterline height.
Further, S62 is expressed by the following formula:
v = H − h × (Y − W_y).
Further, S7 specifically comprises the following steps:
S71, looping S2 to S6 to calculate and save the water gauge value of each video frame in turn;
S72, to reduce the influence of sea waves, sorting the water gauge values of all frames and taking the mean of the middle third as the final water gauge value of the video, while saving the two visualized pictures closest to the final value to facilitate subsequent manual calibration.
The beneficial effects of the invention are as follows:
The invention provides a ship water gauge automatic measurement method based on deep learning: the video is loaded and frames are extracted; a deep learning segmentation model is run on the pictures to obtain binarized water body segmentation maps; edge detection with the Canny operator is performed on the binarized images to obtain the waterline coordinates, and the line is drawn on the original picture; a deep learning detection model is run on the pictures to obtain detection result images; a graduated scale is calculated from the coordinates of the character bounding boxes; the water gauge value of each single picture is calculated from the scale and the waterline coordinates; and the water gauge values of the video frames are sorted, with the mean of the middle third taken as the final water gauge value of the video. The invention automates water gauge measurement, can effectively replace manual measurement, reduces measurement time, is safer and more efficient than the traditional manual approach, provides a basis for water gauge weighing, can be further popularized in industries such as port trade and customs, and has considerable economic and social benefits.
Drawings
FIG. 1 is a flow chart of the present invention;
FIG. 2 is a schematic diagram of a water body segmentation result according to an embodiment of the present invention;
fig. 3 is a schematic diagram of a character detection result according to an embodiment of the present invention.
Detailed Description
In order that the invention may be readily understood, a more complete description of the invention will be rendered by reference to the appended drawings. Preferred embodiments of the present invention are shown in the drawings. This invention may, however, be embodied in many other different forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete.
Referring to fig. 1, an embodiment of the present application provides a ship water gauge automatic measurement method based on deep learning, comprising the following steps:
S1, acquiring a water gauge video, loading it and performing frame extraction;
specifically, a water gauge video is first captured by a UAV in autonomous flight and sent to the processing end; the processing end loads the video, takes one frame out of every two, and saves the pictures with sequential numbers.
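The frame-sampling logic above can be sketched as follows; the function name is an illustrative assumption, and a plain list stands in for the decoded video frames (in practice a video reader such as OpenCV's VideoCapture would supply them):

```python
def sample_frames(frames, step=2):
    """Keep every `step`-th frame, paired with a sequential save number.

    `frames` stands in for the decoded video frames; in a real pipeline
    they would come from a video reader, and each kept frame would be
    written to disk under its sequence number.
    """
    return [(seq, frame) for seq, frame in enumerate(frames[::step])]

# Six decoded frames, keeping one of every two:
print(sample_frames(["f0", "f1", "f2", "f3", "f4", "f5"], step=2))
# [(0, 'f0'), (1, 'f2'), (2, 'f4')]
```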
S2, training a deep learning segmentation model and a deep learning target detection model on the pictures formed by frame extraction, then extracting frames from a water gauge video captured in real time and running the trained segmentation model on the extracted pictures to obtain segmented binarized grayscale images;
in some embodiments, S2 specifically comprises the following steps:
s21, annotating the water body in the pictures formed by frame extraction to produce a data set;
s22, training a deep learning segmentation model (U2-Net) and a deep learning target detection model (YOLOv5) on the data set to obtain trained models, and deploying both models on the processing end;
s23, extracting frames from the water gauge video captured in real time and running inference on the extracted pictures to obtain segmented binarized grayscale images. Referring to fig. 2, the left image is the ground-truth label and the right image is the prediction; the white region is the water body and the black region is the hull.
S3, performing edge detection on each binarized grayscale image with the Canny operator to obtain the waterline coordinates, drawing the line on the original picture, and saving the result as a new original picture;
specifically, the Canny operator is a common edge detection algorithm: a suitable edge line is obtained by adjusting the threshold, the coordinates of the edge line are read, the corresponding waterline is drawn on the original picture according to these coordinates, and the result is saved as a new original picture for the subsequent character detection inference.
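Because the segmentation output is already a clean binary mask (white water, black hull, y growing downward), the waterline can be read off directly per column; the NumPy sketch below is a simplified stand-in for the patent's Canny-based edge extraction, with the function name and array convention as illustrative assumptions:

```python
import numpy as np

def waterline_coords(mask):
    """Per-column boundary of the water region in a binary mask.

    mask: 2-D uint8 array, 255 = water (lower part of the image),
    0 = hull; y grows downward from the top-left origin.  The first
    water row in each column approximates the edge that the Canny
    operator would find on the binarized image.
    """
    coords = []
    for x in range(mask.shape[1]):
        rows = np.flatnonzero(mask[:, x] == 255)
        if rows.size:
            coords.append((x, int(rows[0])))
    return coords

mask = np.zeros((6, 4), dtype=np.uint8)
mask[3:, :] = 255  # water occupies the bottom three rows
print(waterline_coords(mask))  # [(0, 3), (1, 3), (2, 3), (3, 3)]
```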
S4, running the deep learning target detection model trained in S2 on the new original picture to obtain a detection result image with a plurality of character bounding boxes;
specifically, the trained deep learning target detection model deployed on the processing end is run on the picture saved in the previous step to obtain a character detection result image; referring to fig. 3, the text on each character box is the predicted category.
S5, calculating a graduated scale from the coordinates of the character bounding boxes in the detection result image;
in some embodiments, S5 specifically comprises the following steps:
s51, removing incomplete character bounding boxes from the detection result image by sorting;
specifically, the scale is determined from the coordinates of the character bounding boxes obtained in the previous step; because a character may be detected even when it is partially cut off, incomplete character bounding boxes are removed by sorting.
s52, because the heights of the character bounding boxes differ, taking the center points of the remaining character bounding boxes as scale points, subtracting their y-coordinates in sequence to obtain the number of pixels between each pair of adjacent characters in the picture, and taking the mean as the pixel spacing between two adjacent characters; the y-coordinate is the pixel coordinate in the vertical direction, i.e., measured downward from the origin at the upper-left corner of fig. 3;
S52 is expressed by the following formula:
n = (1/(N−1)) × Σ_{i=1}^{N−1} (C_y(i+1) − C_yi)
where i and j = i+1 are the sequence numbers of vertically adjacent character bounding boxes, C_yj is the y-coordinate of the center point of the jth character bounding box, C_yi is the y-coordinate of the center point of the ith, N is the total number of character bounding boxes, and n is the number of pixels between two vertically adjacent characters.
s53, from the character spacing in the actual scene, solving the real height h represented by one pixel; this real height h is the graduated scale;
specifically, in the actual scene, the vertical spacing between two adjacent characters is 0.2 m, so the real height h represented by one pixel in the picture is obtained by the following formula:
h = 0.2/n.
s54, saving the y-coordinate Y of the center point of the whole-meter character box and determining the real height H of that point. For example, if the whole-meter value is 15 meters, the real height H of that point is 15.05 meters.
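The scale computation of S52 and S53 reduces to simple arithmetic on the box-center y-coordinates; a minimal sketch, with the function name and sample values as illustrative assumptions:

```python
def pixel_scale(center_ys, spacing_m=0.2):
    """Average pixel gap n between vertically adjacent character centers,
    and the real height h (in meters) represented by one pixel, h = spacing_m / n.
    """
    ys = sorted(center_ys)
    gaps = [b - a for a, b in zip(ys, ys[1:])]  # adjacent differences C_y(i+1) - C_yi
    n = sum(gaps) / len(gaps)
    return n, spacing_m / n

# Four character-box centers spaced 100 px apart:
n, h = pixel_scale([120, 220, 320, 420])
print(n, h)  # n = 100.0 px, h = 0.002 m per pixel
```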
S6, calculating the water gauge value of the single picture from the scale and the waterline coordinates;
in some embodiments, S6 specifically comprises the following steps:
s61, finding the lowest of the remaining character boxes, determining the x-coordinates x_1 and x_2 of its lower-left and lower-right corner points, collecting all waterline coordinate points between x_1 and x_2, and taking the mean of their y-coordinates as the in-image waterline height W_y; the x-coordinate is the horizontal pixel coordinate, perpendicular to the y-axis, i.e., measured rightward from the origin at the upper-left corner of fig. 3;
S61 is expressed by the following formula:
W_y = (1/M) × Σ_{k=1}^{M} y_k
where M is the number of waterline coordinate points between x_1 and x_2, k is the sequence number of a coordinate point, y_k is the y-coordinate of the kth point, and W_y is the in-image waterline height.
s62, subtracting the in-image waterline height W_y from the y-coordinate Y of the character center point to obtain the pixel-height difference between the waterline and the center of the whole-meter character box, multiplying this difference by the real height h represented by one pixel to obtain the real height distance, and finally subtracting this distance from the real height H of that point to obtain the final real height of the waterline, i.e., the water gauge value v.
S62 is expressed by the following formula:
v = H − h × (Y − W_y).
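The W_y averaging of S61 can be sketched directly over the waterline coordinate list; function name and sample coordinates are illustrative:

```python
def waterline_height(coords, x1, x2):
    """Mean y-coordinate of waterline points whose x lies between x1 and x2
    (the lower corners of the lowest character box): the in-image height W_y.
    """
    ys = [y for x, y in coords if x1 <= x <= x2]
    return sum(ys) / len(ys)

# Three waterline points under the lowest character box:
print(waterline_height([(10, 898), (11, 900), (12, 902)], 10, 12))  # 900.0
```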
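Putting the pieces together, the per-picture water gauge value of S62 is one line of arithmetic. The numbers below are illustrative only: H = 15.05 m follows the example in S54, while h and the pixel coordinates are invented for demonstration:

```python
def water_gauge_value(H, h, Y, W_y):
    """v = H - h * (Y - W_y): the real waterline height, from the whole-meter
    character center (real height H, pixel y-coordinate Y), the per-pixel
    scale h, and the in-image waterline height W_y."""
    return H - h * (Y - W_y)

# Illustrative values: H = 15.05 m, h = 0.002 m/px,
# character center at Y = 820 px, waterline at W_y = 800 px.
print(water_gauge_value(15.05, 0.002, 820, 800))  # about 15.01 m
```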
s7, circulating S3 to S6, sequentially calculating the water gauge values of all frames, sequencing the water gauge values of all frames, and taking the average value of the middle third section as the final water gauge value of the water gauge video shot in real time.
In some embodiments, the step S7 specifically includes the following steps:
s71, sequentially calculating the water gauge values of the video frames and storing the water gauge values;
s72, in order to reduce the influence of sea waves, the water gauge values of all frames are ordered, the average value of the middle third section is taken as the final water gauge value of the video, and two visualized pictures closest to the final value are stored at the same time, so that the subsequent manual calibration is facilitated.
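The middle-third averaging of S72 can be sketched as follows; the function name and the fallback for very short inputs are assumptions, not part of the patent:

```python
def final_gauge_value(per_frame_values):
    """Sort per-frame water gauge readings and average the middle third,
    damping wave-induced outliers at both extremes."""
    vals = sorted(per_frame_values)
    n = len(vals)
    middle = vals[n // 3 : 2 * n // 3] or vals  # fall back for tiny inputs
    return sum(middle) / len(middle)

# Nine noisy per-frame readings around 15.1 m:
print(final_gauge_value([15.4, 14.9, 15.0, 15.2, 15.1, 14.8, 15.3, 15.0, 15.1]))
```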
The invention provides a ship water gauge automatic measurement method based on deep learning: the video is loaded and frames are extracted; a deep learning segmentation model is run on the pictures to obtain binarized water body segmentation maps; edge detection with the Canny operator is performed on the binarized images to obtain the waterline coordinates, and the line is drawn on the original picture; a deep learning detection model is run on the pictures to obtain detection result images; a graduated scale is calculated from the coordinates of the character bounding boxes; the water gauge value of each single picture is calculated from the scale and the waterline coordinates; and the water gauge values of the video frames are sorted, with the mean of the middle third taken as the final water gauge value of the video. The invention automates water gauge measurement, can effectively replace manual measurement, reduces measurement time, is safer and more efficient than the traditional manual approach, provides a basis for water gauge weighing, can be further popularized in industries such as port trade and customs, and has considerable economic and social benefits.
The foregoing is merely illustrative of the present invention and does not limit it; any variation or substitution readily conceived by a person skilled in the art falls within the scope of the invention. Moreover, the technical solutions of the embodiments of the present invention may be combined with each other, provided that a person skilled in the art can implement the combination; when technical solutions are contradictory or cannot be implemented, the combination should be considered not to exist and is not within the scope of protection claimed. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (10)

1. A ship water gauge automatic measurement method based on deep learning, characterized by comprising the following steps:
s1, acquiring a water gauge video, loading it and performing frame extraction;
s2, training a deep learning segmentation model and a deep learning target detection model on the pictures formed by frame extraction, then extracting frames from a water gauge video captured in real time and running the trained segmentation model on the extracted pictures to obtain segmented binarized grayscale images;
s3, performing edge detection on each binarized grayscale image with the Canny operator to obtain the waterline coordinates, drawing the line on the original picture, and saving the result as a new original picture;
s4, running the deep learning target detection model trained in S2 on the new original picture to obtain a detection result image with a plurality of character bounding boxes;
s5, calculating a graduated scale from the coordinates of the character bounding boxes in the detection result image;
s6, calculating the water gauge value of the single picture from the scale and the waterline coordinates;
s7, looping S2 to S6 to calculate the water gauge values of all frames in turn, sorting them, and taking the mean of the middle third as the final water gauge value of the water gauge video captured in real time.
2. The ship water gauge automatic measurement method according to claim 1, wherein S1 is implemented as follows:
first, a water gauge video of specified duration is captured by a UAV in autonomous flight and sent to a processing end; the processing end loads the video, extracts frames at a set interval, and saves the pictures with sequential numbers.
3. The ship water gauge automatic measurement method according to claim 1, wherein S2 specifically comprises the following steps:
s21, annotating the water body in the pictures formed by frame extraction to produce a data set;
s22, training the deep learning segmentation model and the deep learning target detection model on the data set to obtain trained models, and deploying both models on the processing end;
s23, extracting frames from the water gauge video captured in real time and running inference on the extracted pictures to obtain segmented binarized grayscale images.
4. The automatic measurement method of a ship water gauge according to claim 1, wherein the step S5 specifically comprises the following steps:
S51, removing incomplete character bounding boxes from the detection result figure by sorting;
S52, taking the center point of each remaining character bounding box as a scale point, subtracting the y coordinates of vertically adjacent scale points in sequence to obtain the number of pixels between each pair of adjacent characters in the figure, and taking the average value as the pixel spacing between two vertically adjacent characters; the y coordinate is the pixel coordinate in the vertical direction;
S53, solving the real height h represented by one pixel value according to the character spacing in the actual scene; this value h serves as the graduated scale;
S54, saving the y coordinate Y of the center point of the whole-meter character frame and determining the real height H of that point.
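Steps S52 and S53 (the formulas of claims 5 and 6) can be sketched as follows, assuming the detector returns character boxes as (x1, y1, x2, y2) pixel tuples and that incomplete boxes have already been removed; the function and variable names are illustrative.

```python
# Sketch of S52-S53: average the vertical gaps between character box
# centers to get the pixel spacing n, then use the 0.2 m character
# pitch to get the real height h represented by one pixel.


def pixel_scale(boxes: list[tuple[float, float, float, float]]) -> tuple[float, float]:
    """Return (n, h): pixel spacing between adjacent characters and
    the real height in meters represented by one pixel."""
    centers_y = sorted((y1 + y2) / 2 for (x1, y1, x2, y2) in boxes)
    N = len(centers_y)
    # Claim 5: n = average of the successive center-to-center gaps
    n = sum(centers_y[i + 1] - centers_y[i] for i in range(N - 1)) / (N - 1)
    # Claim 6: adjacent characters are 0.2 m apart in the real scene
    h = 0.2 / n
    return n, h
```

Sorting by center y makes the successive differences well defined even if the detector emits the boxes in arbitrary order.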
5. The automatic measurement method of a ship water gauge according to claim 4, wherein S52 is expressed by the following formula:
n = (1/(N−1)) · Σ_{i=1}^{N−1} (C_yj − C_yi), with j = i + 1,
wherein i and j are the serial numbers of vertically adjacent character bounding boxes sorted by y coordinate, C_yj is the y coordinate value of the center point of the j-th character bounding box, C_yi is the y coordinate value of the center point of the i-th character bounding box, N is the total number of character bounding boxes, and n is the number of pixels in the interval between two vertically adjacent characters.
6. The automatic measurement method of a ship water gauge according to claim 5, wherein the step S53 is specifically as follows:
In the actual scene, the height of the interval between two vertically adjacent characters is 0.2 m, so the real height h represented by one pixel value in the figure is obtained by the following formula:
h = 0.2/n.
7. the automatic measurement method of a ship water gauge according to claim 6, wherein the step S6 specifically comprises the following steps:
S61, finding the frame coordinates of the lowest remaining character frame and determining the x coordinates x_1 and x_2 of its lower-left and lower-right points; finding all coordinate points of the water line located between x_1 and x_2, and taking the average value of the y coordinates of these points as the water line height W_y in the figure; the x coordinate is the horizontal pixel coordinate, perpendicular to the y axis;
S62, taking the difference of the height pixel values between the water line height W_y and the y coordinate Y of the center point of the whole-meter character frame, multiplying this difference by the real height h represented by one pixel value to obtain the real height distance, and finally subtracting the real height distance from the real height H of that point to obtain the final real height of the water line, namely the water gauge value v.
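Steps S61 and S62 (the formulas of claims 8 and 9) can be sketched as follows, assuming the waterline is available as a list of (x, y) pixel points and that pixel y grows downward, so the waterline lies at a larger y than the character center; all names are illustrative.

```python
# Sketch of S61-S62: average the waterline y coordinates between the
# lowest character frame's x_1 and x_2 to get W_y, then convert the
# pixel gap to the water gauge value v.


def water_gauge_value(waterline: list[tuple[float, float]],
                      x1: float, x2: float,
                      Y: float, H: float, h: float) -> float:
    """Y, H: center y coordinate and real height of the whole-meter
    character frame; h: real height (m) of one pixel."""
    ys = [y for (x, y) in waterline if x1 <= x <= x2]
    W_y = sum(ys) / len(ys)      # claim 8: W_y = (1/M) * sum of y_k
    return H - (W_y - Y) * h     # claim 9: v = H - (W_y - Y) * h
```

Restricting the average to points between x_1 and x_2 keeps the waterline estimate local to the gauge column rather than the whole hull.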
8. The automatic measurement method of a ship water gauge according to claim 7, wherein S61 is expressed by the following formula:
W_y = (1/M) · Σ_{k=1}^{M} y_k,
wherein M is the number of coordinate points of the water line between the coordinates x_1 and x_2, k is the serial number of a coordinate point, y_k is the y coordinate value of the k-th coordinate point, and W_y is the water line height in the figure.
9. The automatic measurement method of a ship water gauge according to claim 8, wherein S62 is expressed by the following formula:
v = H − (W_y − Y) × h,
wherein v is the water gauge value, H is the real height of the center point of the whole-meter character frame, Y is the y coordinate of that center point, W_y is the water line height in the figure, and h is the real height represented by one pixel value; since pixel y coordinates increase downward, W_y is greater than Y.
10. the automatic measurement method of a ship water gauge according to claim 1, wherein the step S7 specifically comprises the following steps:
S71, cycling through S2 to S6, calculating the water gauge value of each video frame in turn and saving it;
S72, in order to reduce the influence of sea waves, sorting the water gauge values of all frames, taking the average value of the middle third of the sorted values as the final water gauge value of the video, and simultaneously saving the two visualized pictures closest to the final value to facilitate subsequent manual calibration.
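The S72 aggregation can be sketched as a middle-third trimmed mean; the fallback for very short lists is an assumption for illustration, since the claim presumes enough frames are available.

```python
# Sketch of S72: sort the per-frame water gauge values and average the
# middle third, suppressing the wave-induced extremes at both ends.


def final_water_gauge(values: list[float]) -> float:
    """Trimmed mean over the middle third of the sorted values."""
    s = sorted(values)
    third = len(s) // 3
    mid = s[third: len(s) - third] or s  # fall back to all values if too few
    return sum(mid) / len(mid)
```

Sorting first means waves that push the reading both up and down are discarded symmetrically before averaging.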
CN202311754004.4A 2023-12-19 2023-12-19 Ship water gauge automatic measurement method based on deep learning Pending CN117496414A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311754004.4A CN117496414A (en) 2023-12-19 2023-12-19 Ship water gauge automatic measurement method based on deep learning

Publications (1)

Publication Number Publication Date
CN117496414A true CN117496414A (en) 2024-02-02

Family

ID=89669272


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination