CN112132884B - Sea cucumber length measurement method and system based on parallel laser and semantic segmentation - Google Patents

Sea cucumber length measurement method and system based on parallel laser and semantic segmentation

Info

Publication number
CN112132884B
CN112132884B (application CN202011054510.9A)
Authority
CN
China
Prior art keywords
sea cucumber
laser
semantic segmentation
image
pixel distance
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011054510.9A
Other languages
Chinese (zh)
Other versions
CN112132884A (en)
Inventor
俞智斌
张心亮
曾慧敏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ocean University of China
Original Assignee
Ocean University of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ocean University of China filed Critical Ocean University of China
Priority to CN202011054510.9A
Publication of CN112132884A
Application granted
Publication of CN112132884B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G06T7/12 Edge-based segmentation
    • G06T7/60 Analysis of geometric attributes
    • G06T7/62 Analysis of geometric attributes of area, perimeter, diameter or volume
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G06T2207/20084 Artificial neural networks [ANN]
    • G06T2207/20112 Image segmentation details
    • G06T2207/20132 Image cropping
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A40/00 Adaptation technologies in agriculture, forestry, livestock or agroalimentary production
    • Y02A40/80 Adaptation technologies in agriculture, forestry, livestock or agroalimentary production in fisheries management
    • Y02A40/81 Aquaculture, e.g. of fish


Abstract

The invention relates to the technical field of sea cucumber length measurement, and discloses a sea cucumber length measurement method and system based on parallel laser and semantic segmentation. The method comprises the following steps: emitting two parallel visible laser beams forward, forming laser spots on the surface of the object ahead, while a camera captures the frontal image in real time; cropping the frontal image to obtain the pixel distance D_p between the laser spots and the central region between them; performing semantic segmentation on the central region to obtain a segmented image containing the approximate contour of the sea cucumber, determining the axis direction of the sea cucumber from the segmented image, and obtaining the pixel distance L_p of the sea cucumber along the axis direction; and calculating the real length of the sea cucumber from the pixel distance D_p between the laser spots, the actual distance D_r between the laser beams, and the axial pixel distance L_p:
L_r = (L_p × D_r) / D_p
The invention provides a non-contact sea cucumber measurement system and method that use a pair of parallel lasers as a distance reference and perform semantic segmentation with a Mask R-CNN-based model, making the measurement process fast, convenient and low-cost.

Description

Sea cucumber length measurement method and system based on parallel laser and semantic segmentation
Technical Field
The invention relates to the technical field of sea cucumber length measurement, in particular to a sea cucumber length measurement method and system based on parallel laser and semantic segmentation.
Background
Sea cucumber measurement is a basic requirement of aquaculture. In-situ non-contact measurement greatly reduces both cost and harm to the animals, making this technique an ideal choice for aquaculture monitoring. Developing an in-situ non-contact measurement system faces two challenges. One challenge is the lack of a reference object under non-contact conditions. The other is how to detect and locate sea cucumbers in an underwater environment.
Disclosure of Invention
The invention provides a sea cucumber length measurement method and system based on parallel laser and semantic segmentation, which solve the technical problem of detecting and locating sea cucumbers without contact in an underwater environment, so that their length can be measured more simply, rapidly and accurately.
In order to solve the above technical problem, the invention provides a sea cucumber length measurement method based on parallel laser and semantic segmentation, comprising the following steps:
S1, emitting two parallel visible laser beams forward, forming laser spots on the surface of the object ahead, while a camera captures the frontal image in real time;
S2, cropping the frontal image to obtain the pixel distance D_p between the laser spots and the central region between them;
S3, performing semantic segmentation on the central region to obtain a segmented image containing the approximate contour of the sea cucumber, determining the axis direction of the sea cucumber from the segmented image, and obtaining the pixel distance L_p of the sea cucumber along the axis direction;
S4, calculating the real length of the sea cucumber from the pixel distance D_p between the laser spots, the actual distance D_r between the laser beams, and the axial pixel distance L_p:
L_r = (L_p × D_r) / D_p
Further, step S2 specifically comprises the steps of:
S21, separating the G channel of the frontal image and converting it into a binary image with a preset threshold;
S22, taking the contours with the largest and second-largest areas in the binary image as the laser spots, determining the centers of the two laser spots, and calculating the distance between the two centers, i.e. the pixel distance D_p between the laser spots;
S23, determining as the cropping range a rectangle whose length is the pixel distance D_p between the laser spots and whose width is a preset height, and cropping the frontal image to obtain the central region between the laser spots.
Further, in the step S21, the setting range of the preset threshold is 250±3; in the step S23, the preset height is 300±50 pixels.
Further, in the step S21, the preset threshold is 250; in the step S23, the preset height is 300 pixels.
Further, in the step S22, the area of each contour is calculated using the contourArea () function in OpenCV.
Further, in the step S1, the emitted laser beam is green light.
Further, in step S3, a deep learning network model based on Mask R-CNN is adopted to perform semantic segmentation on the central region; the minAreaRect() function in OpenCV is used to obtain the minimum circumscribed rectangle of the sea cucumber contour in the segmented image, and the axis direction of the sea cucumber is then determined from this rectangle.
Further, the deep learning network model is trained with Res-50-FPN as the backbone network, a background IoU threshold of 0.3, a foreground IoU threshold of 0.7, the default weight decay strategy of the Mask R-CNN model, and NMS in the ROI head of the Mask R-CNN model, for several hundred thousand iterations.
Further, training uses a single 1080Ti GPU, iterating a total of 900,000 times with the batch size set to 2.
Based on the above method, the invention also provides a sea cucumber length measurement system based on parallel laser and semantic segmentation, comprising a local cropping module, a segmentation module and a calculation module for executing steps S2, S3 and S4 of the method, respectively.
The sea cucumber length measurement method and system based on parallel laser and semantic segmentation provided by the invention have the following beneficial effects:
(1) Two parallel laser beams provide a distance reference, avoiding contact with the sea cucumber;
(2) A deep learning network model is built on Mask R-CNN and trained to detect and segment sea cucumbers in the central region, so the sea cucumber contours in the resulting segmented images are more faithful;
(3) The real length of the sea cucumber is calculated as L_r = (L_p × D_r) / D_p from the pixel distance D_p between the laser spots, the actual distance D_r between the laser beams, and the axial pixel distance L_p, with good accuracy;
(4) The length of the sea cucumber is estimated at relatively low cost, and the accuracy of the results has been verified by extensive experiments.
Drawings
FIG. 1 is a 3D diagram of a laser frame used in a sea cucumber length measurement system based on parallel laser and semantic segmentation provided by an embodiment of the invention;
FIG. 2 is a block diagram of a sea cucumber length measurement system based on parallel laser and semantic segmentation provided by an embodiment of the invention;
FIG. 3 is a schematic diagram of the image processing of the local cropping module in the sea cucumber length measurement system based on parallel laser and semantic segmentation provided by an embodiment of the invention;
FIG. 4 is a schematic diagram of a labeling process of Mask-RCNN data sets used in the sea cucumber length measurement system based on parallel laser and semantic segmentation according to the embodiment of the invention;
FIG. 5 is a comparison of segmentation results of the sea cucumber length measurement system and method based on parallel laser and semantic segmentation provided by an embodiment of the invention;
fig. 6 is a schematic diagram of an image processing process of a sea cucumber length measurement system and method based on parallel laser and semantic segmentation according to an embodiment of the present invention.
Detailed Description
The following examples are given for the purpose of illustration only and are not to be construed as limiting the invention; the drawings are included for reference and description only and do not limit the scope of the invention, since many variations are possible without departing from its spirit and scope.
Sea cucumber species have attracted wide attention because of their great economic value. To protect the ecological environment, many countries use a minimum legal fishing size as a means of regulating the sea cucumber fishing industry. Minimum fishing size limits are also required to meet the economic demands of the fishery; to some extent, the size of a sea cucumber determines its economic value. These demands have drawn attention to the measurement of sea cucumber length. Since 1991, researchers have noted the benefits and difficulties of measuring the size of live sea cucumbers. With the increasing maturity of deep learning technology, vision-based sea cucumber measurement techniques have also developed, and multiple sea cucumber tracking and localization methods have been proposed, benefiting from in-situ datasets. Recently, with the rapid development of deep learning, in-situ sea cucumber detection methods have also been proposed. This embodiment aims to provide an in-situ sea cucumber length measurement method and system based on deep learning and a pair of parallel lasers.
Underwater laser processing is widely used in underwater engineering, including cutting, welding, plating, and so on. Underwater laser systems have the advantages of light weight, small focal spot, and the like, so many researchers have applied lasers to underwater 3D reconstruction and laser imaging. On the other hand, scattering and attenuation of light make it difficult to apply underwater laser systems to long-distance detection tasks. Recovering and enhancing source signals from turbid media is another active area of research.
Owing to the numerous achievements of deep learning in the computer field, this embodiment adopts deep learning to locate and segment sea cucumbers. Unlike computationally expensive and time-consuming underwater laser processing methods, this embodiment proposes a simple and fast measurement system based on a pair of portable lasers.
To build an underwater in-situ length measurement device, this embodiment requires a reference for measuring the length of the sea cucumber, and the underwater reference should offer sufficient accuracy and portability. In view of this, this embodiment employs a pair of parallel green laser beams as the reference standard. To attach the lasers to the underwater robot, a clamping bracket is used to hold them; a 3D view of the laser bracket is shown in fig. 1. After the lasers are fixed, the seams are sealed with AB glue, and the bracket with the lasers is then fixed onto the underwater robot with screws.
As shown in fig. 2, the laser and underwater robot based measurement system mainly consists of 3 parts: a local cropping module, a segmentation module and a length calculation module.
(1) Local cropping module
The local cropping module is responsible for cropping out the central region between the laser spots and calculating the pixel distance between them. In this system, cropping is an essential step: it selects the pixels between the laser beams and reduces unnecessary computation in the subsequent sea cucumber segmentation stage.
After capturing a real-time image as in fig. 3(a), the G channel is separated and converted into a binary image with a threshold of 250 (adjustable within the range 250±3). From the binary image, a series of coordinates approximating the outline of each green laser spot can be obtained. Because of background noise and interference from other objects, these coordinates may include the approximate contours of two or more objects, as shown in fig. 3(b). To solve this problem, the contourArea() function in OpenCV is first called to calculate the area of each contour; since the contours of impurities are relatively small, the contours are sorted by area and the two with the largest areas are selected as the laser spots, as shown in fig. 3(c).
From the two selected contours, the centers of the two laser spots can be found, and the distance between these two points, the pixel distance D_p between the laser spots, is calculated, as shown in fig. 3(d). Once the lasers are fixed on the underwater robot, the true distance between the laser beams can be measured. Then the rectangle shown in fig. 1 is used to crop out the region between the two laser spots, and the cropped image is sent to the segmentation module. The width of the rectangle equals the distance between the two laser spots, and its height is fixed at 300 pixels (adjustable within the range 300±50).
(2) Segmentation module
This module aims to obtain the axial length of the sea cucumber by counting the pixels along the sea cucumber's axis in the image. To this end, the sea cucumber is first segmented from the background to obtain a segmented image containing its approximate contour. The model used in the segmentation module is a deep learning network built for this embodiment on Mask R-CNN, proposed by He et al. [1] in 2017. The segmentation result for each sample consists of a series of pixel coordinates that approximate the sea cucumber's outline and form a closed curve. Then the minAreaRect() function in OpenCV is used to obtain the minimum circumscribed rectangle of the contour and determine the axis direction of the sea cucumber, from which the pixel distance L_p along the axis can be determined.
Training process of the deep learning network model: this embodiment fine-tunes a Mask R-CNN model [1] pretrained on the COCO2014 dataset [2] to adapt it to the sea cucumber segmentation task. The backbone network of the model is Res-50-FPN [3]. The background IoU (Intersection over Union) and foreground IoU thresholds are 0.3 and 0.7, respectively. The default weight decay strategy of the Mask R-CNN model and NMS in the ROI head are adopted. Training uses a single 1080Ti GPU; the network model is trained for a total of 900,000 iterations with the batch size set to 2.
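The training recipe above (Res-50-FPN backbone, 0.3/0.7 background/foreground IoU, batch size 2, 900,000 iterations) could be expressed as a detectron2-style configuration. The patent does not name a training framework, so every config key below is an assumption based on detectron2's Mask R-CNN implementation; treat this as an illustrative fragment, not the authors' actual setup.

```python
# Illustrative only: the patent does not name a framework; keys assume
# detectron2's config system for Mask R-CNN with a ResNet-50-FPN backbone.
from detectron2 import model_zoo
from detectron2.config import get_cfg

cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file(
    "COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_1x.yaml"))
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url(
    "COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_1x.yaml")  # COCO pretrain
cfg.MODEL.RPN.IOU_THRESHOLDS = [0.3, 0.7]  # background / foreground IoU
cfg.MODEL.ROI_HEADS.NUM_CLASSES = 1        # a single "sea cucumber" class
cfg.SOLVER.IMS_PER_BATCH = 2               # batch size 2, as in the patent
cfg.SOLVER.MAX_ITER = 900_000              # 900,000 iterations
```

The weight decay and ROI-head NMS settings are left at the framework defaults, mirroring the patent's statement that the Mask R-CNN defaults were used.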
(3) Length calculation module
With the true distance D_r between the laser beams, the pixel distance D_p between the laser spots, and the pixel distance L_p along the sea cucumber's axis all known, the real length L_r of the sea cucumber can be calculated. The principle is that the ratio of the axial pixel distance of the sea cucumber to its real length equals the ratio of the pixel distance between the laser spots to the real distance between the laser beams, namely:
L_p / L_r = D_p / D_r
Therefore, the real length of the sea cucumber is:
L_r = (L_p × D_r) / D_p
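The proportional relation reduces to a one-line helper; the numbers in the usage note are invented for illustration, assuming millimetre units for D_r:

```python
def real_length(l_p, d_r, d_p):
    """Real length L_r from the proportion L_p / L_r = D_p / D_r,
    i.e. L_r = L_p * D_r / D_p."""
    return l_p * d_r / d_p
```

For example, if the laser beams are D_r = 150 mm apart and appear D_p = 600 px apart while the sea cucumber spans L_p = 480 px along its axis, the estimated real length is 480 × 150 / 600 = 120 mm.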
based on the measurement system, the embodiment also provides a sea cucumber length measurement method based on parallel laser and semantic segmentation, which comprises the following steps:
s1, emitting two parallel green laser beams to irradiate the front, forming laser points on the surface of a front object, and enabling a camera to capture a front image in real time;
s2, cutting the front image to obtain a pixel distance D between laser points p And a central region of the laser spot;
s3, performing semantic segmentation on the central region by adopting a deep learning network model based on a Mask R-CNN model to obtain a segmented image containing the approximate outline of the sea cucumber, acquiring an external rectangle of the outline of the sea cucumber in the segmented image by adopting a minArea () function in OpenCV, and further determining the axis direction of the sea cucumber by the external rectangle to obtain the pixel distance L of the sea cucumber in the axis direction p
S4, according to the pixel distance D between the laser points p Actual distance D between laser beams r Pixel distance L in sea cucumber axis direction p Calculating the real length of sea cucumber
L_r = (L_p × D_r) / D_p
Further, step S2 specifically comprises the steps of:
S21, separating the G channel of the frontal image and converting it into a binary image with a preset threshold;
S22, calculating the area of each contour in the binary image with the contourArea() function in OpenCV, taking the contours with the largest and second-largest areas as the laser spots, determining the centers of the two laser spots, and calculating the distance between the two centers, i.e. the pixel distance D_p between the laser spots;
S23, determining as the cropping range a rectangle whose length is the pixel distance D_p between the laser spots and whose width is a preset height, and cropping the frontal image to obtain the central region between the laser spots.
Further, in step S21, the preset threshold is set within the range 250±3, preferably 250; in step S23, the preset height is 300±50 pixels, preferably 300 pixels.
In step S3, the deep learning network model is trained with Res-50-FPN as the backbone network, a background IoU threshold of 0.3, a foreground IoU threshold of 0.7, the default weight decay strategy of the Mask R-CNN model, and NMS in the Mask R-CNN ROI head; training uses a single 1080Ti GPU, iterating a total of 900,000 times with the batch size set to 2.
In step S3, the minAreaRect() function in OpenCV is used to obtain the minimum circumscribed rectangle of the sea cucumber contour in the segmented image, and the axis direction of the sea cucumber is then determined from this rectangle.
In addition, this embodiment selects green for the parallel laser beams because, after the G channel of the frontal image is separated, the green laser light has higher brightness than the rest of the image, so the preset threshold retains only the brightest contours, facilitating determination of the pixel distance between the laser spots.
It should also be noted that in other embodiments, in a system or method:
the deep learning network model may be based on other variants of Mask R-CNN, other background/foreground IoU thresholds than 0.3/0.7 may be adopted, other GPUs may be used for training, and fewer or more iterations may be run; all of these are appropriate variations based on the disclosure of this embodiment;
the contourArea() function in OpenCV, used to calculate the area of each contour in the binary image, may be replaced with other functions that compute the area of a contour region;
the minAreaRect() function in OpenCV, used to obtain the circumscribed rectangle of the sea cucumber contour in the segmented image, may be replaced with other functions that compute a bounding rectangle of a contour region;
the selected laser beams may be light of other colors, i.e. light clearly distinguishable from the rest of the image after the R, G or B channel of the frontal image is separated, with the position and contour of the laser spots then determined by setting an appropriate threshold.
The sea cucumber length measurement method and system based on parallel laser and semantic segmentation provided by the invention have the following beneficial effects:
(1) Two parallel laser beams provide a distance reference, avoiding contact with the sea cucumber;
(2) A deep learning network model is built on Mask R-CNN and trained to detect and segment sea cucumbers in the central region, so the sea cucumber contours in the resulting segmented images are more faithful;
(3) The real length of the sea cucumber is calculated as L_r = (L_p × D_r) / D_p from the pixel distance D_p between the laser spots, the actual distance D_r between the laser beams, and the axial pixel distance L_p, with good accuracy;
(4) The length of the sea cucumber is estimated at relatively low cost, and the accuracy of the results has been verified by extensive experiments.
The system and method described in this embodiment are experimentally verified as follows.
(1) Mask R-CNN dataset
Since there is no public sea cucumber image segmentation dataset, this embodiment manually labeled 193 images in COCO2014 format and divided them into a validation set of 20 images and a training set of 173 images. The dataset has only two label categories: sea cucumber and background. Of the 193 images, 29 come from the URPC competition [4] and 164 were taken in this embodiment's experimental pool. Of the 20 validation images, 9 come from the URPC competition and 11 from the experimental pool. Two rules were followed when labeling the dataset: a) the outline of the sea cucumber follows its edge as closely as possible; b) the contour does not include the sea cucumber's spikes. A visual illustration is given in fig. 4, where the left side (a) was shot in the experimental pool and the right side (b) comes from the URPC competition. With the IoU of Mask R-CNN set to 0.5, the image segmentation accuracy is 0.84 and the bounding-box accuracy is 0.832.
(2) Dataset of the measurement system
To evaluate the measurement system and method, this embodiment collected 7 video segments in the experimental pool and cut them into frames as the dataset. However, not all frames are usable, because many frames contain no sea cucumber, the sea cucumber is not between the laser spots, or the sea cucumber cannot be detected by Mask R-CNN. Therefore, this embodiment extracts as the final dataset only the images that contain a sea cucumber located between the laser spots and detectable by Mask R-CNN. Table 1 illustrates how valid frames are extracted. Three principles are followed: first, the image contains a sea cucumber; second, the sea cucumber lies between the laser spots; finally, the image cropped by the region cropping module can be detected by Mask R-CNN. Only when all three conditions are met is the frame considered a valid frame.
TABLE 1 Examples of valid frames
Rule                        Frame 1   Frame 2   Frame 3   Frame 4   ……
Contains sea cucumber       Yes       Yes       Yes       No        ……
Between the laser spots     Yes       Yes       No        No        ……
Detectable by Mask R-CNN    Yes       No        No        No        ……
Valid frame                 Yes       No        No        No        ……
For each video, this embodiment classifies the extracted valid frames into 3 types, far, middle and near, according to the distance between the ROV (underwater robot) and the sea cucumber. The classes are defined by the ratio of the pixel distance D_p between the laser spots to the image width; the video resolution is 1920 × 1080, so the ratio is D_p / 1920. The three classes are: far if D_p / 1920 ≤ 0.5; middle if 0.5 < D_p / 1920 ≤ 0.7; near if 0.7 < D_p / 1920 ≤ 1. The real length of the sea cucumber, the number of valid frames and the distance class for each video are recorded in table 2.
TABLE 2 data set of measurement system
[Table 2 is reproduced as an image in the original publication.]
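The far/middle/near rule is a simple threshold on D_p / 1920 and can be sketched as follows (a hypothetical helper, not from the patent):

```python
def classify_distance(d_p, image_width=1920):
    """Classify a valid frame as far / middle / near from the ratio of
    the laser-spot pixel distance D_p to the image width."""
    ratio = d_p / image_width
    if ratio <= 0.5:
        return "far"
    elif ratio <= 0.7:
        return "middle"
    return "near"
```

For a 1920-pixel-wide frame, spots 900 px apart classify as far, 1200 px as middle and 1500 px as near.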
(3) Method comparison of the segmentation module
This embodiment uses the average error and variance as evaluation indices. In the segmentation module, two methods of obtaining the pixel length L_p of the sea cucumber were tried: (1) using the convex hull function in OpenCV to compute the distance between the two farthest points of the sea cucumber contour as the pixel length; (2) using the minAreaRect() function in OpenCV to obtain the minimum circumscribed rectangle of the sea cucumber contour and taking the rectangle's length as the pixel length. The average result, average error and average variance are first calculated for each video segment; the number of valid frames of each video is then used as a weight to obtain the weighted average error and weighted average variance of the two methods. The experimental results are recorded in table 3 and show that method (2) is significantly more accurate and stable than method (1).
TABLE 3 Comparison of segmentation-module methods
[Table 3 is reproduced as an image in the original publication.]
Note: the superscript '1' denotes method (1) of the segmentation module and the superscript '2' denotes method (2); '*' indicates that only the measurements of valid frames are counted, as in tables 4 and 5.
(4) Method comparison of the region cropping module
In the region cropping module, this embodiment also tried another, simpler and coarser method of locating the laser spots. OpenCV provides a minMaxLoc() function, which returns the position of the pixel with the maximum brightness. The method has only three steps: 1) obtain the brightest point with minMaxLoc(); 2) set the 50 pixels around that point to zero; 3) find the second-brightest pixel with minMaxLoc(). Table 4 was obtained with this method. One benefit of this approach is that more valid frames are obtained, but the experimental results show that although the average error of this cropping method is smaller than in table 3, the variance is larger. In practice, this cropping method was also found to be quite random and unstable. Table 4 still shows that method (2) of the segmentation module is better than method (1), i.e. both error and variance are lower.
TABLE 4 Comparison of segmentation-module methods based on the alternative laser-spot localization method
[Table 4 is reproduced as an image in the original publication.]
(5) Results and discussion
The visual outputs of the modules of the measurement system are shown in fig. 6. First, the ROV captures an image of the sea cucumber, fig. 6(a). The region cropping module then locates the laser spots and crops the middle portion between them to obtain fig. 6(b). The cropped image, fig. 6(c), is used as input to the segmentation module to obtain the semantic segmentation result of the sea cucumber, fig. 6(d). Finally, the length calculation module computes the actual length from the segmented contour of the sea cucumber, as shown in fig. 6(e).
In practice, this embodiment found that because the underwater environment is relatively complex, it is difficult for the ROV to maintain a stable attitude, so the distance between the sea cucumber and the ROV varies constantly, and different distances can produce different measurements. Therefore, the valid frames are classified into the far, middle and near classes. The experimental results are recorded in table 5. They show that the influence of distance on the error is not very large, but the closer the distance, the smaller the variance, which means close-range measurements are more stable.
TABLE 5 measurement results at different distances
Distance   Error 1   Variance 1   Error 2   Variance 2   Valid frames
Far        11.09%    78.03        8.42%     83.49        361
Middle     11.31%    107.71       9.45%     101.724      1520
Near       11.18%    65.312       9.59%     64.516       2599
In summary, this embodiment provides a non-contact sea cucumber measurement system and method that performs semantic segmentation with a Mask R-CNN-based model with the aid of a pair of portable parallel lasers, making the measurement process fast, convenient and inexpensive. Experimental results in an underwater environment show that the method and system are robust in length measurement and perform well in detection and segmentation.
References:
[1] K. He, G. Gkioxari, P. Dollár, and R. Girshick. Mask R-CNN. In Proceedings of the IEEE International Conference on Computer Vision, pages 2961–2969, 2017.
[2] T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick. Microsoft COCO: Common objects in context. In European Conference on Computer Vision, pages 740–755. Springer, 2014.
[3] T.-Y. Lin, P. Dollár, R. Girshick, K. He, B. Hariharan, and S. Belongie. Feature pyramid networks for object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2117–2125, 2017.
[4] URPC competition. http://en.cnurpc.org.
The above examples are preferred embodiments of the present invention, but the embodiments of the present invention are not limited thereto; any other change, modification, substitution, combination or simplification made without departing from the spirit and principle of the present invention is an equivalent replacement and falls within the protection scope of the present invention.

Claims (3)

1. The sea cucumber length measurement method based on parallel laser and semantic segmentation is characterized by comprising the following steps:
S1, emitting two parallel visible laser beams forward so that laser spots are formed on the surface of the object ahead, while a camera captures the forward image in real time; in step S1, the emitted laser beams are green light;
S2, calculating the pixel distance D_p between the laser spots and cropping the forward image to obtain the central region of the laser spots; step S2 specifically comprises the steps of:
S21, separating the G channel of the forward image and converting it into a binary image at a preset threshold, the preset threshold being set in the range 250±3;
S22, taking the contours with the largest and second-largest areas in the binary image as the laser spots, determining the centers of the two spots, and calculating the distance between the two centers, i.e. the pixel distance D_p between the laser spots; the area of each contour is calculated with the contourArea() function in OpenCV;
S23, determining a rectangle as the cropping range, with the pixel distance D_p between the laser spots as its length and a preset height as its width, and cropping the forward image to obtain the central region of the laser spots, the preset height being 300±50 pixels;
S3, performing semantic segmentation on the central region to obtain a segmented image containing the approximate contour of the sea cucumber, determining the axis direction of the sea cucumber from the segmented image, and obtaining the pixel distance L_p of the sea cucumber along the axis direction; in step S3, a deep learning network model based on the Mask R-CNN model performs the semantic segmentation; the circumscribed rectangle of the sea cucumber contour in the segmented image is obtained with the minAreaRect() function in OpenCV, and the axis direction of the sea cucumber is determined from this rectangle; the deep learning network model is trained with Res-50-FPN as the backbone network and a weight-decay strategy, using an intersection-over-union of 0.3 for background and 0.7 for foreground as the NMS thresholds in the Mask R-CNN ROI stage, iterated for hundreds of thousands of times; training is performed on a single 1080Ti GPU, 900,000 iterations in total, with a batch size of 2;
S4, according to the pixel distance D_p between the laser spots, the actual distance D_r between the laser beams, and the pixel distance L_p along the sea cucumber axis direction, calculating the real length L_r of the sea cucumber as

    L_r = L_p × D_r / D_p.
2. The sea cucumber length measurement method based on parallel laser and semantic segmentation according to claim 1, characterized in that: in step S21, the preset threshold is 250; in step S23, the preset height is 300 pixels.
3. A sea cucumber length measurement system based on parallel laser and semantic segmentation, characterized by comprising a region clipping module, a segmentation module and a length calculation module that execute steps S2, S3 and S4, respectively, of the method according to any one of claims 1-2.

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011054510.9A CN112132884B (en) 2020-09-29 2020-09-29 Sea cucumber length measurement method and system based on parallel laser and semantic segmentation

Publications (2)

Publication Number Publication Date
CN112132884A CN112132884A (en) 2020-12-25
CN112132884B true CN112132884B (en) 2023-05-05

Family

ID=73843222

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011054510.9A Active CN112132884B (en) 2020-09-29 2020-09-29 Sea cucumber length measurement method and system based on parallel laser and semantic segmentation

Country Status (1)

Country Link
CN (1) CN112132884B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113344936A (en) * 2021-07-02 2021-09-03 吉林农业大学 Soil nematode image segmentation and width measurement method based on deep learning
CN117854116B (en) * 2024-03-08 2024-05-17 中国海洋大学 Sea cucumber in-situ length measurement method based on Bezier curve

Citations (1)

Publication number Priority date Publication date Assignee Title
CN110738207A (en) * 2019-09-10 2020-01-31 西南交通大学 character detection method for fusing character area edge information in character image

Family Cites Families (5)

Publication number Priority date Publication date Assignee Title
CN107928675A (en) * 2017-11-22 2018-04-20 王华锋 A kind of trunk measuring method being combined based on deep learning and red dot laser
CN110147763B (en) * 2019-05-20 2023-02-24 哈尔滨工业大学 Video semantic segmentation method based on convolutional neural network
CN110738642A (en) * 2019-10-08 2020-01-31 福建船政交通职业学院 Mask R-CNN-based reinforced concrete crack identification and measurement method and storage medium
CN110838100A (en) * 2019-10-11 2020-02-25 浙江大学 Colonoscope pathological section screening and segmenting system based on sliding window
CN111652179B (en) * 2020-06-15 2024-01-09 东风汽车股份有限公司 Semantic high-precision map construction and positioning method based on point-line feature fusion laser




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant