CN112598660A - Automatic detection method for pulp cargo quantity in wharf loading and unloading process - Google Patents


Info

Publication number
CN112598660A
CN112598660A (application CN202011590292.0A)
Authority
CN
China
Prior art keywords
pulp
line segment
candidate
sample
samples
Prior art date
Legal status
Granted
Application number
CN202011590292.0A
Other languages
Chinese (zh)
Other versions
CN112598660B (en)
Inventor
耿增涛
李全喜
张子青
李宁孝
陆兵
乔善青
石雪琳
王国栋
李新照
郭振
Current Assignee
Qingdao Ocean Shipping Tally Co ltd
Port Of Qingdao Technology Co ltd
Qingdao University
Original Assignee
Qingdao Ocean Shipping Tally Co ltd
Port Of Qingdao Technology Co ltd
Qingdao University
Priority date
Filing date
Publication date
Application filed by Qingdao Ocean Shipping Tally Co ltd, Port Of Qingdao Technology Co ltd and Qingdao University
Priority to CN202011590292.0A
Publication of CN112598660A
Application granted
Publication of CN112598660B
Status: Active

Classifications

    • G06T 7/0002 Image analysis; inspection of images, e.g. flaw detection
    • G06N 3/045 Neural networks; architectures; combinations of networks
    • G06T 7/194 Segmentation; edge detection involving foreground-background segmentation
    • G06T 7/62 Analysis of geometric attributes of area, perimeter, diameter or volume
    • G06T 7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G06T 2207/10016 Image acquisition modality: video; image sequence
    • G06T 2207/30242 Subject of image: counting objects in image


Abstract

The invention discloses a method for automatically detecting the quantity of pulp cargo during loading and unloading at a wharf, comprising the following steps. S1: extract video images, at a fixed frequency, from a real-time video stream of a transport vehicle loaded with pulp cargo; S2: process each extracted video image to segment a feature map of the pulp-cargo region; S3: process the feature map to obtain candidate connection points; S4: extract a series of candidate line-segment samples from the candidate connection points; S5: filter the candidate line-segment samples; S6: count the remaining line-segment samples to obtain the quantity of pulp cargo. The invention detects the quantity of pulp cargo automatically, reducing the workload and improving detection efficiency; it requires no manual participation, is highly intelligent, and improves detection accuracy.

Description

Automatic detection method for pulp cargo quantity in wharf loading and unloading process
Technical Field
The invention relates to the technical fields of computer vision and deep learning, and in particular to a method for automatically detecting the quantity of pulp cargo during loading and unloading at a wharf.
Background
Wharf business is currently flourishing, and pulp cargo must be moved by transport vehicles (e.g. flatbed trucks) during loading and unloading at the wharf. Pulp cargo is bound bale by bale: during loading and unloading, each bale must be checked manually, the count recorded on paper, and only then can the truck be released.
This manual counting carries a heavy workload, raises labor costs, is inefficient, and is prone to miscounts caused by operator fatigue or negligence.
An intelligent, automated method for detecting the quantity of pulp cargo is therefore needed, one that reduces the workload while improving counting efficiency and accuracy.
Disclosure of Invention
The embodiments of the invention provide a method for automatically detecting the quantity of pulp cargo during loading and unloading at a wharf.
To achieve the purpose of the invention, the following technical scheme is adopted:
The application relates to a method for automatically detecting the quantity of pulp cargo during loading and unloading at a wharf, characterized by comprising the following steps:
S1: extracting video images, at a fixed frequency, from a real-time video stream of a transport vehicle loaded with pulp cargo;
S2: processing each extracted video image to segment a feature map of the pulp-cargo region;
S3: processing the feature map to obtain candidate connection points;
S4: extracting a series of candidate line-segment samples from the candidate connection points;
S5: filtering the candidate line-segment samples;
S6: counting the remaining line-segment samples to obtain the quantity of pulp cargo.
In the present application, step S2 comprises:
S21: inputting the extracted video image into a residual network to obtain a feature image;
S22: processing the feature image and extracting rectangular candidate boxes that distinguish foreground from background;
S23: mapping the extracted rectangular candidate boxes onto the feature image and unifying their window sizes using a region feature aggregation technique;
S24: segmenting the feature image into a feature map of the pulp-cargo region according to the coordinate information of the rectangular candidate boxes.
In the present application, S2 further comprises, after S23 and before S24, the step S23': performing bounding-box regression on the size-unified rectangular candidate boxes of step S23 to correct their coordinate information.
In the present application, step S3 comprises the steps of:
S31: dividing the feature map into a grid of M cells, each of size W_x × H_x;
S32: inputting the feature map into a convolutional layer and a classification layer in sequence to compute the confidence of each grid cell, and converting the convolved feature map into a connection-point offset map O(x),

    O(x) = (b_x − l_i) / W_x,  if a connection point l_i falls in grid cell x,

where V denotes the set of connection points, l_i denotes the position of a connection point of V lying in grid cell x, and b_x denotes the center position of grid cell x;
S33: thresholding the computed confidence of each grid cell to obtain a probability map P(x) classifying whether each grid cell contains a connection point,

    P(x) = 1 if the confidence of grid cell x exceeds the threshold, and P(x) = 0 otherwise;

S34: using the offset map O(x) to predict the relative position of the connection point within its grid cell;
S35: optimizing the relative positions of the connection points within their grid cells using linear regression.
In the present application, step S3 further comprises: obtaining precise relative position information of the connection points within the grid cells by a non-maximum-suppression technique.
In the present application, step S4 comprises:
S41: outputting endpoint coordinate information for a series of line-segment samples using a mixed positive/negative sampling mechanism;
S42: performing fixed-length vectorization on each line-segment sample according to its endpoint coordinates to obtain a feature vector for each sample, thereby extracting a series of line-segment samples.
In the present application, step S5 may specifically be: filtering the line-segment samples using the intersection-over-union between the areas of the rectangular boxes whose diagonals are the respective line segments.
Alternatively, step S5 may specifically be: filtering the line-segment samples using the Euclidean distance between them.
The automatic pulp-cargo quantity detection method has the following advantages and beneficial effects:
After the pulp cargo arrives at port it is loaded onto a transport vehicle, whose video stream is monitored in real time. Video images are extracted from the stream, the pulp-cargo region is detected in each image, and, exploiting the fact that pulp cargo is bound bale by bale, line segments are then detected within that region, yielding the quantity of pulp cargo. The whole process requires no manual participation, reducing the manual workload; it is intelligent and automated, runs on a computer, detects quickly, and improves detection efficiency.
Other features and advantages of the present invention will become more apparent from the following detailed description of the invention when taken in conjunction with the accompanying drawings.
Drawings
In order to illustrate the technical solutions in the embodiments of the present invention more clearly, the drawings needed in the embodiments are briefly introduced below. The drawings described below illustrate only some embodiments of the invention; those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a flow chart of an embodiment of the method for automatically detecting the quantity of pulp cargo during loading and unloading at a wharf provided by the invention;
FIG. 2 is an original image of the pulp-cargo region involved in an embodiment of the method;
FIG. 3 shows the candidate connection points obtained in an embodiment of the method;
FIG. 4 shows the true candidate line-segment samples obtained in an embodiment of the method;
FIG. 5 is the final detection image produced by mapping the true candidate line-segment samples onto the original pulp-cargo image in an embodiment of the method;
FIG. 6 is an extracted video image;
FIG. 7 shows the effect of applying the automatic pulp-cargo quantity detection method to the video image of FIG. 6 in on-site detection of pulp cargo during loading and unloading at the wharf.
Detailed Description
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the drawings. The described embodiments are only some, not all, of the embodiments of the invention.
To avoid manually counting pulp-cargo bales during loading and unloading at a wharf, the application relates to a method for automatically detecting the quantity of pulp cargo. Its main task is, for every vehicle passing through the surveillance video, to detect the pulp-cargo bales and compute the quantity of pulp cargo they contain.
To realize automatic detection of the quantity of pulp cargo on the transport vehicle, the method uses target detection and line-segment detection in combination.
Target detection has two main tasks, classification and localization of the target: the foreground object must be separated from the background, and the category and position information of the foreground determined.
Current target-detection algorithms can be divided into candidate-region-based and end-to-end methods, with candidate-region-based methods dominant in detection accuracy and localization precision. At the same time, because pulp cargo is stacked irregularly, auxiliary information such as volume and midpoint is difficult to obtain, which raises the difficulty of further determining the quantity of pulp cargo.
Since pulp cargo is bound bale by bale, combining current line-segment detection techniques allows the quantity information to be extracted effectively with a line-segment detection method.
Referring to FIGS. 1 to 7, the implementation of the automatic pulp-cargo quantity detection method is described in detail below.
S1: extract video images, at a fixed frequency, from the real-time video stream of the vehicle loaded with pulp cargo.
After arrival at the port, the pulp cargo is loaded onto a transport vehicle, such as a flatbed truck, which is monitored on camera; video images are captured by extracting a fixed number of frames per second from the monitored video stream, see FIG. 2.
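A fixed extraction frequency simply means keeping every n-th frame of the stream. A minimal sketch of the index arithmetic follows; the function name and parameter values are illustrative, not taken from the patent:

```python
def frame_indices(stream_fps, grabs_per_second, duration_s):
    """Indices of the frames to extract from a surveillance stream.

    stream_fps       -- native frame rate of the video stream
    grabs_per_second -- how many frames per second to hand to the detector
    duration_s       -- length of the monitored interval in seconds
    """
    step = stream_fps / grabs_per_second          # stream frames per extracted frame
    n_grabs = int(duration_s * grabs_per_second)  # total frames to extract
    return [round(k * step) for k in range(n_grabs)]

# e.g. a 25 fps stream sampled 5 times per second for 2 seconds
indices = frame_indices(25, 5, 2)   # [0, 5, 10, ..., 45]
```

The returned indices would then be used to pick frames out of the decoded stream.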
It should be noted that each bale of pulp cargo loaded on the vehicle is approximately rectangular, with roughly uniform height and cross-sectional area, and is bound into a bale by straps (e.g. steel wires).
S2: process the extracted video image to segment a feature map of the pulp-cargo region.
The task of step S2 is target detection; it proceeds in the following parts.
S21: input the video image into a residual network to obtain a feature image.
To extract the semantic features of the video image and improve the acquisition of detailed image features, the video image is input into a residual network to obtain a feature map.
The residual network is an asymmetric encoder-decoder structure fused with dilated convolution.
For convenient subsequent processing, input video images of varying sizes are first resized to a uniform 512 × 512 square and then passed in sequence through several encoder-decoder modules.
In each encoder-decoder module, the encoder performs 3 convolutions of stride 2, each followed by 6 residual blocks whose convolution kernels use a dilation rate of 2 to obtain a larger receptive field. The decoder restores the image to the input size.
The asymmetric encoder-decoder structure fully extracts detail while fusing a larger receptive field, so the residual module provides rich detail features for subsequent processing.
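The receptive-field gain from the dilation rate of 2 can be checked with the standard recurrence for convolution stacks. A sketch assuming 3 × 3 kernels (the kernel size is an assumption for illustration, not stated in the patent):

```python
def receptive_field(layers):
    """Receptive field of a stack of convolutions.

    layers -- list of (kernel_size, stride, dilation) tuples.
    Recurrence: rf += (k - 1) * d * jump; jump *= s,
    where jump is the spacing of adjacent outputs in input pixels.
    """
    rf, jump = 1, 1
    for k, s, d in layers:
        rf += (k - 1) * d * jump
        jump *= s
    return rf

# Two 3x3 convs: dilation 2 doubles the growth compared to dilation 1.
dilated = receptive_field([(3, 1, 2), (3, 1, 2)])   # 1 + 4 + 4 = 9
plain   = receptive_field([(3, 1, 1), (3, 1, 1)])   # 1 + 2 + 2 = 5
```

This is why the dilated residual blocks can fuse a larger receptive field without adding parameters or reducing resolution.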
S22: process the feature image and extract rectangular candidate boxes that distinguish foreground from background.
First, 9 rectangular anchor boxes of 3 aspect ratios (1:1, 1:2, 2:1) are generated. Each point of the feature image is traversed and matched against the 9 anchor boxes, and a Softmax classifier decides, as a two-class problem, which boxes belong to the foreground; at the same time, bounding-box regression corrects the coordinate information of the boxes to form more accurate rectangular candidate boxes.
Note that the foreground refers to the pulp cargo, and the background to everything outside the pulp-cargo region.
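The 9 anchor shapes (3 aspect ratios at 3 scales) can be generated as below; the scale values here are illustrative choices, not specified in the patent:

```python
import numpy as np

def make_anchors(scales=(64.0, 128.0, 256.0), ratios=(1.0, 0.5, 2.0)):
    """Return 9 anchor shapes (w, h), one per (scale, aspect-ratio) pair.

    ratio = w / h; the area of each anchor is kept equal to scale**2.
    """
    anchors = []
    for s in scales:
        for r in ratios:
            w = s * np.sqrt(r)   # widen for ratio > 1
            h = s / np.sqrt(r)   # heighten for ratio < 1
            anchors.append((w, h))
    return np.array(anchors)

anchors = make_anchors()   # shape (9, 2): 3 scales x 3 ratios
```

Sliding these 9 shapes over every feature-map point yields the dense set of boxes that the Softmax classifier then scores as foreground or background.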
S23: map the extracted rectangular candidate boxes onto the feature image and unify their window sizes using a region feature aggregation technique.
The region feature aggregation technique was proposed in Mask R-CNN to generate a fixed-size feature map from each generated candidate-box region proposal, and is commonly used in existing instance-segmentation architectures.
S24: segment the feature image into a feature map of the pulp-cargo region according to the coordinate information of the rectangular candidate boxes.
The rectangular candidate boxes belonging to the foreground are located via their coordinate information, and the feature map of the pulp-cargo region is segmented out accordingly.
To guarantee the accuracy of the rectangular candidate boxes and improve the accuracy of the automatic count, before the feature map of the pulp-cargo region is segmented, the boxes obtained in step S23 undergo a further round of bounding-box regression that corrects their coordinate information.
According to the rectangular candidate boxes with corrected coordinate information, the feature image is then segmented into the feature map of the pulp-cargo region.
Once the feature map of the pulp-cargo region has been segmented out, target detection is complete.
Line-segment detection must then be performed on this feature map to detect the quantity of pulp cargo in the video image.
The line-segment detection part is described below with reference to FIGS. 1 to 7.
S3: process the feature map to obtain candidate connection points.
The feature map here is the feature map of the pulp-cargo region segmented in S24.
S31: divide the feature map into a grid of M cells, each of size W_x × H_x.
The W × H feature map is gridded into M cells, each of area W_x × H_x, where V denotes the set of connection points.
For a given grid cell x, the network must predict whether a candidate connection point exists in it and, if one does, the relative position of that connection point within cell x.
S32: input the feature map into a convolutional layer and a classification layer in sequence to compute the confidence of each grid cell, and convert the convolved feature map into the connection-point offset map O(x).
Specifically, the feature map is processed by a network comprising 1 × 1 convolutional layers and a classification layer, in which a softmax classification function computes the confidence that each grid cell contains a connection point.
A network containing 1 × 1 convolutional layers converts the feature map into the connection-point offset map O(x) as follows:

    O(x) = (b_x − l_i) / W_x,  if a connection point l_i falls in grid cell x    (1)

where l_i denotes the position of a connection point of the point set V lying in grid cell x, and b_x denotes the center position of grid cell x.
S33: threshold the computed confidence of each grid cell to obtain a probability map P(x) classifying whether each grid cell contains a connection point.
The probability map P(x) is:

    P(x) = 1 if the confidence p_x of grid cell x exceeds the threshold, and P(x) = 0 otherwise    (2)

That is, whether a connection point exists in a grid cell is a two-class problem.
The computed confidence of each grid cell is limited by a threshold: if the confidence p_x of grid cell x is greater than the threshold then, per formula (2), P(x) = 1 and cell x is considered to contain a connection point; otherwise P(x) = 0 and cell x is considered to contain no connection point.
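For training, the existence map P(x) and offset map O(x) amount to rasterizing the labeled connection points onto the grid. A numpy sketch, assuming at most one connection point per cell and an unnormalized offset (whether the offset is normalized by the cell size is not recoverable from the text):

```python
import numpy as np

def junction_maps(points, W, H, Wx, Hx):
    """Ground-truth maps for a W x H feature map gridded into Wx x Hx cells.

    points -- array of (x, y) connection-point coordinates (the set V)
    Returns P (1 where a cell contains a point) and O (offset of the
    cell centre b_x from the point l_i), one point per cell assumed.
    """
    gw, gh = W // Wx, H // Hx
    P = np.zeros((gh, gw))
    O = np.zeros((gh, gw, 2))
    for x, y in points:
        i, j = int(y // Hx), int(x // Wx)         # cell row / column
        cx, cy = (j + 0.5) * Wx, (i + 0.5) * Hx   # cell centre b_x
        P[i, j] = 1.0                             # cell contains a point
        O[i, j] = (cx - x, cy - y)                # centre-to-point offset
    return P, O

# one connection point at (10, 6) on a 32x32 map with 8x8 cells
P, O = junction_maps(np.array([[10.0, 6.0]]), W=32, H=32, Wx=8, Hx=8)
```

At inference time the network predicts these two maps instead of reading them from labels.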
If a grid cell is predicted to contain a connection point, the relative position of that connection point within the cell is then predicted, as described below.
S34: use the connection-point offset map O(x) to predict the relative position of the connection point within its grid cell.
O(x) encodes the displacement between the center of grid cell x and connection point i, and is used to predict the relative position of connection point i within cell x.
S35: optimize the relative positions of the connection points within their grid cells using linear regression.
If grid cell x contains connection point i, the relative position of the connection point is optimized with L2 linear regression, whose objective function is:

    L = (1 / N_v) Σ_x ‖O(x) − Ô(x)‖²    (3)

where N_v denotes the number of connection points and Ô(x) denotes the predicted offset of grid cell x.
Furthermore, a non-maximum-suppression technique further eliminates non-connection points in each grid cell, i.e. yields more precise relative position information of the connection points.
In the procedure, this may be implemented by a max-pooling operation, which obtains the more precise relative position information of the connection points in the grid cells.
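The max-pooling form of non-maximum suppression keeps a cell only where its confidence equals the maximum of its neighbourhood. A numpy sketch with a 3 × 3 window (the window size is an assumption for illustration):

```python
import numpy as np

def nms_maxpool(conf):
    """Suppress non-peak cells: keep conf[i, j] only if it equals the
    maximum of its 3x3 neighbourhood (the border is padded with -inf)."""
    h, w = conf.shape
    padded = np.pad(conf, 1, constant_values=-np.inf)
    # stack the 9 shifted views and take their per-cell maximum
    neigh = np.stack([padded[di:di + h, dj:dj + w]
                      for di in range(3) for dj in range(3)])
    local_max = neigh.max(axis=0)
    return np.where(conf == local_max, conf, 0.0)

heat = np.array([[0.1, 0.9, 0.2],
                 [0.3, 0.4, 0.1],
                 [0.0, 0.2, 0.7]])
peaks = nms_maxpool(heat)   # only the two local maxima survive
```

Cells that are not local maxima are zeroed, leaving one candidate connection point per peak.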
After the processing of S31 to S35 described above, the relative positions of the K candidate connection points with the highest confidence are finally output; see FIG. 3.
It should be noted that before step S3 is executed in practice, the whole model of S3 may need to be trained with a cross-entropy loss function; the trained model is used directly in practice, and with the feature map as input it outputs the candidate connection points described above.
S4: extract a series of candidate line-segment samples from the candidate connection points acquired in S3.
The purpose of this step is to obtain, from the K candidate connection points of S3, T candidate line-segment samples, where p_z1 = (x_z1, y_z1) and p_z2 = (x_z2, y_z2) denote the endpoint coordinates of the z-th candidate line-segment sample.
S41: obtain the endpoint coordinate information of the T candidate line-segment samples using a mixed positive/negative sampling mechanism.
It should be noted that mixed positive/negative sampling is preparation for model training. During training, the numbers of positive and negative samples among the K candidate connection points differ greatly and must be balanced, so a mixed positive/negative training scheme is adopted, in which positive samples come from labeled true line-segment samples and negative samples are non-true line-segment samples generated randomly through heuristic learning.
When the extracted K candidate connection points yield few accurate positive samples, or training has saturated, a fixed quantity of positive/negative samples is added to help training start; moreover, the added positive samples help the predicted points adjust their positions, improving prediction performance.
S42: perform fixed-length vectorization on each line-segment sample according to its endpoint coordinates to obtain a feature vector for each sample, thereby extracting a series of candidate line-segment samples.
Given a candidate line-segment sample, e.g. the z-th sample with endpoint coordinates p_z1 and p_z2, fixed-length vectorization computes N_l uniformly distributed points along the segment and obtains the feature value at each intermediate point by bilinear interpolation on the feature map output in step S2:

    q_k = F( p_z1 + (k / (N_l − 1)) · (p_z2 − p_z1) ),  k = 0, …, N_l − 1    (5)

where F(·) denotes bilinear sampling on the feature map.
A feature vector q of size C × N_l is thereby extracted for the line-segment sample, where C is the number of channels of the feature map output in step S2.
At this point, the candidate line-segment samples have been extracted.
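Formula (5) reads the feature map at N_l evenly spaced points between the two endpoints via bilinear interpolation. A single-channel numpy sketch (a C-channel map would repeat this per channel); the demo feature map is purely illustrative:

```python
import numpy as np

def segment_features(fmap, p1, p2, n_l):
    """Fixed-length vectorization of one line segment: n_l points uniformly
    spaced from p1 to p2, each read from the single-channel feature map
    `fmap` by bilinear interpolation. Points as (x, y); the segment is
    assumed to lie inside the map."""
    t = np.linspace(0.0, 1.0, n_l)[:, None]
    pts = np.asarray(p1) + t * (np.asarray(p2) - np.asarray(p1))
    x, y = pts[:, 0], pts[:, 1]
    x0, y0 = np.floor(x).astype(int), np.floor(y).astype(int)
    x1 = np.clip(x0 + 1, 0, fmap.shape[1] - 1)
    y1 = np.clip(y0 + 1, 0, fmap.shape[0] - 1)
    dx, dy = x - x0, y - y0
    return (fmap[y0, x0] * (1 - dx) * (1 - dy) + fmap[y0, x1] * dx * (1 - dy)
            + fmap[y1, x0] * (1 - dx) * dy + fmap[y1, x1] * dx * dy)

# On a linear ramp f(x, y) = x, interpolation reproduces the x coordinates.
ramp = np.tile(np.arange(8.0), (8, 1))
q = segment_features(ramp, (1.0, 1.0), (5.0, 3.0), 5)
```

Stacking such vectors over all C channels gives the C × N_l feature vector q of the text.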
It should be noted that before the series of candidate line-segment samples is obtained from the endpoint coordinate information, model training is required.
During training, the feature map output in step S2 and the candidate line-segment samples output in step S4 are required as input.
The training process is briefly described as follows:
first, based on a sample of candidate line segments, e.g. the firstzTwo endpoint coordinates of candidate line segment sample
Figure 197546DEST_PATH_IMAGE015
And
Figure 199000DEST_PATH_IMAGE017
vectorization processing of fixed length for line segment sample, i.e. calculating by two end point coordinatesN l Uniformly distributing the points, and obtaining the coordinates of the intermediate points by bilinear interpolation on the feature map output in step S2:
Figure 929059DEST_PATH_IMAGE019
then, the eigenvector q is reduced in dimension by one-dimensional maximum pooling of step sizes s, becomingC×N l And/s and is expanded into a one-dimensional feature vector.
Inputting the one-dimensional feature vector into a full-connection layer for convolution processing to obtain a logic value, specifically, after performing full-connection convolution twice on the one-dimensional feature vector, taking a log value and returning the log value
Figure DEST_PATH_IMAGE021
Figure 925833DEST_PATH_IMAGE021
And true valueySigmoid loss calculation and model optimization are carried out together to improve the prediction accuracy, wherein the loss function is as follows:
Figure DEST_PATH_IMAGE023
(6)。
wherein the true valueyThat is, the feature value in the feature map output in S2 is convolved with the true label of the line segment sample, and the value is returned after log extraction.
The penalty is the log of the calculated prediction (i.e., the
Figure 184776DEST_PATH_IMAGE021
) And the error between the log values (namely y) corresponding to the real labels of the line segment samples is used for model training and optimization.
Since repeated detection, i.e. a situation in which two line-segment samples overlap each other, inevitably occurs during detection, the line-segment samples output in S4 must be filtered to improve detection accuracy.
The filtering is implemented in step S5: each line-segment sample is filtered either using the Intersection-over-Union (IoU) between the areas of the rectangular boxes whose diagonals are the respective line segments, or using the Euclidean distance between line-segment samples.
The filtering method is not limited here, as long as the purpose of filtering is achieved.
Filtering the line-segment samples by IoU is taken as the example.
Suppose the coordinates of two overlapping line-segment samples are L1 [(x11, y11), (x12, y12)] and L2 [(x21, y21), (x22, y22)].
The width W1, height H1 and area A1 of the rectangular box R1 formed by L1, and the width W2, height H2 and area A2 of the rectangular box R2 formed by L2, are respectively:

    W1 = |x12 − x11|,  H1 = |y12 − y11|,  A1 = W1 · H1
    W2 = |x22 − x21|,  H2 = |y22 − y21|,  A2 = W2 · H2

The width W, height H and area A of the rectangle where the two boxes R1 and R2 intersect are respectively:

    W = min(max(x11, x12), max(x21, x22)) − max(min(x11, x12), min(x21, x22))
    H = min(max(y11, y12), max(y21, y22)) − max(min(y11, y12), min(y21, y22))
    A = W · H

If W ≤ 0 or H ≤ 0, then IoU = 0. Otherwise, IoU is calculated using the following formula:

    IoU = A / (A1 + A2 − A)
The IoU values are thresholded: samples whose IoU with another sample exceeds the adjusted threshold are filtered out, and the filtered line-segment samples are finally output; see FIG. 4.
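The IoU filter can be sketched directly from the formulas above: each segment spans an axis-aligned box, overlapping boxes are compared by IoU, and duplicates beyond the threshold are dropped. The greedy confidence order and the 0.5 threshold are illustrative assumptions:

```python
def box_of(seg):
    """Axis-aligned box (x_min, y_min, x_max, y_max) with the segment as diagonal."""
    (x1, y1), (x2, y2) = seg
    return min(x1, x2), min(y1, y2), max(x1, x2), max(y1, y2)

def seg_iou(s1, s2):
    """IoU of the two rectangles spanned by segments s1 and s2."""
    ax1, ay1, ax2, ay2 = box_of(s1)
    bx1, by1, bx2, by2 = box_of(s2)
    w = min(ax2, bx2) - max(ax1, bx1)
    h = min(ay2, by2) - max(ay1, by1)
    if w <= 0 or h <= 0:
        return 0.0
    inter = w * h
    a1 = (ax2 - ax1) * (ay2 - ay1)
    a2 = (bx2 - bx1) * (by2 - by1)
    return inter / (a1 + a2 - inter)

def filter_segments(segments, iou_thresh=0.5):
    """Greedy duplicate removal: drop a segment whose box IoU with an
    already-kept segment exceeds the threshold."""
    kept = []
    for s in segments:            # assumed sorted by confidence
        if all(seg_iou(s, k) <= iou_thresh for k in kept):
            kept.append(s)
    return kept

segs = [((0, 0), (10, 10)), ((1, 1), (10, 10)), ((20, 0), (30, 10))]
count = len(filter_segments(segs))   # step S6: the surviving segment count
```

Here the second segment nearly coincides with the first and is dropped, so the count, 2, is the number of bales detected.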
S6: and counting the number of the filtered line segment samples to obtain the number of the pulp cargos.
The number of line segment samples after filtering is the number of pulp picks.
Fig. 5 is a final detection diagram of the line segment candidate samples in fig. 4 mapped onto the original paper pulp goods image, and it can be seen from the detection effect that the automatic detection of the number of the paper pulp goods is accurate.
Referring to fig. 6 and 7, fig. 6 is a truncated original video image; fig. 7 is a diagram showing the effect of the pulp cargo field inspection applied to the video image in fig. 6 by using the automatic pulp cargo quantity detection method proposed in the present application, which outputs the quantity of line segment samples, i.e., the pulp cargo quantity.
As can be seen from the detection effect diagrams of figs. 5 and 7, the automatic pulp cargo quantity detection method proposed in the present application achieves high detection accuracy.
The method extracts video images directly from the video stream for detection; it is fully automatic and requires no manual participation, which reduces the manual workload, and the computer-based calculation is fast, which improves detection efficiency.
The above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be apparent to those skilled in the art that various changes may be made and equivalents may be substituted for elements thereof; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions.

Claims (8)

1. A method for automatically detecting the quantity of pulp cargos in the loading and unloading process of a wharf is characterized by comprising the following steps:
S1: extracting a video stream of a transport vehicle loaded with pulp cargos in real time at a certain frequency;
S2: processing the extracted video image to segment a feature map of the pulp cargo part;
S3: processing the feature map to obtain candidate connection points;
S4: extracting a series of candidate line segment samples according to the candidate connection points;
S5: filtering the candidate line segment samples;
S6: counting the number of the candidate line segment samples to obtain the number of the pulp cargos.
2. The automatic pulp goods quantity detection method according to claim 1, wherein the step S2 includes the following steps:
s21: inputting the extracted video image into a residual error network to obtain a characteristic image;
s22: processing the characteristic image, and extracting a rectangular candidate frame for distinguishing a foreground from a background;
s23: mapping the extracted rectangular candidate frame into the feature image, and unifying the window size of the rectangular candidate frame by using a regional feature aggregation technology;
s24: and segmenting the characteristic image into a characteristic map of the pulp cargo part according to the coordinate information of the rectangular candidate frame.
3. The automatic pulp cargo quantity detection method according to claim 2, wherein said S2 further comprises the following steps after S23 and before S24:
s23': and performing boundary frame regression on the rectangular candidate frame after the unified window in the step S23 to correct the coordinate information of the rectangular candidate frame.
4. The automatic pulp goods quantity detection method according to claim 1, wherein the step S3 includes the following steps:
s31: carrying out mesh division on the feature map to form M mesh units Wx×Hx
S32: sequentially inputting the feature maps into a convolutional layer and a classification layer for processing to calculate the confidence of each grid unit, and converting the feature maps processed by the convolutional layer into a connection point offset feature map O (in the following steps of (1))x),
Figure DEST_PATH_IMAGE001
Wherein V represents a set of points and wherein,lirepresenting a point of connection in a set of points ViIn the grid cellxIn the position (a) of (b),
Figure 71247DEST_PATH_IMAGE002
representing grid cellsxThe center position of (a);
s33: performing threshold value limitation on the calculated confidence of each grid unit to obtain a probability feature map P (x) And classifies whether or not there is a connection point in each grid cell,
Figure DEST_PATH_IMAGE003
s34: using the connection point offset profile O (x) Predicting the relative position of the connection point in the corresponding grid cell;
s35: the relative positions of the connection points in the corresponding grid cells are optimized using linear regression.
5. The method for automatically detecting the pulp cargo quantity according to claim 4, wherein the step S3 further comprises the steps of:
precise relative position information of the connection points in the grid cells is obtained by non-maxima suppression techniques.
6. The automatic pulp goods quantity detection method according to claim 1, wherein step S4 includes:
s41: outputting endpoint coordinate information of a series of line segment samples by adopting a positive and negative sample mixed sampling mechanism;
s42: and performing fixed-length vectorization processing on each line sample according to the endpoint coordinate information of each line sample to obtain a characteristic vector of each line sample so as to extract a series of line samples.
7. The method for automatically detecting the quantity of pulp goods according to claim 1, wherein the step S5 is specifically as follows:
and filtering each line segment sample by using the intersection ratio between the areas of the rectangular frames formed by taking each line segment sample as an intersection line.
8. The automatic pulp cargo quantity detection method according to claim 1, wherein step S5 is specifically:
and filtering each line segment sample by using the Euclidean distance between each line segment sample.
CN202011590292.0A 2020-12-29 2020-12-29 Automatic detection method for pulp cargo quantity in wharf loading and unloading process Active CN112598660B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011590292.0A CN112598660B (en) 2020-12-29 2020-12-29 Automatic detection method for pulp cargo quantity in wharf loading and unloading process

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011590292.0A CN112598660B (en) 2020-12-29 2020-12-29 Automatic detection method for pulp cargo quantity in wharf loading and unloading process

Publications (2)

Publication Number Publication Date
CN112598660A true CN112598660A (en) 2021-04-02
CN112598660B CN112598660B (en) 2022-10-21

Family

ID=75203295

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011590292.0A Active CN112598660B (en) 2020-12-29 2020-12-29 Automatic detection method for pulp cargo quantity in wharf loading and unloading process

Country Status (1)

Country Link
CN (1) CN112598660B (en)


Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105427275A (en) * 2015-10-29 2016-03-23 中国农业大学 Filed environment wheat head counting method and device
CN109850540A (en) * 2019-04-03 2019-06-07 合肥泰禾光电科技股份有限公司 Apparatus for grouping and group technology
CN110047215A (en) * 2019-05-24 2019-07-23 厦门莒光科技有限公司 A kind of article intellectual access storage appts and method
CN110930087A (en) * 2019-09-29 2020-03-27 杭州惠合信息科技有限公司 Inventory checking method and device
CN111754450A (en) * 2019-07-16 2020-10-09 北京京东乾石科技有限公司 Method, device, equipment and computer readable medium for determining number of objects
CN112101389A (en) * 2020-11-17 2020-12-18 支付宝(杭州)信息技术有限公司 Method and device for measuring warehoused goods


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113610465A (en) * 2021-08-03 2021-11-05 宁波极望信息科技有限公司 Production and manufacturing operation management system based on Internet of things technology
CN113610465B (en) * 2021-08-03 2023-12-19 宁波极望信息科技有限公司 Production manufacturing operation management system based on internet of things technology

Also Published As

Publication number Publication date
CN112598660B (en) 2022-10-21

Similar Documents

Publication Publication Date Title
CN110796048B (en) Ship target real-time detection method based on deep neural network
CN109829445B (en) Vehicle detection method in video stream
CN111814755A (en) Multi-frame image pedestrian detection method and device for night motion scene
CN111008961A (en) Transmission line equipment defect detection method and system, equipment and medium thereof
CN111027539A (en) License plate character segmentation method based on spatial position information
CN114708295B (en) Logistics parcel separation method based on Transformer
CN110728640A (en) Double-channel single-image fine rain removing method
CN115147418B (en) Compression training method and device for defect detection model
CN114581782A (en) Fine defect detection method based on coarse-to-fine detection strategy
CN112598660B (en) Automatic detection method for pulp cargo quantity in wharf loading and unloading process
CN113111875A (en) Seamless steel rail weld defect identification device and method based on deep learning
CN114724063B (en) Road traffic incident detection method based on deep learning
CN115578616A (en) Training method, segmentation method and device of multi-scale object instance segmentation model
CN112258038A (en) Method, device and equipment for identifying platform use state and vehicle loading and unloading state
CN115457415A (en) Target detection method and device based on YOLO-X model, electronic equipment and storage medium
CN111009136A (en) Method, device and system for detecting vehicles with abnormal running speed on highway
CN111814739B (en) Method, device, equipment and storage medium for detecting express package volume
CN113869433A (en) Deep learning method for rapidly detecting and classifying concrete damage
CN117132872A (en) Intelligent collision recognition system for material transport vehicle on production line
CN111242051A (en) Vehicle identification optimization method and device and storage medium
CN114140400B (en) Method for detecting cigarette packet label defect based on RANSAC and CNN algorithm
CN116403200A (en) License plate real-time identification system based on hardware acceleration
Chan et al. Raw camera data object detectors: an optimisation for automotive processing and transmission
CN114399657A (en) Vehicle detection model training method and device, vehicle detection method and electronic equipment
CN113160217A (en) Method, device and equipment for detecting foreign matters in circuit and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: No.7 Ganghua Road, Shibei District, Qingdao, Shandong Province

Applicant after: Shandong Port Technology Group Qingdao Co., Ltd.

Applicant after: QINGDAO OCEAN SHIPPING TALLY Co.,Ltd.

Applicant after: QINGDAO University

Address before: No.7 Ganghua Road, Shibei District, Qingdao, Shandong Province

Applicant before: PORT OF QINGDAO TECHNOLOGY Co.,Ltd.

Applicant before: QINGDAO OCEAN SHIPPING TALLY Co.,Ltd.

Applicant before: QINGDAO University

GR01 Patent grant