CN109800735A - Accurate detection and segmentation method for ship target
- Publication number
- CN109800735A (application number CN201910094015.1A)
- Authority
- CN
- China
- Prior art keywords
- target
- ship target
- sample
- value
- ship
- Prior art date: 2019-01-31
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Landscapes
- Image Analysis (AREA)
Abstract
The invention belongs to the field of machine vision and image processing and relates to a method for accurate detection and segmentation of ship targets. The method comprises the following steps: (S1) training a deep convolutional neural network model; (S2) acquiring the image to be detected and segmented, inputting it into the deep convolutional neural network model of step (S1), and outputting the ship target results. The method effectively realizes accurate detection and segmentation of ship targets and achieves higher detection accuracy on dense targets, side-by-side targets and near-shore targets. A rotated-box prediction method gives the candidate regions a higher intersection-over-union with the ground-truth boxes; the confidence, position and segmentation mask of each target are output in parallel through three independent loss layers; and targeted augmentation of the training data during network training improves the robustness of the model.
Description
Technical field
The invention belongs to the field of machine vision and image processing and relates to the design and training of a deep learning model, realizing a method for accurate detection and segmentation of ship targets under complex backgrounds.
Background art
As carriers of water transportation and targets of military importance, ship targets make accurate detection and segmentation of great practical significance. Applications such as maritime search and rescue, monitoring of ships entering and leaving port, control of illegal pollutant dumping by ships, and shipping-fleet management all place high requirements on accurate detection of ship targets. As convolutional neural networks and deep learning technology continue to improve, a large body of literature and technical experience has accumulated, especially in the field of target detection and recognition under complex backgrounds. Benefiting from their powerful feature extraction and learning ability, deep convolutional neural networks can extract features from complex images and represent targets hierarchically; based on the structural invariance of targets, they offer good detection capability for deformed, occluded, blurred and multi-scale targets, and for targets under complex backgrounds.
For ship detection in overhead imagery applied to battlefield reconnaissance and surveillance, and especially for ship detection under the interference of complex ground backgrounds, many problems remain to be solved:
(1) Ground objects such as trestles, wharves, loading/unloading platforms, mooring dolphins, containers and work sheds appear very similar to ship targets in overhead imagery, so false detections are easily produced and interfere with the detection results.
(2) Dense targets, for example multiple ships moored side by side or end to end, are easily detected as a single target in high-resolution images.
How to effectively exclude the influence of land and shore-based interfering objects, and how to improve the detection and segmentation of side-by-side and dense targets, are the keys to accurate detection and segmentation of ship targets.
Summary of the invention
In order to solve the above technical problems, the present invention trains a target detection and segmentation model based on a deep convolutional neural network, realizing parallel output of the confidence, detection box and image mask of multiple targets. The specific technical solution is as follows:
A method for accurate detection and segmentation of ship targets, comprising the following steps:
(S1) Train a deep convolutional neural network model, specifically:
(S11) Acquire sample images containing ship targets to form a sample image set, and pre-process the sample image set;
(S12) Manually annotate the ship targets in the sample images;
(S13) Input the sample images into the deep convolutional neural network for feature extraction and output a feature map; the feature map is the output of the last layer of the deep convolutional neural network;
(S14) Preset multiple rotated boxes and slide the preset rotated boxes over each pixel of the feature map; extract the pixels inside each rotated box and output feature vectors of identical dimension by a rotated pyramid pooling method; input each feature vector into a fully connected layer and output the probability that each rotated box contains a target; select the rotated boxes whose probability exceeds a threshold as regions of interest;
(S15) According to the manual annotations, select the regions of interest whose matching degree with a target is greater than α as positive samples and the rotated boxes whose matching degree with a target is less than β as negative samples, α and β being constants with 0 < β < α < 1; train the deep convolutional neural network model with the selected positive and negative samples;
(S16) Input the feature vectors into three fully connected layers of the deep convolutional neural network respectively, compute the confidence loss, position loss and segmentation-mask loss, and add the three losses to obtain a total loss. If the total loss reaches the set numerical range, terminate model training and go to step (S2); otherwise, augment the sample images and return to step (S11) to continue model training.
(S2) Acquire the image to be detected and segmented, input it into the deep convolutional neural network model of step (S1), and output the ship target results.
Further, the pre-processing of the sample image set in step (S11) is specifically:
(S111) Compute the density value of the ship targets in each sample image, the density value being the minimum pixel distance between a target and its nearest neighbouring target;
(S112) Search the sample image set for ship target pairs whose density value is smaller than the short-side length of their rotated bounding boxes;
(S113) Slice the ship target pairs obtained in step (S112), apply random scaling, random cropping, horizontal flipping, grayscale transformation and saturation changes, and add the results back into the sample image set.
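A minimal sketch of the augmentation named in step (S113) is given below, assuming OpenCV/NumPy and that the dense target pair has already been sliced out of the sample image. The parameter ranges (scale 0.8-1.2, 90% crop, saturation factor 0.7-1.3) are illustrative assumptions, not values fixed by the invention.

```python
import cv2
import numpy as np

def augment_dense_pair(image, rng=None):
    """Random scaling, random cropping, horizontal flipping, grayscale
    transformation and saturation change for a slice containing a dense
    ship-target pair (parameter ranges are illustrative assumptions)."""
    rng = rng or np.random.default_rng()
    out = image.copy()

    # Random scaling (assumed range 0.8-1.2).
    s = rng.uniform(0.8, 1.2)
    out = cv2.resize(out, None, fx=s, fy=s, interpolation=cv2.INTER_LINEAR)

    # Random crop (assumed to 90% of the scaled slice).
    h, w = out.shape[:2]
    ch, cw = int(h * 0.9), int(w * 0.9)
    y0 = rng.integers(0, h - ch + 1)
    x0 = rng.integers(0, w - cw + 1)
    out = out[y0:y0 + ch, x0:x0 + cw]

    # Horizontal flip with probability 0.5.
    if rng.random() < 0.5:
        out = cv2.flip(out, 1)

    # Grayscale transformation with probability 0.5 (kept 3-channel).
    if rng.random() < 0.5:
        gray = cv2.cvtColor(out, cv2.COLOR_BGR2GRAY)
        out = cv2.cvtColor(gray, cv2.COLOR_GRAY2BGR)

    # Saturation change (assumed factor 0.7-1.3).
    hsv = cv2.cvtColor(out, cv2.COLOR_BGR2HSV).astype(np.float32)
    hsv[..., 1] = np.clip(hsv[..., 1] * rng.uniform(0.7, 1.3), 0, 255)
    return cv2.cvtColor(hsv.astype(np.uint8), cv2.COLOR_HSV2BGR)
```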
Further, the specific process of manually annotating the ship targets in the sample images in step (S12) is: perform pixel-level manual annotation of the ship targets in each sample image, annotating each target separately if a single sample image contains multiple ship targets, and generate the minimum enclosing rectangle from the annotation mask of each ship target.
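A minimal sketch of deriving that enclosing rotated rectangle from a pixel-level mask, assuming OpenCV is used; cv2.minAreaRect returns (centre, (width, height), angle), matching the (x_c, y_c, w, h, θ) box parameterization used later.

```python
import cv2
import numpy as np

def mask_to_rotated_box(mask):
    """Minimum enclosing rotated rectangle of one ship target's binary
    annotation mask (a sketch; assumes one target per mask)."""
    ys, xs = np.nonzero(mask)
    points = np.stack([xs, ys], axis=1).astype(np.float32)
    (xc, yc), (w, h), theta = cv2.minAreaRect(points)
    return xc, yc, w, h, theta
```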
Further, the specific process of presetting multiple rotated boxes in step (S14) is: the total number of preset rotated boxes is 12, divided into 4 composite structures of identical shape but different orientation; each composite structure is composed of 3 rectangular boxes of different scales whose long sides are parallel and whose centres coincide; the centres of the 4 composite structures also coincide.
Further, the specific process of outputting feature vectors of identical dimension by the rotated pyramid pooling method in step (S14) is:
(S141) Divide each rotated box into several grid cells of identical area;
(S142) Divide each grid cell into four sub-cells of identical area in a 2 × 2 layout, find the centre point of each sub-cell and the values of its four nearest pixels, and obtain the pixel value at that centre point by bilinear interpolation;
(S143) Take the maximum of the four sub-cell centre-point values in each grid cell as the pooled value of that grid cell, and concatenate the pooled values of all grid cells to obtain the feature vector. Through steps (S141)-(S143), each rotated box yields one feature vector.
Further, the matching degree IoU between a region of interest and a target in step (S15) is computed as:
IoU = |s_i ∩ s_i*| / |s_i ∪ s_i*|
where s_i denotes the pixels of the region of interest and s_i* denotes the pixels of the manually annotated ship target.
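The matching degree above can be evaluated directly on binary masks; the sketch below assumes the region of interest and the annotation have both been rasterized to masks of the same size.

```python
import numpy as np

def mask_iou(roi_mask, gt_mask):
    """IoU between the pixel set of a region of interest and the pixel
    set of a manually annotated ship target (both boolean arrays)."""
    inter = np.logical_and(roi_mask, gt_mask).sum()
    union = np.logical_or(roi_mask, gt_mask).sum()
    return float(inter) / float(union) if union > 0 else 0.0
```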
Further, the formula used in step (S111) to compute the minimum pixel distance between a target and its nearest neighbouring target is:
Distance(A, B) = min over all (x_A, y_A) ∈ A and (x_B, y_B) ∈ B of sqrt((x_A − x_B)² + (y_A − y_B)²)
where Distance(A, B) denotes the minimum distance between the target and its nearest neighbouring target, A and B denote the two ship targets that are closest to each other, and (x_A, y_A), (x_B, y_B) are the coordinates of the pixels contained in target A and target B respectively.
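A direct (brute-force) evaluation of this density value is sketched below; for very large masks the pixel sets could be subsampled or a distance transform used instead, but the brute-force form mirrors the formula.

```python
import numpy as np

def min_pixel_distance(mask_a, mask_b):
    """Minimum Euclidean distance between any pixel of target A and any
    pixel of target B, as in the Distance(A, B) formula."""
    ya, xa = np.nonzero(mask_a)
    yb, xb = np.nonzero(mask_b)
    pa = np.stack([xa, ya], axis=1).astype(np.float32)   # pixels of A
    pb = np.stack([xb, yb], axis=1).astype(np.float32)   # pixels of B
    # Pairwise squared distances (|A| x |B|); fine for moderate mask sizes.
    d2 = ((pa[:, None, :] - pb[None, :, :]) ** 2).sum(axis=-1)
    return float(np.sqrt(d2.min()))
```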
Further, the values of α and β are α = 0.6 and β = 0.2.
Beneficial effects: the invention effectively realizes accurate detection and segmentation of ship targets, with higher detection accuracy for dense targets, side-by-side targets and near-shore targets. The invention adopts a rotated-box prediction method, so that the candidate regions have a higher intersection-over-union with the ground-truth boxes; the confidence, position and segmentation mask of each target are output in parallel through three independent loss layers; and network training with targeted augmentation of the training data improves the robustness of the model.
Brief description of the drawings
Fig. 1 is a flow chart of the steps of the invention;
Fig. 2 is a schematic diagram of the algorithm flow of the invention;
Fig. 3 is a schematic diagram of the candidate-box prediction module based on a rotated sliding window;
Fig. 4 is a schematic diagram of sliding-window scale selection;
Fig. 5 is a schematic diagram of rotated pyramid pooling;
Fig. 6 shows experimental results of the specific embodiment, in which figures (a) and (c) are the original images, figure (b) is the detection and segmentation result output for figure (a), and figure (d) is the detection and segmentation result output for figure (c).
Specific embodiments
The invention is further explained below with reference to the accompanying drawings and examples.
Fig. 1 shows the flow chart of the method of the invention. A method for accurate detection and segmentation of ship targets comprises the following steps:
(S1) Train a deep convolutional neural network model, specifically:
(S11) Acquire sample images containing ship targets to form a sample image set, and pre-process the sample image set;
(S12) Manually annotate the ship targets in the sample images;
(S13) Input the sample images into the deep convolutional neural network for feature extraction and output a feature map; the feature map is the output of the last layer of the deep convolutional neural network.
In the embodiment, feature map extraction is performed with ResNet-50 (reference: K. He, X. Zhang, S. Ren and J. Sun. Deep residual learning for image recognition [C]. 29th IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2016), 2016: 770-778).
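A minimal sketch of this backbone, assuming PyTorch and torchvision (≥ 0.13 for the weights argument): ResNet-50 with its classification head removed, so that the output of the last convolutional stage serves as the feature map of step (S13).

```python
import torch
import torch.nn as nn
from torchvision.models import resnet50

class ResNet50Backbone(nn.Module):
    """ResNet-50 truncated before global pooling; its last convolutional
    output (stride 32, 2048 channels) is used as the feature map."""
    def __init__(self, pretrained=True):
        super().__init__()
        net = resnet50(weights="IMAGENET1K_V1" if pretrained else None)
        # Keep everything up to and including layer4; drop avgpool and fc.
        self.body = nn.Sequential(*list(net.children())[:-2])

    def forward(self, x):
        return self.body(x)

# Example: a 3-channel 800x800 image gives a (1, 2048, 25, 25) feature map.
feat = ResNet50Backbone(pretrained=False)(torch.zeros(1, 3, 800, 800))
```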
(S14) Preset multiple rotated boxes and slide the preset rotated boxes over each pixel of the feature map; extract the pixels inside each rotated box and output feature vectors of identical dimension by the rotated pyramid pooling method; input each feature vector into a fully connected layer and output the probability that each rotated box contains a target; select the rotated boxes whose probability exceeds a threshold as regions of interest, as shown in Fig. 3.
As shown in Fig. 4, in a particular embodiment, 12 preset rotated boxes covering four orientations and three scales are used to predict the positions of ship targets. Preferably, the rotation angles are 0°, 45°, 90° and 135°. The three scales correspond to long-side lengths of 16, 32 and 48 pixels, with an aspect ratio of 5:1.
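A sketch of these 12 preset rotated boxes (four orientations 0°, 45°, 90°, 135°; long sides of 16, 32 and 48 pixels; aspect ratio 5:1), centred on one feature-map location; mapping them to every pixel of the feature map is a straightforward shift of the centre.

```python
import numpy as np

ANGLES_DEG = (0.0, 45.0, 90.0, 135.0)   # four orientations
LONG_SIDES = (16.0, 32.0, 48.0)         # three scales (long-side length, pixels)
ASPECT_RATIO = 5.0                      # long side : short side = 5 : 1

def rotated_anchors(xc, yc):
    """Return the 12 preset rotated boxes (x_c, y_c, w, h, theta) sharing
    the centre (xc, yc): 4 orientations x 3 scales, aspect ratio 5:1."""
    anchors = []
    for theta in ANGLES_DEG:
        for long_side in LONG_SIDES:
            w = long_side                    # long side
            h = long_side / ASPECT_RATIO     # short side
            anchors.append((xc, yc, w, h, np.deg2rad(theta)))
    return np.array(anchors, dtype=np.float32)

# 12 anchors per location, e.g. at feature-map cell centre (7.5, 7.5):
print(rotated_anchors(7.5, 7.5).shape)   # (12, 5)
```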
Fig. 5 is a schematic diagram of the rotated pyramid pooling method. The specific process of outputting feature vectors of identical dimension by the rotated pyramid pooling method is:
(S141) Divide each rotated box into several grid cells of identical area;
(S142) Divide each grid cell into four sub-cells of identical area in a 2 × 2 layout, find the centre point of each sub-cell and the values of its four nearest pixels, and obtain the pixel value at that centre point by bilinear interpolation;
(S143) Take the maximum of the four sub-cell centre-point values in each grid cell as the pooled value of that grid cell, and concatenate the pooled values of all grid cells to obtain the feature vector. Through steps (S141)-(S143), each rotated box yields a feature vector of equal length.
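A sketch of steps (S141)-(S143): the rotated box is divided into an n × n grid of equal-area cells, each cell is split 2 × 2, the sub-cell centres are mapped back into feature-map coordinates through the box rotation, sampled by bilinear interpolation, and max-pooled. The grid size n and the single-channel feature map are simplifying assumptions.

```python
import numpy as np

def bilinear(feat, x, y):
    """Bilinearly interpolate a 2-D feature map at real-valued (x, y)."""
    h, w = feat.shape
    x = min(max(x, 0.0), w - 1.0)
    y = min(max(y, 0.0), h - 1.0)
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    dx, dy = x - x0, y - y0
    return ((1 - dx) * (1 - dy) * feat[y0, x0] + dx * (1 - dy) * feat[y0, x1]
            + (1 - dx) * dy * feat[y1, x0] + dx * dy * feat[y1, x1])

def rotated_pyramid_pool(feat, box, n=7):
    """Pool a rotated box (x_c, y_c, w, h, theta) on a single-channel
    feature map into an n*n feature vector (steps S141-S143)."""
    xc, yc, w, h, theta = box
    cos_t, sin_t = np.cos(theta), np.sin(theta)
    pooled = np.empty((n, n), dtype=np.float32)
    for i in range(n):          # grid rows along the box height
        for j in range(n):      # grid columns along the box width
            samples = []
            # 2 x 2 sub-cells; sample each sub-cell at its centre.
            for di in (0.25, 0.75):
                for dj in (0.25, 0.75):
                    # Coordinates in the box frame, origin at the box centre.
                    u = ((j + dj) / n - 0.5) * w
                    v = ((i + di) / n - 0.5) * h
                    # Rotate into feature-map coordinates.
                    x = xc + u * cos_t - v * sin_t
                    y = yc + u * sin_t + v * cos_t
                    samples.append(bilinear(feat, x, y))
            pooled[i, j] = max(samples)   # max over the four sub-cell centres
    return pooled.reshape(-1)             # fixed-length feature vector
```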
(S15) According to the manual annotations, select the regions of interest whose matching degree with a target is greater than α as positive samples and the rotated boxes whose matching degree with a target is less than β as negative samples, α and β being constants with 0 < β < α < 1; train the deep convolutional neural network model with the selected positive and negative samples.
In the embodiment, because the training samples contain a large amount of complex background, the ratio of positive to negative samples is set to 1:3, and if the number of positive samples is still insufficient the remainder is filled with negative samples. Non-maximum suppression may also be applied to remove the large number of duplicate regions of interest, from which the 256 regions of interest with the highest scores are preferably selected.
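A sketch of this sampling step, assuming the IoU of each proposal with its best-matching annotation has already been computed (for example with the mask IoU shown earlier): proposals with IoU > α become positives, IoU < β negatives, the highest-scoring positives are kept (rotated-box NMS would be applied here as well but is omitted), and negatives are drawn towards a 1:3 positive-to-negative ratio, topping up with negatives when positives run short.

```python
import numpy as np

def select_training_samples(scores, ious, alpha=0.6, beta=0.2,
                            max_rois=256, neg_ratio=3, rng=None):
    """Pick positive/negative proposal indices for one image.
    scores: (N,) objectness scores; ious: (N,) best IoU of each proposal
    with any annotated target."""
    rng = rng or np.random.default_rng()
    pos = np.flatnonzero(ious > alpha)
    neg = np.flatnonzero(ious < beta)

    # Keep the highest-scoring positives, capped at max_rois.
    pos = pos[np.argsort(scores[pos])[::-1]][:max_rois]

    # 1:3 positive-to-negative ratio; fill up with negatives if needed.
    n_neg = min(len(neg), max(len(pos) * neg_ratio, max_rois - len(pos)))
    neg = rng.choice(neg, size=n_neg, replace=False)
    return pos, neg
```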
(S16) Input the feature vectors into three fully connected layers of the deep convolutional neural network respectively, compute the confidence loss, position loss and segmentation-mask loss, and add the three losses to obtain a total loss. If the total loss reaches the set numerical range, terminate model training and go to step (S2); otherwise, augment the sample images and return to step (S11) to continue model training.
The regions of interest are input into three independently arranged fully connected layers, which compute the confidence loss L_cls, the position loss L_box and the segmentation-mask loss L_mask respectively; the three parts are added to obtain the total loss function L:
L = L_cls + L_box + L_mask    (Formula 2)
(1) The confidence loss L_cls uses a log-likelihood (binary cross-entropy) loss function:
L_cls = −Σ_i [ y_i log p_i + (1 − y_i) log(1 − p_i) ]
where p_i denotes the probability that the i-th output region of interest is judged to be a target, and the Boolean value y_i indicates whether a target is present in the i-th output region of interest, taking the value 1 if a target is present and 0 otherwise.
(2) The position loss L_box is computed with a smooth L1 norm:
L_box = Σ_i Σ_j p_ij · Σ_m smooth_L1( t_m^i − v_m^(ij) ),  where smooth_L1(x) = 0.5 x² if |x| < 1 and |x| − 0.5 otherwise,
where p_ij ∈ {1, 0} indicates whether the i-th output region of interest matches the j-th manual annotation box; the prediction t_m^i is the deviation of each component, with m taking x_c, y_c, h, w and θ in turn, where x_c denotes the abscissa of the centre of the output box, y_c the ordinate of the centre, w the width of the box, h the height of the box and θ the angle of the box; v_m^(ij) is the deviation of each component between the i-th predefined box and the j-th ground-truth box, with m likewise taking x_c, y_c, h, w and θ.
(3) The segmentation-mask loss L_mask is defined as follows: the mask branch of each region of interest outputs n² values, i.e. a binary mask with resolution n × n that judges whether each pixel belongs to the target or to the background; a sigmoid is applied to each pixel, and L_mask is defined as the average binary cross-entropy loss. The mask loss function L_mask is defined only on positive candidate regions, and the mask target is the mask of the intersection between a candidate region and its corresponding ground-truth box.
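A sketch of the three parallel loss terms and the total loss L = L_cls + L_box + L_mask, assuming PyTorch: binary cross-entropy on the objectness probability, smooth L1 over the five box components (x_c, y_c, w, h, θ) of positive samples, and average per-pixel binary cross-entropy on positive samples only. The RRPN-style offset encoding mentioned in the comments is an assumption, since the patent does not reproduce the component-deviation formula here.

```python
import torch
import torch.nn.functional as F

def total_loss(cls_logits, labels, box_pred, box_targets,
               mask_logits, mask_targets, pos_mask):
    """L = L_cls + L_box + L_mask for one minibatch of regions of interest.
    cls_logits:   (N,)      objectness logits
    labels:       (N,)      1 for positive, 0 for negative samples
    box_pred:     (N, 5)    predicted offsets for (x_c, y_c, w, h, theta)
    box_targets:  (N, 5)    regression targets, e.g. RRPN-style offsets
                            ((x*-x_a)/w_a, (y*-y_a)/h_a, log(w*/w_a),
                             log(h*/h_a), theta*-theta_a)  # assumed encoding
    mask_logits:  (N, n, n) per-pixel mask logits
    mask_targets: (N, n, n) binary mask targets (intersection with GT box)
    pos_mask:     (N,) bool True for positive samples
    """
    # (1) Confidence loss: log-likelihood (binary cross-entropy) over all samples.
    l_cls = F.binary_cross_entropy_with_logits(cls_logits, labels.float())

    if pos_mask.any():
        # (2) Position loss: smooth L1, positive samples only.
        l_box = F.smooth_l1_loss(box_pred[pos_mask], box_targets[pos_mask])
        # (3) Mask loss: average per-pixel binary cross-entropy, positives only.
        l_mask = F.binary_cross_entropy_with_logits(
            mask_logits[pos_mask], mask_targets[pos_mask].float())
    else:
        l_box = cls_logits.new_zeros(())
        l_mask = cls_logits.new_zeros(())

    return l_cls + l_box + l_mask
```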
In the embodiment, stochastic gradient descent is used for model training; the model learning rate is set to 0.0001 and reduced to 0.00001 after 20,000 iterations, and training is stopped when the total loss stabilizes at around 0.17 and no longer decreases.
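The schedule of the embodiment (SGD, learning rate 0.0001 dropped to 0.00001 after 20,000 iterations, stop once the total loss settles around 0.17) could be configured roughly as below, assuming PyTorch; `model`, `data_loader` and `loss_of_batch` stand in for the network, the sample loader and the total loss above, and the momentum value and stopping window are assumptions.

```python
import torch

def train(model, data_loader, loss_of_batch, max_iters=100000):
    """Sketch of the training schedule described in the embodiment."""
    opt = torch.optim.SGD(model.parameters(), lr=1e-4, momentum=0.9)
    # Learning rate drops from 1e-4 to 1e-5 after 20,000 iterations.
    sched = torch.optim.lr_scheduler.MultiStepLR(opt, milestones=[20000], gamma=0.1)

    it, running = 0, []
    for batch in data_loader:
        loss = loss_of_batch(model, batch)
        opt.zero_grad()
        loss.backward()
        opt.step()
        sched.step()

        running.append(float(loss))
        it += 1
        if it >= max_iters:
            break
        # Stop once the running-average loss no longer decreases
        # (it settles around 0.17 in the embodiment).
        if it % 1000 == 0 and len(running) >= 2000:
            prev = sum(running[-2000:-1000]) / 1000
            curr = sum(running[-1000:]) / 1000
            if curr >= prev - 1e-4:
                break
```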
(S2) Acquire the image to be detected and segmented, input it into the deep convolutional neural network model of step (S1), and output the ship target results.
As shown in Fig. 6, (a) and (c) are the input original images, and (b) and (d) are the corresponding output images annotated with ship detection and segmentation results, obtained with two embodiments of the method of the invention. The embodiment results show that the invention effectively realizes accurate detection and segmentation of ship targets, with higher detection accuracy for dense targets, side-by-side targets and near-shore targets.
It should further be noted that the invention is not limited to the specific embodiments above; those skilled in the art may make any variations or improvements within the scope of the claims.
Claims (8)
1. A method for accurate detection and segmentation of ship targets, characterized by comprising the following steps:
(S1) Train a deep convolutional neural network model, specifically:
(S11) Acquire sample images containing ship targets to form a sample image set, and pre-process the sample image set;
(S12) Manually annotate the ship targets in the sample images;
(S13) Input the sample images into the deep convolutional neural network for feature extraction and output a feature map, the feature map being the output of the last layer of the deep convolutional neural network;
(S14) Preset multiple rotated boxes and slide the preset rotated boxes over each pixel of the feature map; extract the pixels inside each rotated box and output feature vectors of identical dimension by a rotated pyramid pooling method; input each feature vector into a fully connected layer and output the probability that each rotated box contains a target; select the rotated boxes whose probability exceeds a threshold as regions of interest;
(S15) According to the manual annotations, select the regions of interest whose matching degree with a target is greater than α as positive samples and the rotated boxes whose matching degree with a target is less than β as negative samples, α and β being constants with 0 < β < α < 1; train the deep convolutional neural network model with the selected positive and negative samples;
(S16) Input the feature vectors into three fully connected layers of the deep convolutional neural network respectively, compute the confidence loss, position loss and segmentation-mask loss, and add the three losses to obtain a total loss; if the total loss reaches the set numerical range, terminate model training and go to step (S2); otherwise, augment the sample images and return to step (S11) to continue model training;
(S2) Acquire the image to be detected and segmented, input it into the deep convolutional neural network model of step (S1), and output the ship target results.
2. The method for accurate detection and segmentation of ship targets according to claim 1, characterized in that the pre-processing of the sample image set in step (S11) is specifically:
(S111) Compute the density value of the ship targets in each sample image, the density value being the minimum pixel distance between a target and its nearest neighbouring target;
(S112) Search the sample image set for ship target pairs whose density value is smaller than the short-side length of their rotated bounding boxes;
(S113) Slice the ship target pairs obtained in step (S112), apply random scaling, random cropping, horizontal flipping, grayscale transformation and saturation changes, and add the results back into the sample image set.
3. The method for accurate detection and segmentation of ship targets according to claim 1, characterized in that the specific process of manually annotating the ship targets in the sample images in step (S12) is: perform pixel-level manual annotation of the ship targets in each sample image, annotating each target separately if a single sample image contains multiple ship targets, and generate the minimum enclosing rectangle from the annotation mask of each ship target.
4. The method for accurate detection and segmentation of ship targets according to claim 1, characterized in that the specific process of presetting multiple rotated boxes in step (S14) is: the total number of preset rotated boxes is 12, divided into 4 composite structures of identical shape but different orientation, each composite structure being composed of 3 rectangular boxes of different scales whose long sides are parallel and whose centres coincide; the centres of the 4 composite structures also coincide.
5. The method for accurate detection and segmentation of ship targets according to claim 1, characterized in that the specific process of outputting feature vectors of identical dimension by the rotated pyramid pooling method in step (S14) is:
(S141) Divide each rotated box into several grid cells of identical area;
(S142) Divide each grid cell into four sub-cells of identical area in a 2 × 2 layout, find the centre point of each sub-cell and the values of its four nearest pixels, and obtain the pixel value at that centre point by bilinear interpolation;
(S143) Take the maximum of the four sub-cell centre-point values in each grid cell as the pooled value of that grid cell, and concatenate the pooled values of all grid cells to obtain the feature vector;
through steps (S141)-(S143), each rotated box yields one feature vector.
6. The method for accurate detection and segmentation of ship targets according to claim 1, characterized in that the matching degree IoU between a region of interest and a target in step (S15) is computed as:
IoU = |s_i ∩ s_i*| / |s_i ∪ s_i*|
where s_i denotes the pixels of the region of interest and s_i* denotes the pixels of the manually annotated ship target.
7. The method for accurate detection and segmentation of ship targets according to claim 2, characterized in that the formula used in step (S111) to compute the minimum pixel distance between a target and its nearest neighbouring target is:
Distance(A, B) = min over all (x_A, y_A) ∈ A and (x_B, y_B) ∈ B of sqrt((x_A − x_B)² + (y_A − y_B)²)
where Distance(A, B) denotes the minimum distance, A and B denote the two ship targets that are closest to each other, and (x_A, y_A), (x_B, y_B) are the coordinates of the pixels contained in target A and target B respectively.
8. The method for accurate detection and segmentation of ship targets according to claim 1, characterized in that the values of α and β are α = 0.6 and β = 0.2.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910094015.1A CN109800735A (en) | 2019-01-31 | 2019-01-31 | Accurate detection and segmentation method for ship target |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109800735A true CN109800735A (en) | 2019-05-24 |
Family
ID=66560605
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910094015.1A Pending CN109800735A (en) | 2019-01-31 | 2019-01-31 | Accurate detection and segmentation method for ship target |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109800735A (en) |
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP6395672B2 (en) * | 2015-06-30 | 2018-09-26 | 三菱電機株式会社 | Radar equipment |
CN106127204A (en) * | 2016-06-30 | 2016-11-16 | 华南理工大学 | A kind of multi-direction meter reading Region detection algorithms of full convolutional neural networks |
CN108319949A (en) * | 2018-01-26 | 2018-07-24 | 中国电子科技集团公司第十五研究所 | Mostly towards Ship Target Detection and recognition methods in a kind of high-resolution remote sensing image |
CN108647681A (en) * | 2018-05-08 | 2018-10-12 | 重庆邮电大学 | A kind of English text detection method with text orientation correction |
Non-Patent Citations (4)
Title |
---|
JIANQI MA ET AL.: "Arbitrary-Oriented Scene Text Detection via Rotation Proposals", IEEE Transactions on Multimedia * |
WANG ZHI: "Target detection and segmentation method under complex backgrounds based on deep learning", China Master's Theses Full-text Database, Information Science and Technology * |
HU XIUHUA: "Research on target tracking algorithms in complex scenes", China Doctoral Dissertations Full-text Database, Information Science and Technology * |
YI SHIDONG: "Research on image recognition algorithms based on deep learning", China Master's Theses Full-text Database, Information Science and Technology * |
Cited By (29)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110188833A (en) * | 2019-06-04 | 2019-08-30 | 北京字节跳动网络技术有限公司 | Method and apparatus for training pattern |
CN110490203A (en) * | 2019-07-05 | 2019-11-22 | 平安科技(深圳)有限公司 | Image partition method and device, electronic equipment and computer readable storage medium |
CN110490203B (en) * | 2019-07-05 | 2023-11-03 | 平安科技(深圳)有限公司 | Image segmentation method and device, electronic equipment and computer readable storage medium |
CN110569712A (en) * | 2019-07-19 | 2019-12-13 | 中国地质大学(武汉) | method for detecting long and narrow wall in plan image |
CN112446231A (en) * | 2019-08-27 | 2021-03-05 | 丰图科技(深圳)有限公司 | Pedestrian crossing detection method and device, computer equipment and storage medium |
CN110517262A (en) * | 2019-09-02 | 2019-11-29 | 上海联影医疗科技有限公司 | Object detection method, device, equipment and storage medium |
CN110517262B (en) * | 2019-09-02 | 2022-08-16 | 上海联影医疗科技股份有限公司 | Target detection method, device, equipment and storage medium |
CN110689527A (en) * | 2019-09-18 | 2020-01-14 | 北京航空航天大学 | Method, device and equipment for detecting installation state of aircraft cable bracket |
CN110689527B (en) * | 2019-09-18 | 2021-08-24 | 北京航空航天大学 | Method, device and equipment for detecting installation state of aircraft cable bracket |
CN110930420A (en) * | 2019-11-11 | 2020-03-27 | 中科智云科技有限公司 | Dense target background noise suppression method and device based on neural network |
CN110930420B (en) * | 2019-11-11 | 2022-09-30 | 中科智云科技有限公司 | Dense target background noise suppression method and device based on neural network |
CN111160354A (en) * | 2019-12-30 | 2020-05-15 | 哈尔滨工程大学 | Ship image segmentation method based on joint image information under sea and sky background |
CN111160354B (en) * | 2019-12-30 | 2022-06-17 | 哈尔滨工程大学 | Ship image segmentation method based on joint image information under sea and sky background |
CN111126587A (en) * | 2019-12-30 | 2020-05-08 | 上海安路信息科技有限公司 | AC-DC ratio circuit |
CN111126587B (en) * | 2019-12-30 | 2021-02-02 | 上海安路信息科技有限公司 | AC-DC ratio circuit |
CN111462140B (en) * | 2020-04-30 | 2023-07-07 | 同济大学 | Real-time image instance segmentation method based on block stitching |
CN111462140A (en) * | 2020-04-30 | 2020-07-28 | 同济大学 | Real-time image instance segmentation method based on block splicing |
WO2021204014A1 (en) * | 2020-11-12 | 2021-10-14 | 平安科技(深圳)有限公司 | Model training method and related apparatus |
CN112365510A (en) * | 2020-11-12 | 2021-02-12 | Oppo(重庆)智能科技有限公司 | Image processing method, device, equipment and storage medium |
CN112906689B (en) * | 2021-01-21 | 2024-03-15 | 南京航空航天大学 | Image detection method based on defect detection and segmentation depth convolutional neural network |
CN112800918A (en) * | 2021-01-21 | 2021-05-14 | 北京首都机场航空安保有限公司 | Identity recognition method and device for illegal moving target |
CN112906689A (en) * | 2021-01-21 | 2021-06-04 | 南京航空航天大学 | Image detection method based on defect detection and segmentation depth convolution neural network |
CN112906502B (en) * | 2021-01-29 | 2023-08-01 | 北京百度网讯科技有限公司 | Training method, device, equipment and storage medium of target detection model |
CN112906502A (en) * | 2021-01-29 | 2021-06-04 | 北京百度网讯科技有限公司 | Training method, device and equipment of target detection model and storage medium |
CN113592915B (en) * | 2021-10-08 | 2021-12-14 | 湖南大学 | End-to-end rotating frame target searching method, system and computer readable storage medium |
CN113592915A (en) * | 2021-10-08 | 2021-11-02 | 湖南大学 | End-to-end rotating frame target searching method, system and computer readable storage medium |
CN114066900A (en) * | 2021-11-12 | 2022-02-18 | 北京百度网讯科技有限公司 | Image segmentation method and device, electronic equipment and storage medium |
WO2023098487A1 (en) * | 2021-11-30 | 2023-06-08 | 西门子股份公司 | Target detection method and apparatus, electronic device, and computer storage medium |
CN117689890A (en) * | 2024-01-09 | 2024-03-12 | 哈尔滨工程大学 | Semantic segmentation method, device and storage medium based on fine and fog scene |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109800735A (en) | Accurate detection and segmentation method for ship target | |
CN113569667B (en) | Inland ship target identification method and system based on lightweight neural network model | |
CN110232350B (en) | Real-time water surface multi-moving-object detection and tracking method based on online learning | |
CN109977918B (en) | Target detection positioning optimization method based on unsupervised domain adaptation | |
CN109784203B (en) | Method for inspecting contraband in weak supervision X-ray image based on layered propagation and activation | |
CN104392228B (en) | Unmanned plane image object class detection method based on conditional random field models | |
WO2020046213A1 (en) | A method and apparatus for training a neural network to identify cracks | |
CN109299688A (en) | Ship Detection based on deformable fast convolution neural network | |
CN108647655A (en) | Low latitude aerial images power line foreign matter detecting method based on light-duty convolutional neural networks | |
CN109543606A (en) | A kind of face identification method that attention mechanism is added | |
CN111079739B (en) | Multi-scale attention feature detection method | |
CN110991444B (en) | License plate recognition method and device for complex scene | |
CN107145903A (en) | A kind of Ship Types recognition methods extracted based on convolutional neural networks picture feature | |
CN111091095B (en) | Method for detecting ship target in remote sensing image | |
CN108021890B (en) | High-resolution remote sensing image port detection method based on PLSA and BOW | |
CN110647802A (en) | Remote sensing image ship target detection method based on deep learning | |
CN108520203A (en) | Multiple target feature extracting method based on fusion adaptive more external surrounding frames and cross pond feature | |
CN116563726A (en) | Remote sensing image ship target detection method based on convolutional neural network | |
Nguyen et al. | Satellite image classification using convolutional learning | |
CN107992818A (en) | A kind of detection method of remote sensing image sea ship target | |
CN109948457A (en) | The real time target recognitio algorithm accelerated based on convolutional neural networks and CUDA | |
CN112417931A (en) | Method for detecting and classifying water surface objects based on visual saliency | |
Zhang et al. | Nearshore vessel detection based on Scene-mask R-CNN in remote sensing image | |
Zhang et al. | Few-shot object detection with self-adaptive global similarity and two-way foreground stimulator in remote sensing images | |
Yaohua et al. | A SAR oil spill image recognition method based on densenet convolutional neural network |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20190524 |