CN109741345B - Automatic selection method of intelligent segmentation parameters for strengthening target attributes of specific area classes - Google Patents
Abstract
The invention relates to an automatic selection method of neutrosophic segmentation parameters for strengthening the class-target attribute of a specific region, comprising the following steps: acquiring image data and selecting a specific region; setting different image segmentation parameter tuples and calculating segmentation results based on the different parameter tuples; under each segmentation result, measuring the neutrosophic membership, uncertainty and non-membership of the specific-region class target based on a diamond region boundary; under each segmentation result, measuring the neutrosophic membership, uncertainty and non-membership of the specific-region class target based on a square region boundary; and calculating the neutrosophic similarity for each segmentation result, finally determining the segmentation parameter tuple suited to the current image distribution. The method is simple to implement and widely applicable: based on the characteristics of the current image, it automatically selects segmentation parameters that strengthen the class-target attribute of a specific region of interest, serving subsequent tasks such as target tracking and greatly improving their performance.
Description
Technical Field
The invention relates to the technical field of computer vision, and in particular to an automatic selection method of neutrosophic segmentation parameters for strengthening the class-target attribute of a specific region.
Background
Image segmentation is a key technology in computer vision analysis; it determines segmented image regions by comprehensively applying multiple theories such as pattern recognition, optimization theory, probability theory, random processes and machine learning. Image segmentation is widely used for lesion extraction in medical images, object detection, scene understanding, visual target tracking and the like. The result of an existing segmentation algorithm is usually closely tied to its parameter initialization: different initialization parameters yield different segmentation results.
Combined with the class-target attribute, an image segmentation result can provide a basis for visual target detection, mainly by giving the probability that a specific image region is a region of the target of interest. The accuracy of this information depends chiefly on the semantic-level accuracy of the segmentation: neither severe under-segmentation nor over-segmentation may occur, or the judgment of the class-target attribute is greatly disturbed. The class-target attribute can also benefit classifier-based online target tracking: training samples can be weighted according to the class-target scale, strengthening target-level samples and moderately weakening samples that deviate from the target, thereby greatly improving the reliability of classifier training and the robustness of target tracking. Since the accuracy of the class-scale measurement depends on the segmentation result, in many applications it is desirable to obtain segmentation parameters that strengthen the class scale of a specific region; however, maximizing the class scale of that region alone easily introduces noise, with unexpected consequences.
Neutrosophic theory is a generalization of traditional fuzzy theory: besides the membership and non-membership degrees of the analyzed object, it introduces an uncertainty degree, and it shows better capability for handling uncertain information than traditional fuzzy theory. Over the last decade the neutrosophic theoretical system has been considerably perfected and developed. Neutrosophic theory has been widely applied in visual-analysis fields such as image segmentation, compression, moving-target extraction and feature fusion, and has also shown clear theoretical advantages in steam-turbine fault diagnosis, medical diagnosis, mechanical control and the like. For the segmentation-parameter optimization problem, the core question of this method is how to introduce neutrosophic theory so that, in addition to the class-target degree of a specific region, a neighborhood uncertainty factor is incorporated, comprehensively improving the reliability of image segmentation parameter selection.
Disclosure of Invention
The invention aims to provide an automatic selection method of neutrosophic segmentation parameters for strengthening the class-target attribute of a specific region, which is simple to implement, has strong anti-interference capability, and adapts well to the segmentation-parameter selection task under extremely challenging conditions such as complicated and variable backgrounds, complex image characteristics and uncertain scales.
In order to achieve the above object, the present invention has the following configurations:
The invention provides an automatic selection method of neutrosophic segmentation parameters for strengthening the class-target attribute of a specific region, comprising the following steps:
(1) acquiring image data and selecting a specific area B to be enhanced;
(2) setting different image segmentation parameter tuples and calculating segmentation results based on the different parameter tuples on the image data;
(3) for each segmentation result, calculating measures of the neutrosophic membership, uncertainty and non-membership of the specific-region class target based on the diamond region boundary;
(4) for each segmentation result, calculating measures of the neutrosophic membership, uncertainty and non-membership of the specific-region class target based on the square region boundary;
(5) calculating the neutrosophic similarity between each image segmentation parameter tuple and the specific region B based on the calculation results of step (3) and step (4), and determining the segmentation parameter tuple suited to the current image distribution.
Optionally, in the step (2), setting different image segmentation parameter tuples includes the following steps:
For a given segmentation algorithm, parameters that significantly influence the segmentation result and differ widely in effect are selected to form segmentation parameter tuples. Each parameter configuration forms a tuple P_k; K tuples are configured, and the corresponding K segmentation results are computed based on the K tuples.
Optionally, in the step (3), the neutrosophic membership, uncertainty and non-membership measures of the specific-region class target based on the diamond region boundary under each segmentation result are calculated according to the following formulas:

T_DIA(P_k, B) = s_k(c)

F_DIA(P_k, B) = 1 - T_DIA(P_k, B)

wherein T_DIA(P_k, B), I_DIA(P_k, B) and F_DIA(P_k, B) are respectively the measures of neutrosophic membership, uncertainty and non-membership, based on the diamond region boundary condition, corresponding to tuple P_k and the specific region B; the region B is a particular rectangular region on the image; s_k(c) is the class scale of region B under the segmentation result of tuple P_k, and c is the center point of region B. diar_j(α) is a pixel on the boundary of the diamond region whose vertices lie α pixels to the left of, above, to the right of and below c; taking the diamond vertex to the left of c as the starting point, one pixel is extracted every β pixels clockwise along the diamond region boundary, diar_j(α) being the j-th extracted pixel and N the total number of extracted pixels. s_k(diar_j(α)) denotes, under the segmentation result of tuple P_k, the class scale of the rectangular region centered at diar_j(α) with the same width and height as region B; the uncertainty I_DIA(P_k, B) is computed from the class scales s_k(diar_j(α)) sampled along the diamond boundary.
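The clockwise boundary extraction described above can be sketched as follows; the function name and the (x, y) point convention are illustrative assumptions rather than details taken from the patent.

```python
def diamond_boundary_samples(c, alpha, beta):
    """Pixels diar_j(alpha): every beta-th point, clockwise, on the
    diamond |dx| + |dy| == alpha around center c, starting at the
    vertex alpha pixels to the left of c (image y-axis pointing down)."""
    cx, cy = c
    pts = []
    for i in range(alpha):                      # left vertex -> top vertex
        pts.append((cx - alpha + i, cy - i))
    for i in range(alpha):                      # top vertex -> right vertex
        pts.append((cx + i, cy - alpha + i))
    for i in range(alpha):                      # right vertex -> bottom vertex
        pts.append((cx + alpha - i, cy + i))
    for i in range(alpha):                      # bottom vertex -> left vertex
        pts.append((cx - i, cy + alpha - i))
    return pts[::beta]                          # N = number of sample points
```

With α = β = 8, the embodiment's setting, exactly the four diamond vertices are extracted.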
Optionally, the step (3) further includes the steps of:
calculating, according to the following formula, the class scale s_k(x) of the rectangular region R centered at point x under the segmentation result of tuple P_k:

wherein S_k denotes the set of all segmented regions produced by tuple P_k, s_i denotes the i-th segmented region, |s_i \ R| is the number of image pixels of the i-th segmented region falling outside region R, |s_i ∩ R| is the number of image pixels of the i-th segmented region falling inside region R, and |R| is the total number of pixels contained in R.
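The class-scale formula itself does not survive in this text, but the quantities it is defined over (|s_i \ R|, |s_i ∩ R|, |R|) match a superpixel-straddling-style measure; the sketch below is an assumption in that spirit, not the patent's exact formula, and the function name and window convention are invented for illustration.

```python
import numpy as np

def class_scale(labels, R):
    """Assumed straddling-style class scale for window R = (x0, y0, x1, y1)
    (half-open) over a label map `labels` (H x W array of segment ids):
    1 - sum_i min(|s_i \\ R|, |s_i & R|) / |R|, clipped at 0."""
    x0, y0, x1, y1 = R
    window = labels[y0:y1, x0:x1]
    penalty = 0.0
    for s in np.unique(window):
        inside = int(np.count_nonzero(window == s))            # |s_i & R|
        outside = int(np.count_nonzero(labels == s)) - inside  # |s_i \ R|
        penalty += min(inside, outside)
    return max(0.0, 1.0 - penalty / window.size)
```

Under this assumption a window that exactly covers one segmented region scores 1.0, while windows straddling many regions score lower.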
Optionally, in the step (4), the neutrosophic membership, uncertainty and non-membership measures of the specific-region class target based on the square region boundary under each segmentation result are calculated according to the following formulas:

T_SQ(P_k, B) = s_k(c)

F_SQ(P_k, B) = 1 - T_SQ(P_k, B)

wherein T_SQ(P_k, B), I_SQ(P_k, B) and F_SQ(P_k, B) are respectively the measures of neutrosophic membership, uncertainty and non-membership, based on the square region boundary condition, corresponding to tuple P_k and region B; the region B is a particular rectangular region on the image; s_k(c) is the class scale of region B under the segmentation result of tuple P_k, and c is the center point of region B. sqr_j(α, γ) is a pixel on the boundary of the square with center point c and side length 2α + 1; taking the top-left vertex of the square as the starting point, one pixel is extracted every γ pixels clockwise along the square region boundary, sqr_j(α, γ) being the j-th extracted pixel and M the total number of extracted pixels. s_k(sqr_j(α, γ)) denotes, under the segmentation result of tuple P_k, the class scale of the rectangular region centered at sqr_j(α, γ) with the same width and height as region B.
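The square-boundary extraction can be sketched analogously; again the function name and point convention are illustrative assumptions.

```python
def square_boundary_samples(c, alpha, gamma):
    """Pixels sqr_j(alpha, gamma): every gamma-th point, clockwise, on the
    square of side 2*alpha + 1 centered at c, starting at the top-left
    vertex (image y-axis pointing down)."""
    cx, cy = c
    a = alpha
    pts = []
    for i in range(2 * a):                      # top edge: left -> right
        pts.append((cx - a + i, cy - a))
    for i in range(2 * a):                      # right edge: top -> bottom
        pts.append((cx + a, cy - a + i))
    for i in range(2 * a):                      # bottom edge: right -> left
        pts.append((cx + a - i, cy + a))
    for i in range(2 * a):                      # left edge: bottom -> top
        pts.append((cx - a, cy + a - i))
    return pts[::gamma]                         # M = number of sample points
```

With α = 8 and γ = 16, the embodiment's setting, exactly the four corners of the square are extracted.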
Optionally, in the step (5), the neutrosophic similarity for each segmentation result is calculated according to the following formula:

wherein nll(P_k, B) is the neutrosophic similarity between tuple P_k and region B, w_DIA ∈ [0, 1], w_SQ ∈ [0, 1], and w_DIA + w_SQ = 1. The neutrosophic similarities corresponding to the K tuples are calculated respectively, and the tuple P_k with the maximum neutrosophic similarity is the selected segmentation parameter tuple;

wherein T_DIA(P_k, B), I_DIA(P_k, B) and F_DIA(P_k, B) are respectively the measures of neutrosophic membership, uncertainty and non-membership, based on the diamond region boundary condition, corresponding to tuple P_k and the specific region B;

T_SQ(P_k, B), I_SQ(P_k, B) and F_SQ(P_k, B) are respectively the measures of neutrosophic membership, uncertainty and non-membership, based on the square region boundary condition, corresponding to tuple P_k and region B.
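The similarity formula is likewise missing from this text. A common single-valued neutrosophic similarity compares each (T, I, F) triple against the ideal element (1, 0, 0) by cosine; the weighted combination below is a sketch under that assumption, with invented function names, and should not be read as the patent's exact formula.

```python
import math

def neutrosophic_similarity(tif_dia, tif_sq, w_dia=0.5, w_sq=0.5):
    """Assumed nll(P_k, B): weighted cosine similarity of the diamond-based
    and square-based (T, I, F) triples against the ideal element (1, 0, 0),
    with w_dia + w_sq == 1."""
    def cos_to_ideal(t, i, f):
        norm = math.sqrt(t * t + i * i + f * f)
        return t / norm if norm > 0 else 0.0
    return w_dia * cos_to_ideal(*tif_dia) + w_sq * cos_to_ideal(*tif_sq)

def select_tuple(measures):
    """measures: {k: (tif_dia, tif_sq)}; returns the key of the tuple
    P_k maximizing the similarity, as in the selection principle."""
    return max(measures, key=lambda k: neutrosophic_similarity(*measures[k]))
```

The maximization step matches the patent's selection principle even if the inner similarity differs in detail.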
The method for automatically selecting neutrosophic segmentation parameters for strengthening the class-target attribute of a specific region has the following beneficial effects:
(1) The invention provides a parameter selection method based on strengthening the class-target attribute of a specific region, which can be widely used for image segmentation, is applicable to most existing segmentation algorithms, and facilitates method migration.
(2) The method uses neutrosophic theory to introduce uncertainty-aware fuzzy measures of the specific-region class target based on diamond and square regions, strengthening the class-target attribute of the specific region, effectively reducing the instability of maximizing only a single region's class scale, and improving the effectiveness of segmentation parameter selection.
Drawings
FIG. 1 is a flowchart of the method for automatically selecting neutrosophic segmentation parameters for strengthening the class-target attribute of a specific region according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of diamond boundaries according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a square boundary according to an embodiment of the present invention.
Detailed Description
The invention will be further illustrated with reference to the following specific examples. It should be understood that these examples are for illustrative purposes only and are not intended to limit the scope of the present invention. Further, it should be understood that various changes or modifications of the present invention may be made by those skilled in the art after reading the teaching of the present invention, and such equivalents may fall within the scope of the present invention as defined in the appended claims.
As shown in FIG. 1, the present invention relates to an automatic selection method of neutrosophic segmentation parameters for strengthening the class-target attribute of a specific region, which mainly comprises the following steps:
(1) acquiring image data;
(2) selecting a specific area B and determining the position of an area frame;
(3) selecting and initializing a plurality of image segmentation parameter tuples;
(4) respectively calculating corresponding image segmentation results according to the image segmentation parameter tuples;
(5) under each segmentation result, calculating the neutrosophic membership, uncertainty and non-membership of the specific-region class target based on the diamond and square region boundaries;
(6) calculating the neutrosophic similarity of the class-target attribute corresponding to each segmentation result;
(7) according to the principle of neutrosophic similarity maximization, selecting the segmentation parameter tuple corresponding to the maximum neutrosophic similarity as the segmentation parameters for the current image.
In the step (4): for a given segmentation algorithm, such as the efficient graph-based segmentation algorithm (Efficient Graph-Based Image Segmentation), parameters that significantly influence the segmentation result and differ widely are selected to form segmentation parameter tuples; each group of parameters is configured to form a tuple P_k, and K tuples are configured. Based on the K tuples, the image data is read and the corresponding K segmentation results are computed.
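The tuple-configuration loop can be sketched as below. The `segment` stand-in merely quantizes intensities so the sketch is self-contained; a real run would substitute the graph-based segmenter (e.g. `skimage.segmentation.felzenszwalb(image, scale=k, sigma=sigma, min_size=m)`). The tuple values echo the embodiment's settings; everything else is an illustrative assumption.

```python
import numpy as np

# K = 3 parameter tuples P_k, using the embodiment's values
# (k: region-merge control, sigma: Gaussian blur, m: small-region threshold).
PARAM_TUPLES = [
    {"k": 450, "sigma": 0.4, "m": 150},
    {"k": 500, "sigma": 0.5, "m": 200},
    {"k": 550, "sigma": 0.6, "m": 250},
]

def segment(image, k, sigma, m):
    # Stand-in segmenter: quantizes intensity into fewer label bins as k
    # grows. Replace with the real graph-based algorithm for actual use.
    bins = max(2, 2000 // k)
    return np.minimum((image * bins).astype(int), bins - 1)

image = np.random.default_rng(0).random((32, 32))   # toy grayscale frame
segmentations = [segment(image, **p) for p in PARAM_TUPLES]
```

One label map per tuple is then scored in the later steps.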
In the step (5): the neutrosophic membership of the specific-region class target based on the diamond region boundary is T_DIA(P_k, B) = s_k(c), with uncertainty I_DIA(P_k, B) and non-membership F_DIA(P_k, B) = 1 - T_DIA(P_k, B). T_DIA(P_k, B), I_DIA(P_k, B) and F_DIA(P_k, B) are respectively the measures of neutrosophic membership, uncertainty and non-membership, based on the diamond region boundary condition, corresponding to tuple P_k and the region of interest B, where B is a particular rectangular region on the image, s_k(c) is the class scale of region B under the segmentation result of tuple P_k, and c is the center point of B. diar_j(α) is a pixel on the diamond boundary whose vertices lie α pixels to the left of, above, to the right of and below c; the diamond boundary is shown schematically in FIG. 2, where each small square represents a pixel. Taking the diamond vertex to the left of c (the filled square in FIG. 2) as the starting point, one pixel is extracted every β pixels clockwise along the diamond region boundary; diar_j(α) is the j-th extracted pixel and N is the total number of extracted pixels. s_k(diar_j(α)) denotes, under the segmentation result of tuple P_k, the class scale of the rectangular region centered at diar_j(α) with the same width and height as B.
For the class scale of the rectangular region R centered at point x under the segmentation result of tuple P_k: S_k denotes the set of all segmented regions produced by tuple P_k, s_i denotes the i-th segmented region, |s_i \ R| is the number of image pixels of the i-th segmented region falling outside region R, |s_i ∩ R| is the number of image pixels of the i-th segmented region falling inside region R, and |R| is the total number of pixels contained in R.
In the step (5): the neutrosophic membership of the specific-region class target based on the square region boundary under each segmentation result is T_SQ(P_k, B) = s_k(c), with uncertainty I_SQ(P_k, B) and non-membership F_SQ(P_k, B) = 1 - T_SQ(P_k, B). T_SQ(P_k, B), I_SQ(P_k, B) and F_SQ(P_k, B) are respectively the measures of neutrosophic membership, uncertainty and non-membership, based on the square region boundary condition, corresponding to tuple P_k and the region of interest B, where B is a particular rectangular region on the image, s_k(c) is the class scale of region B under the segmentation result of tuple P_k, and c is the center point of B. sqr_j(α, γ) is a pixel on the boundary of the square with center point c and side length 2α + 1; the square boundary is shown schematically in FIG. 3, where each small square represents a pixel. Taking the top-left vertex of the square (the filled square at the top left in FIG. 3) as the starting point, one pixel is extracted every γ pixels clockwise along the square region boundary; sqr_j(α, γ) is the j-th extracted pixel and M is the total number of extracted pixels. s_k(sqr_j(α, γ)) denotes, under the segmentation result of tuple P_k, the class scale of the rectangular region centered at sqr_j(α, γ) with the same width and height as B.
In the step (6): the neutrosophic similarity for each segmentation result is calculated according to the following formula:

wherein nll(P_k, B) is the neutrosophic similarity between tuple P_k and region B, w_DIA ∈ [0, 1], w_SQ ∈ [0, 1], and w_DIA + w_SQ = 1. The neutrosophic similarities corresponding to the K tuples are calculated respectively, and the tuple P_k with the maximum neutrosophic similarity is the selected segmentation parameter tuple.
The invention is further illustrated by the following specific examples.
The method comprises the following steps: and erecting a network camera in the monitoring area, and transmitting the image data acquired by the network camera to a computer terminal in real time.
Step two: the computer terminal reads the image data transmitted by the camera in real time in RGB format, and selects the area B to be strengthened in the image by using a rectangular frame.
Step three: the efficient graph-based segmentation algorithm (Efficient Graph-Based Image Segmentation) is adopted as the image segmentation algorithm of this embodiment, with k, σ and m as its sensitive parameters, where k controls the size of merged regions, σ is the Gaussian kernel used to blur the original image before segmentation, and m is the small-region merging threshold. In this embodiment three image segmentation parameter tuples are selected, namely {k = 450, σ = 0.4, m = 150}, {k = 500, σ = 0.5, m = 200} and {k = 550, σ = 0.6, m = 250}, and the three tuples are used respectively to obtain the corresponding segmentation results.
Step four: the neutrosophic membership under each segmentation result, based on the conditional attribute of the specific-region class target on the diamond region boundary, is T_DIA(P_k, B) = s_k(c), with uncertainty I_DIA(P_k, B) and non-membership F_DIA(P_k, B) = 1 - T_DIA(P_k, B). T_DIA(P_k, B), I_DIA(P_k, B) and F_DIA(P_k, B) are respectively the measures of neutrosophic membership, uncertainty and non-membership, based on the diamond region boundary condition, corresponding to tuple P_k and the region of interest B, where B is a particular rectangular region on the image, s_k(c) is the class scale of region B under the segmentation result of tuple P_k, and c is the center point of B. diar_j(α) is a pixel on the diamond boundary whose vertices lie α pixels to the left of, above, to the right of and below c; taking the diamond vertex to the left of c as the starting point, one pixel is extracted every β pixels clockwise along the diamond boundary, diar_j(α) being the j-th extracted pixel and N the total number of extracted pixels. s_k(diar_j(α)) denotes, under the segmentation result of tuple P_k, the class scale of the rectangular region centered at diar_j(α) with the same width and height as B. Both α and β are set to 8 in this embodiment. The diamond region boundary is shown in FIG. 2.
For the class scale of the rectangular region R centered at point x under the segmentation result of tuple P_k: S_k denotes the set of all segmented regions produced by tuple P_k, s_i denotes the i-th segmented region, |s_i \ R| is the number of image pixels of the i-th segmented region falling outside region R, |s_i ∩ R| is the number of image pixels of the i-th segmented region falling inside region R, and |R| is the total number of pixels contained in R.
Step five: the neutrosophic membership under each segmentation result, based on the conditional attribute of the specific-region class target on the square region boundary, is T_SQ(P_k, B) = s_k(c), with uncertainty I_SQ(P_k, B) and non-membership F_SQ(P_k, B) = 1 - T_SQ(P_k, B). T_SQ(P_k, B), I_SQ(P_k, B) and F_SQ(P_k, B) are respectively the measures of neutrosophic membership, uncertainty and non-membership, based on the square region boundary condition, corresponding to tuple P_k and the region of interest B, where B is a particular rectangular region on the image, s_k(c) is the class scale of region B under the segmentation result of tuple P_k, and c is the center point of B. sqr_j(α, γ) is a pixel on the boundary of the square with center point c and side length 2α + 1; taking the top-left vertex of the square as the starting point, one pixel is extracted every γ pixels clockwise along the square region boundary, sqr_j(α, γ) being the j-th extracted pixel and M the total number of extracted pixels. s_k(sqr_j(α, γ)) denotes, under the segmentation result of tuple P_k, the class scale of the rectangular region centered at sqr_j(α, γ) with the same width and height as B. In this embodiment α is set to 8 and γ to 16. The square region boundary is shown in FIG. 3.
Step six: the neutrosophic similarity is computed for each segmentation result, wherein nll(P_k, B) is the neutrosophic similarity between tuple P_k and region B, w_DIA ∈ [0, 1], w_SQ ∈ [0, 1], and w_DIA + w_SQ = 1. The neutrosophic similarities corresponding to the K tuples are calculated respectively, and the tuple P_k with the maximum neutrosophic similarity is the selected segmentation parameter tuple. In this embodiment both w_DIA and w_SQ are set to 0.5.
In this specification, the invention has been described with reference to specific embodiments thereof. It will, however, be evident that various modifications and changes may be made thereto without departing from the broader spirit and scope of the invention. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense.
Claims (6)
1. An automatic selection method of neutrosophic segmentation parameters for strengthening the class-target attribute of a specific region, characterized by comprising the following steps:
(1) acquiring image data and selecting a specific area B to be enhanced, wherein the area B is a certain specific rectangular area on an image;
(2) setting different image segmentation parameter tuples and calculating segmentation results based on the different parameter tuples on the image data;
(3) for each segmentation result, calculating measures of the neutrosophic membership, uncertainty and non-membership of the class target of the specific region B based on the diamond region boundary;
(4) for each segmentation result, calculating measures of the neutrosophic membership, uncertainty and non-membership of the class target of the specific region B based on the square region boundary;
(5) calculating the neutrosophic similarity between each image segmentation parameter tuple and the specific region B based on the calculation results of step (3) and step (4), and determining the segmentation parameter tuple suited to the current image distribution.
2. The method as claimed in claim 1, wherein the step (2) of setting different image segmentation parameter tuples includes the steps of:
For the segmentation algorithm, parameters are selected to form segmentation parameter tuples; each set of parameters configured forms a tuple P_k, K tuples are configured, and the corresponding K segmentation results are computed based on the K tuples.
3. The method according to claim 2, wherein in the step (3), the measures of the neutrosophic membership, uncertainty and non-membership of the class target of the specific region B based on the diamond region boundary under each segmentation result are calculated according to the following formulas:

T_DIA(P_k, B) = s_k(c)

F_DIA(P_k, B) = 1 - T_DIA(P_k, B)

wherein T_DIA(P_k, B), I_DIA(P_k, B) and F_DIA(P_k, B) are respectively the measures of neutrosophic membership, uncertainty and non-membership, based on the diamond region boundary condition, corresponding to tuple P_k and the specific region B; s_k(c) is the class scale of region B under the segmentation result of tuple P_k, and c is the center point of region B; diar_j(α) is a pixel on the boundary of the diamond region whose vertices lie α pixels to the left of, above, to the right of and below c; taking the diamond vertex to the left of c as the starting point, one pixel is extracted every β pixels clockwise along the diamond region boundary, diar_j(α) being the j-th extracted pixel and N the total number of extracted pixels; s_k(diar_j(α)) denotes, under the segmentation result of tuple P_k, the class scale of the rectangular region centered at diar_j(α) with the same width and height as region B.
4. The method as claimed in claim 3, wherein the step (3) further comprises the following steps:
calculating, according to the following formula, the class scale s_k(x) of the rectangular region R centered at point x under the segmentation result of tuple P_k:

wherein S_k denotes the set of all segmented regions produced by tuple P_k, s_i denotes the i-th segmented region, |s_i \ R| is the number of image pixels of the i-th segmented region falling outside region R, |s_i ∩ R| is the number of image pixels of the i-th segmented region falling inside region R, and |R| is the total number of pixels contained in R.
5. The method as claimed in claim 2, wherein in the step (4), the measures of the neutrosophic membership, uncertainty and non-membership of the class target of the specific region B based on the square region boundary under each segmentation result are calculated according to the following formulas:

T_SQ(P_k, B) = s_k(c)

F_SQ(P_k, B) = 1 - T_SQ(P_k, B)

wherein T_SQ(P_k, B), I_SQ(P_k, B) and F_SQ(P_k, B) are respectively the measures of neutrosophic membership, uncertainty and non-membership, based on the square region boundary condition, corresponding to tuple P_k and region B; the region B is a particular rectangular region on the image; s_k(c) is the class scale of region B under the segmentation result of tuple P_k, and c is the center point of region B; sqr_j(α, γ) is a pixel on the boundary of the square with center point c and side length 2α + 1; taking the top-left vertex of the square as the starting point, one pixel is extracted every γ pixels clockwise along the square region boundary, sqr_j(α, γ) being the j-th extracted pixel and M the total number of extracted pixels; s_k(sqr_j(α, γ)) denotes, under the segmentation result of tuple P_k, the class scale of the rectangular region centered at sqr_j(α, γ) with the same width and height as region B.
6. The method as claimed in claim 4, wherein in the step (5), the neutrosophic similarity between each image segmentation parameter tuple and the specific region B is calculated according to the following formula:

wherein nll(P_k, B) is the neutrosophic similarity between tuple P_k and region B, w_DIA ∈ [0, 1], w_SQ ∈ [0, 1], and w_DIA + w_SQ = 1; the neutrosophic similarities corresponding to the K tuples are calculated respectively, and the tuple P_k with the maximum neutrosophic similarity is the selected segmentation parameter tuple;

wherein T_DIA(P_k, B), I_DIA(P_k, B) and F_DIA(P_k, B) are respectively the measures of neutrosophic membership, uncertainty and non-membership, based on the diamond region boundary condition, corresponding to tuple P_k and the specific region B;

T_SQ(P_k, B), I_SQ(P_k, B) and F_SQ(P_k, B) are respectively the measures of neutrosophic membership, uncertainty and non-membership, based on the square region boundary condition, corresponding to tuple P_k and region B.
Priority application: CN201811633926.9A, filed 2018-12-29 (CN).
Publications: CN109741345A, published 2019-05-10; granted as CN109741345B, 2020-09-15 (status: active).
Family
ID=66362212
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811633926.9A Active CN109741345B (en) | 2018-12-29 | 2018-12-29 | Automatic selection method of intelligent segmentation parameters for strengthening target attributes of specific area classes |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109741345B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111462144B (en) * | 2020-03-30 | 2023-07-21 | 南昌工程学院 | Image segmentation method for rapidly inhibiting image fuzzy boundary based on rough set |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106023245A (en) * | 2016-04-28 | 2016-10-12 | Shaoxing University | Static background moving object detection method based on neutrosophic set similarity measurement |
CN108492313A (en) * | 2018-02-05 | 2018-09-04 | Shaoxing University | Scale-adaptive visual target tracking method based on neutrosophic similarity measure |
US10115197B1 (en) * | 2017-06-06 | 2018-10-30 | Imam Abdulrahman Bin Faisal University | Apparatus and method for lesions segmentation |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103942799B (en) * | 2014-04-25 | 2017-02-01 | Harbin Medical University | Breast ultrasonography image segmentation method and system |
- 2018-12-29: CN application CN201811633926.9A filed; granted as patent CN109741345B (status: Active)
Non-Patent Citations (5)
Title |
---|
A novel object tracking algorithm by fusing color and depth information based on single valued neutrosophic cross-entropy; Keli Hu et al.; Journal of Intelligent & Fuzzy Systems; 2017; Vol. 32, No. 3; pp. 1775-1786 * |
Measuring the Objectness of Image Windows; Bogdan Alexe et al.; IEEE Transactions on Pattern Analysis and Machine Intelligence; Nov. 2012; Vol. 34, No. 11; pp. 2189-2202 * |
Multi-period medical diagnosis method using a single valued neutrosophic similarity measure based on tangent function; Jun Ye et al.; Computer Methods and Programs in Biomedicine; 2016; pp. 142-149 * |
Neutrosophic Similarity Score Based Weighted Histogram for Robust Mean-Shift Tracking; Keli Hu et al.; Information; Oct. 2017; Vol. 8, No. 4; pp. 1-13 * |
Scale-adaptive visual target tracking algorithm based on neutrosophic weighted similarity measure; Hu Keli et al.; Telecommunications Science; May 2018, No. 5; pp. 50-62 * |
Also Published As
Publication number | Publication date |
---|---|
CN109741345A (en) | 2019-05-10 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Zhu et al. | A three-pathway psychobiological framework of salient object detection using stereoscopic technology | |
CN106875406B (en) | Image-guided video semantic object segmentation method and device | |
Li et al. | Robust visual tracking based on convolutional features with illumination and occlusion handing | |
Rahtu et al. | Learning a category independent object detection cascade | |
CN109903331B (en) | Convolutional neural network target detection method based on RGB-D camera | |
JP6330385B2 (en) | Image processing apparatus, image processing method, and program | |
Li et al. | A correlative classifiers approach based on particle filter and sample set for tracking occluded target | |
CN110334762B (en) | Feature matching method based on quad tree combined with ORB and SIFT | |
CN109255375A (en) | Panoramic image object detection method based on deep learning | |
WO2019071976A1 (en) | Panoramic image saliency detection method based on regional growth and eye movement model | |
CN109272016A (en) | Object detection method, device, terminal device and computer readable storage medium | |
Gui et al. | A new method for soybean leaf disease detection based on modified salient regions | |
CN107886507B (en) | Salient region detection method based on image background and spatial position | |
CN106504255A (en) | Multi-target image joint segmentation method based on multi-label multi-instance learning | |
CN107944437B (en) | Face detection method based on neural network and integral image | |
CN109685045A (en) | Video stream moving target tracking method and system | |
CN111583220A (en) | Image data detection method and device | |
CN110827312A (en) | Learning method based on cooperative visual attention neural network | |
CN109271848A (en) | Face detection method, face detection device, and storage medium | |
CN105184771A (en) | Adaptive moving target detection system and detection method | |
CN115937552A (en) | Image matching method based on fusion of manual features and depth features | |
CN107392211B (en) | Salient target detection method based on visual sparse cognition | |
CN111160107B (en) | Dynamic region detection method based on feature matching | |
CN108846845B (en) | SAR image segmentation method based on thumbnail and hierarchical fuzzy clustering | |
CN109741345B (en) | Automatic selection method of intelligent segmentation parameters for strengthening target attributes of specific area classes |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||