CN114266298B - Image segmentation method and system based on consistent manifold approximation and projection clustering integration

Info

Publication number
CN114266298B
Authority
CN
China
Legal status
Active
Application number
CN202111540358.XA
Other languages
Chinese (zh)
Other versions
CN114266298A (en)
Inventor
王江峰
徐森
徐秀芳
花小朋
皋军
安晶
嵇宏伟
姜陈雨
陆湘文
陈思博
蔡娜
Current Assignee
Yancheng Institute of Technology
Yancheng Institute of Technology Technology Transfer Center Co Ltd
Original Assignee
Yancheng Institute of Technology
Yancheng Institute of Technology Technology Transfer Center Co Ltd
Priority date
Filing date
Publication date
Application filed by Yancheng Institute of Technology, Yancheng Institute of Technology Technology Transfer Center Co Ltd filed Critical Yancheng Institute of Technology
Priority to CN202111540358.XA priority Critical patent/CN114266298B/en
Publication of CN114266298A publication Critical patent/CN114266298A/en
Application granted granted Critical
Publication of CN114266298B publication Critical patent/CN114266298B/en

Landscapes

  • Image Analysis (AREA)

Abstract

The invention provides an image segmentation method and system based on consistent manifold approximation and projection (UMAP) clustering integration, wherein the method comprises the following steps: step S1: acquiring original image data; step S2: preprocessing the original image data; step S3: reducing the dimension of the preprocessing result based on consistent manifold approximation and projection; step S4: performing clustering integration on the dimension-reduction result to obtain a first image segmentation result and outputting the first image segmentation result. According to the image segmentation method and system, the original image data are reduced in dimension through consistent manifold approximation and projection and the dimension-reduction result is clustered and integrated, which improves both the segmentation speed and the segmentation quality when an image is segmented.

Description

Image segmentation method and system based on consistent manifold approximation and projection clustering integration
Technical Field
The invention relates to the technical field of image processing, in particular to an image segmentation method and system based on uniform manifold approximation and projection clustering integration.
Background
As a basis for computer vision and image processing applications, image segmentation divides an image into several regions with specific, unique attributes: pixels within the same region are strongly similar, while pixels in different regions differ greatly. Image segmentation is also an important step toward further understanding of images.
Many image segmentation methods have been designed and applied, including segmentation based on edge detection, region-based segmentation, and segmentation based on specific theories;
however, when an image is segmented with these methods, problems such as a low segmentation speed and insufficient segmentation quality remain;
therefore, a solution is needed.
Disclosure of Invention
One of the purposes of the invention is to provide an image segmentation method and system based on consistent manifold approximation and projection clustering integration, which are used for carrying out dimension reduction on original image data through consistent manifold approximation and projection, carrying out clustering integration on dimension reduction results after dimension reduction, and improving the segmentation speed and the segmentation quality when an image is segmented.
The embodiment of the invention provides an image segmentation method based on consistent manifold approximation and projection clustering integration, which comprises the following steps:
step S1: acquiring original image data;
step S2: preprocessing original image data;
step S3: reducing the dimension of the preprocessing result based on consistent manifold approximation and projection;
step S4: and carrying out clustering integration on the dimensionality reduction result to obtain a first image segmentation result, and outputting the first image segmentation result.
Preferably, step S2: pre-processing raw image data, comprising:
step S201: filtering the original image data;
step S202: denoising the filtering result;
step S203: carrying out gray level correction on the denoising result;
step S204: and converting the gray correction result into a digital form to obtain a preprocessing result.
Preferably, in step S4, performing cluster integration on the dimensionality reduction result to obtain a first image segmentation result, including:
step S401: clustering the dimensionality reduction result by using a K-means algorithm to obtain l cluster members;
step S402: selecting a preset number of cluster members from the l cluster members;
step S403: and carrying out clustering integration on the selected clustering members to obtain a first image segmentation result.
Preferably, before outputting the first image segmentation result in step S4, the method further includes:
training an image segmentation result correction model, and correcting the first image segmentation result based on the image segmentation result correction model;
the training image segmentation result correction model comprises the following steps:
acquiring a plurality of correction records for manually correcting a plurality of second image segmentation results;
extracting a first entry from the revised record;
acquiring a recorder type of a recorder recording a first record item, wherein the recorder type comprises: human and machine;
when the recorder type of the recorder recording the first record item is human, acquiring a first recording time range and a first recording scene of the recorder recording the first record item;
inquiring a preset recording scene library, determining a plurality of first recording scenes which occur in a first recording time range and correspond to a first recording scene, and simultaneously acquiring generation time points of the first recording scenes;
acquiring a preset field simulation space, and sequentially mapping a first recording field in the field simulation space according to time sequence based on a generation time point to obtain a dynamic recording field model;
determining a first human body model corresponding to a recording party in the dynamic model of the recording site;
identifying, by a behavior identification technique, a plurality of first behaviors of a first human body model generated within a record field dynamic model;
acquiring the record type of the first record item, and acquiring a preset behavior value degree judgment library corresponding to the record type;
judging a first value degree of the first behavior based on the behavior value degree judging library;
if the first value degree is larger than or equal to a preset first value degree threshold, taking the corresponding first behavior as a second behavior;
analyzing the second behavior by an attention analysis technology, and analyzing at least one second human body model which is focused by the first human body model in the recording field dynamic model and a first concentration ratio of focusing attention on the second human body model;
acquiring a first identity of the second human body model, and acquiring a person recording the first record item;
acquiring a second identity of the person to be recorded;
pairing the first identity with the second identity one by one, and if the first identity and the second identity cannot be successfully paired one by one, rejecting the corresponding first record item;
otherwise, counting the number of the models of the second human body model;
when the number of the models is 1, if the first concentration is less than or equal to a preset first concentration threshold value, rejecting a corresponding first record item;
when the number of the models is larger than 1, acquiring the contribution degree of each recorded person corresponding to the first record item;
acquiring a second concentration threshold corresponding to the contribution degree;
determining a first concentration corresponding to the record person based on the first identity and the second identity, and taking the first concentration as a second concentration;
if the second concentration is smaller than the corresponding second concentration threshold, rejecting the corresponding first record item;
when the recorder type of the recorder recording the first recording item is a machine, acquiring a second recording time range and a second recording scene of the recorder recording the first recording item;
querying a preset credibility dynamic evaluation library, and determining a plurality of dynamic credibility of the second recording scene within the second recording time range;
if the dynamic credibility is less than or equal to a preset credibility threshold, rejecting the corresponding first record item;
when the first record items needing to be removed have all been removed from the correction record, the first record items remaining in the correction record serve as second record items;
integrating the second record items to obtain a record to be evaluated;
acquiring a preset record value degree evaluation model, and evaluating the value degree of the record to be evaluated based on the record value degree evaluation model to obtain a second value degree;
if the second value degree is greater than or equal to a preset second value degree threshold, rejecting the corresponding correction record, and taking the corresponding record to be evaluated as a first training sample;
otherwise, only removing the corresponding correction record;
when all correction records needing to be removed are removed, taking the removed residual correction records as a second training sample;
and acquiring a preset neural network training model, inputting the first training sample and the second training sample into the neural network training model for model training, and acquiring an image segmentation result correction model.
Preferably, obtaining a plurality of correction records for manually correcting the segmentation results of the plurality of second images includes:
acquiring a preset acquisition path set, wherein the acquisition path set comprises: a plurality of first acquisition paths;
acquiring a plurality of first sources corresponding to a first acquisition path;
acquiring a credit record corresponding to a first source;
analyzing the content of the credit record to obtain a credit value;
if the credit value is less than or equal to a preset credit value threshold value, rejecting the corresponding first source;
when the first sources needing to be removed have all been removed, the first sources remaining after the removal are used as second sources;
acquiring a first guarantee circle corresponding to a second source;
randomly selecting n second sources and combining the second sources into a judgment target set;
judging whether second sources in the target set are all in any first guarantee ring or not, and if so, taking the corresponding first guarantee ring as a second guarantee ring;
acquiring a preset first risk identification model, inputting a second source and a second guarantee ring in a judgment target set into the first risk identification model, and acquiring a first risk degree corresponding to the second source;
after the second source is randomly selected, summarizing first risk degrees corresponding to the second source to obtain a first risk degree sum;
if the first risk degree sum is larger than or equal to a preset first risk degree sum threshold value, rejecting a corresponding second source;
when the second sources needing to be removed have all been removed, the second sources remaining after the removal are used as third sources;
obtaining a providing strategy corresponding to a third source;
carrying out strategy splitting on a providing strategy to obtain a plurality of first strategy items;
randomly selecting a first strategy item, and extracting content features to obtain a plurality of first content features;
acquiring a preset risk content feature library, matching the first content features with the first risk content features in the risk content feature library, taking the corresponding first strategy item as a second strategy item when the matching is in accordance, taking the matched first content features as second content features, and simultaneously taking the matched first risk content features as second risk content features;
acquiring a plurality of first associated risk content characteristics corresponding to the second risk content characteristics;
matching third content features except the second content features in the first content features with the first associated risk content features, taking the corresponding first strategy item as a third strategy item when matching is met every time, and taking the matched first associated risk content features as second associated risk content features;
acquiring a preset providing simulation environment corresponding to the second risk content characteristic and the second associated risk content characteristic;
acquiring a first execution scene corresponding to a second strategy item, and acquiring a second execution scene corresponding to a third strategy item;
mapping the first execution scenario and the second execution scenario respectively within a rendering simulation environment;
performing simulated offering within the offered simulated environment based on the second policy item and the third policy item;
in the process of providing simulation, acquiring a preset second risk identification model, and attempting to identify a providing risk generated in a providing simulation environment based on the second risk identification model;
if the identification is successful, acquiring a second risk degree corresponding to the identified risk type providing the risk, and associating the second risk degree with a corresponding third source;
after the first strategy item is randomly selected, summarizing a second risk degree associated with a third source to obtain a second risk degree sum;
if the second risk degree sum is greater than or equal to a preset second risk degree sum threshold, rejecting the corresponding third source;
when all third sources needing to be removed in the third sources are removed, taking the remaining third sources as fourth sources, and meanwhile, counting the number of the fourth sources;
when the number of the sources is 0, rejecting the corresponding first acquisition path;
otherwise, the fourth source is corresponded to the corresponding first acquisition path again to obtain a second acquisition path;
and acquiring a plurality of correction records for manually correcting the segmentation results of the second images through a second acquisition path.
The embodiment of the invention provides an image segmentation system based on the integration of consistent manifold approximation and projection clustering, which comprises:
the acquisition module is used for acquiring original image data;
the preprocessing module is used for preprocessing the original image data;
the dimensionality reduction module is used for reducing dimensionality of the preprocessing result based on the consistent manifold approximation and projection;
and the clustering integration module is used for clustering and integrating the dimensionality reduction result to obtain a first image segmentation result and outputting the first image segmentation result.
Preferably, the preprocessing module performs the following operations:
filtering the original image data;
denoising the filtering result;
carrying out gray level correction on the denoising result;
and converting the gray correction result into a digital form to obtain a preprocessing result.
Preferably, the clustering module performs the following operations:
clustering the dimensionality reduction result by using a K-means algorithm to obtain l cluster members;
selecting a preset number of cluster members from the l cluster members;
and carrying out clustering integration on the selected clustering members to obtain a first image segmentation result.
Preferably, the clustering module performs the following operations:
training an image segmentation result correction model, and correcting the first image segmentation result based on the image segmentation result correction model;
the training image segmentation result correction model comprises the following steps:
acquiring a plurality of correction records for manually correcting a plurality of second image segmentation results;
extracting a first entry from the revised record;
acquiring a recorder type of a recorder recording a first record item, wherein the recorder type comprises: human and machine;
when the recorder type of the recorder recording the first record item is human, acquiring a first recording time range and a first recording scene of the recorder recording the first record item;
inquiring a preset recording site library, determining a plurality of first recording sites which occur within a first recording time range and correspond to a first recording scene, and simultaneously acquiring generation time points of the first recording sites;
acquiring a preset field simulation space, and sequentially mapping a first recording field in the field simulation space according to time sequence based on a generation time point to obtain a dynamic recording field model;
determining a first human body model corresponding to a recording party in the dynamic model of the recording site;
identifying, by a behavior identification technique, a plurality of first behaviors of a first human body model generated within a recorded live dynamic model;
acquiring the record type of the first record item, and acquiring a preset behavior value degree judgment library corresponding to the record type;
judging a first value degree of the first behavior based on the behavior value degree judging library;
if the first value degree is larger than or equal to a preset first value degree threshold, taking the corresponding first behavior as a second behavior;
analyzing the second behavior by an attention analysis technology, and analyzing at least one second human body model which is focused by the first human body model in the recording field dynamic model and a first concentration ratio of focusing attention on the second human body model;
acquiring a first identity of the second human body model, and acquiring a person recording the first record item;
acquiring a second identity of the person to be recorded;
pairing the first identity with the second identity one by one, and if the first identity and the second identity cannot be successfully paired one by one, rejecting the corresponding first record item;
otherwise, counting the number of models of the second human body model;
when the number of the models is 1, if the first concentration is less than or equal to a preset first concentration threshold value, rejecting a corresponding first record item;
when the number of the models is larger than 1, acquiring the contribution degree of each recorded person corresponding to the first record item;
acquiring a second concentration threshold corresponding to the contribution degree;
determining a first concentration corresponding to the record persons based on the first identity and the second identity, and taking the first concentration as a second concentration;
if the second concentration is smaller than the corresponding second concentration threshold, rejecting the corresponding first record item;
when the recorder type of the recorder recording the first recording item is a machine, acquiring a second recording time range and a second recording scene of the recorder recording the first recording item;
querying a preset credibility dynamic evaluation library, and determining a plurality of dynamic credibility of the second recording scene within the second recording time range;
if the dynamic credibility is less than or equal to a preset credibility threshold, rejecting the corresponding first record item;
when the first record items needing to be removed have all been removed from the correction record, the first record items remaining in the correction record are used as second record items;
integrating the second record item to obtain a record to be evaluated;
acquiring a preset record value degree evaluation model, and evaluating the value degree of the record to be evaluated based on the record value degree evaluation model to obtain a second value degree;
if the second value degree is greater than or equal to a preset second value degree threshold, rejecting the corresponding correction record, and taking the corresponding record to be evaluated as a first training sample;
otherwise, only rejecting the corresponding correction record;
when all correction records needing to be removed are removed, taking the removed residual correction records as a second training sample;
and acquiring a preset neural network training model, inputting the first training sample and the second training sample into the neural network training model for model training, and acquiring an image segmentation result correction model.
Preferably, the clustering module performs the following operations:
acquiring a preset acquisition path set, wherein the acquisition path set comprises: a plurality of first acquisition paths;
acquiring a plurality of first sources corresponding to a first acquisition path;
acquiring a credit record corresponding to a first source;
analyzing the content of the credit record to obtain a credit value;
if the credit value is less than or equal to a preset credit value threshold value, rejecting the corresponding first source;
when the first sources needing to be removed have all been removed, the first sources remaining after the removal are used as second sources;
acquiring a first guarantee circle corresponding to a second source;
randomly selecting n second sources and combining the second sources into a judgment target set;
judging whether second sources in the target set are all in any first guarantee ring, and if so, taking the corresponding first guarantee ring as a second guarantee ring;
acquiring a preset first risk identification model, inputting a second source and a second guarantee ring in a judgment target set into the first risk identification model, and acquiring a first risk degree corresponding to the second source;
after the second source is randomly selected, summarizing first risk degrees corresponding to the second source to obtain a first risk degree sum;
if the first risk degree sum is larger than or equal to a preset first risk degree sum threshold value, rejecting a corresponding second source;
when the second sources needing to be removed have all been removed, the second sources remaining after the removal are used as third sources;
obtaining a providing strategy corresponding to a third source;
carrying out strategy splitting on a provided strategy to obtain a plurality of first strategy items;
randomly selecting a first strategy item, and extracting content features to obtain a plurality of first content features;
acquiring a preset risk content feature library, matching the first content features with the first risk content features in the risk content feature library, taking the corresponding first strategy item as a second strategy item when matching is consistent, taking the matched first content features as second content features, and simultaneously taking the matched first risk content features as second risk content features;
acquiring a plurality of first associated risk content characteristics corresponding to the second risk content characteristics;
matching third content features except the second content features in the first content features with the first associated risk content features, taking the corresponding first strategy item as a third strategy item when matching is met every time, and taking the matched first associated risk content features as second associated risk content features;
acquiring a preset providing simulation environment corresponding to the second risk content characteristic and the second associated risk content characteristic;
acquiring a first execution scene corresponding to a second strategy item, and acquiring a second execution scene corresponding to a third strategy item;
mapping the first execution scenario and the second execution scenario respectively within a rendering simulation environment;
performing simulated offering within the offered simulated environment based on the second policy item and the third policy item;
in the process of providing simulation, acquiring a preset second risk identification model, and attempting to identify a providing risk generated in a providing simulation environment based on the second risk identification model;
if the identification is successful, acquiring a second risk degree corresponding to the identified risk type providing the risk, and associating the second risk degree with a corresponding third source;
after the first strategy item is randomly selected, summarizing a second risk degree associated with a third source to obtain a second risk degree sum;
if the second risk degree sum is greater than or equal to a preset second risk degree sum threshold, rejecting the corresponding third source;
when all third sources needing to be removed in the third sources are removed, taking the remaining third sources as fourth sources, and meanwhile, counting the number of the fourth sources;
when the number of the sources is 0, rejecting the corresponding first acquisition path;
otherwise, the fourth source and the corresponding first acquisition path are corresponded again to obtain a second acquisition path;
and acquiring a plurality of correction records for manually correcting the second image segmentation results through a second acquisition path.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
The technical solution of the present invention is further described in detail by the accompanying drawings and embodiments.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention and not to limit the invention. In the drawings:
FIG. 1 is a flowchart of an image segmentation method based on consistent manifold approximation and projection clustering integration according to an embodiment of the present invention;
FIG. 2 is a flowchart of another image segmentation method based on consistent manifold approximation and projection clustering integration according to an embodiment of the present invention;
FIG. 3 is a flowchart of another image segmentation method based on consistent manifold approximation and projection clustering integration according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of cluster integration in an embodiment of the present invention;
FIG. 5 is a schematic diagram of an image segmentation system based on consistent manifold approximation and projection clustering integration according to an embodiment of the present invention.
Detailed Description
The preferred embodiments of the present invention will be described in conjunction with the accompanying drawings, and it will be understood that they are described herein for the purpose of illustration and explanation and not limitation.
The embodiment of the invention provides an image segmentation method based on consistent manifold approximation and projection clustering integration, as shown in fig. 1, the method comprises the following steps:
step S1: acquiring original image data;
step S2: preprocessing original image data;
step S3: reducing the dimension of the preprocessing result based on consistent manifold approximation and projection;
step S4: and performing clustering integration on the dimensionality reduction result to obtain a first image segmentation result, and outputting the first image segmentation result.
The working principle and the beneficial effects of the technical scheme are as follows:
original image data are acquired (for example, a user inputs an original image that needs to be segmented) and preprocessed (filtering, denoising and the like); the preprocessing result is reduced in dimension based on consistent manifold approximation and projection (UMAP, a dimension-reduction technique); the dimension-reduction result is then clustered and integrated (a rough clustering is initialized on the dimension-reduction result, pixels with similar characteristics such as color, brightness and texture are iteratively assigned to the same superpixel until convergence, yielding one image segmentation result; the clustering process is repeated several times and the resulting segmentations are integrated into a final image segmentation result), which yields the image segmentation result and completes the image segmentation;
UMAP is built on the theoretical framework of Riemannian geometry and algebraic topology and rests on a solid mathematical foundation, unlike purely semi-empirical machine learning algorithms. Compared with other manifold learning methods, UMAP not only expresses local structure accurately but also preserves global structure well, and it offers high visualization quality, strong scalability and no computational restriction on the embedding dimension. UMAP constructs a topological representation of the high-dimensional data using local manifold approximations and stitches together their local fuzzy simplicial set representations. A similar process is used to construct an equivalent topological representation of the low-dimensional data. UMAP then optimizes the layout of the data representation in the low-dimensional space to minimize the cross entropy between the two topological representations.
The theoretical basis of UMAP is the fuzzy simplicial set; from a practical computational point of view it can be described in terms of the construction and manipulation of weighted graphs, similar to other graph algorithms based on k-neighborhoods. Algorithms of this type run in two phases: the first phase constructs a weighted k-neighborhood graph; the second phase computes a low-dimensional layout of this graph and optimizes it with respect to cross entropy so that it is as close as possible to the fuzzy topological representation of the original data.
For simplicity, the low-dimensional embedding is denoted by H = {h_1, h_2, ..., h_n} and the UMAP manifold embedding representation by Z = {z_1, z_2, ..., z_n}.
Let M be the manifold on which the data is assumed to lie, and let g be its Riemannian metric, so that for each point p ∈ M there is an inner product on the tangent space. The geodesic distance from a point x_i to its neighbors is approximated by normalizing distances by the distance from x_i to its nearest neighbor. Creating such a custom distance for each x_i ensures that the data can be treated as uniformly distributed on the manifold. This yields a family of discrete metric spaces (one for each x_i), which are merged into a consistent global structure by converting each metric space into a fuzzy simplicial set. Simplicial sets provide a combinatorial way of studying topological spaces and are easy to define abstractly.
The fuzzy sets of the two spaces share the same reference set, denoted by A. The fuzzy set of the low-dimensional embedding is written (A, u) and the fuzzy set of the manifold embedding representation is written (A, v), where u and v are the corresponding membership functions. The weighted graph (i.e., the fuzzy set) is constructed as follows: given an input hyperparameter k, for each h_i we compute, under the metric d, its k nearest neighbors {h_{i_1}, h_{i_2}, ..., h_{i_k}}; this computation can be performed with any exact or approximate nearest-neighbor search algorithm.
The weighted directed graph is defined as G = (V, E, w), whose vertex set V is H and whose directed edge set is
E = {(h_i, h_{i_j}) | 1 ≤ j ≤ k, 1 ≤ i ≤ n}.
The weight function w is defined as follows:
w(h_i, h_{i_j}) = exp(−max(0, d(h_i, h_{i_j}) − ρ_i) / σ_i),
where ρ_i and σ_i are respectively defined as follows:
ρ_i = min{ d(h_i, h_{i_j}) | 1 ≤ j ≤ k, d(h_i, h_{i_j}) > 0 },
and σ_i is the value satisfying
Σ_{j=1}^{k} exp(−max(0, d(h_i, h_{i_j}) − ρ_i) / σ_i) = log_2(k).
let M be a weighted directed graph
Figure BDA0003413827290000136
The symmetric matrix N of the weighted adjacency matrix of (1) can be obtained by:
Figure BDA0003413827290000139
wherein
Figure BDA00034138272900001310
Representing a hadamard product operation. Diagram of UMAP architecture
Figure BDA0003413827290000137
Is an undirected weighted graph whose adjacency matrix is given by N.
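The graph-construction stage described above can be sketched in a few lines of NumPy. The sketch below is illustrative only, assuming a small dense data matrix: it finds the k nearest neighbors of each point, derives ρ_i and σ_i as defined above (σ_i by a simple binary search against the log_2(k) target), and symmetrizes the weighted adjacency matrix; the reference implementation in the umap-learn package uses approximate nearest-neighbor search instead, and all parameter values here are assumptions.

```python
# Minimal NumPy sketch of the fuzzy k-NN graph construction (illustrative only).
import numpy as np

def fuzzy_knn_graph(H, k=15, n_iter=64):
    n = H.shape[0]
    # dense pairwise Euclidean distances (fine for a small example)
    D = np.linalg.norm(H[:, None, :] - H[None, :, :], axis=-1)
    np.fill_diagonal(D, np.inf)
    knn_idx = np.argsort(D, axis=1)[:, :k]               # k nearest neighbors of each h_i
    knn_dist = np.take_along_axis(D, knn_idx, axis=1)

    M = np.zeros((n, n))                                  # weighted adjacency of the directed graph
    for i in range(n):
        d = knn_dist[i]
        pos = d[d > 0]
        rho = pos.min() if pos.size else 0.0              # rho_i: distance to the nearest neighbor
        lo, hi, sigma, target = 0.0, np.inf, 1.0, np.log2(k)
        for _ in range(n_iter):                           # binary search for sigma_i so that
            s = np.exp(-np.maximum(d - rho, 0) / sigma).sum()   # the weights sum to log2(k)
            if abs(s - target) < 1e-5:
                break
            if s > target:
                hi = sigma
                sigma = (lo + hi) / 2
            else:
                lo = sigma
                sigma = sigma * 2 if hi == np.inf else (lo + hi) / 2
        M[i, knn_idx[i]] = np.exp(-np.maximum(d - rho, 0) / sigma)

    # symmetrization: N = M + M^T - M ∘ M^T
    N = M + M.T - M * M.T
    return N
```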
In the second stage, computing the low-dimensional graph layout, UMAP uses a force-directed graph layout algorithm in the low-dimensional space. The algorithm iteratively applies attractive and repulsive forces along each edge and at each vertex. The manifold embedding is optimized by minimizing the cross entropy of the two fuzzy sets; the objective function is defined as follows:
C((A, v), (A, u)) = Σ_{a∈A} [ v(a) log(v(a) / u(a)) + (1 − v(a)) log((1 − v(a)) / (1 − u(a))) ].
The first term acts locally and identifies the natural clusters in the data set (edges with large weights dominate it). The second term acts globally and keeps dissimilar data points far apart in the latent space (edges with small weights dominate it). It follows from this objective that UMAP can not only identify natural clusters in the data set, but also provide global structural information in the latent space, i.e., infer the similarity and difference between clusters from how close the clusters lie in the latent space.
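For clarity, the fuzzy-set cross entropy that the layout optimization minimizes can be written as a short function; the array names v (manifold membership strengths) and u (low-dimensional membership strengths) mirror the notation above, and the clipping constant is an assumption added only to keep the logarithms finite.

```python
# Small sketch of the fuzzy-set cross entropy objective described above.
import numpy as np

def fuzzy_cross_entropy(v, u, eps=1e-12):
    v = np.clip(v, eps, 1 - eps)
    u = np.clip(u, eps, 1 - eps)
    attract = v * np.log(v / u)                    # local term: pulls similar points together
    repel = (1 - v) * np.log((1 - v) / (1 - u))    # global term: pushes dissimilar points apart
    return np.sum(attract + repel)
```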
UMAP uses stochastic gradient descent (SGD) instead of conventional gradient descent (GD) to embed the high-dimensional image data into the low-dimensional space, so that low-dimensional embedding points corresponding to images with low similarity in the high-dimensional space lie far apart while points corresponding to highly similar images lie close together; compared with other algorithms such as t-SNE, this both accelerates the computation and reduces memory consumption, thereby improving the segmentation speed;
an ensemble learning method is used in the clustering integration; ensemble learning improves prediction accuracy and reduces generalization error, so the image can be classified and segmented accurately; rather than attempting to create a single optimal learner, ensemble learning generates a set of weak base learners, classifies parts of the image with each of them, and combines their outputs, which improves the segmentation quality;
the embodiment of the invention reduces the dimension of the original image data through consistent manifold approximation and projection, and performs clustering integration on the dimension reduction result after dimension reduction, thereby improving the segmentation speed and the segmentation quality when segmenting the image.
The embodiment of the invention provides an image segmentation method based on consistent manifold approximation and projection clustering integration, as shown in fig. 2, the method comprises the following steps:
step S201: filtering the original image data;
step S202: denoising the filtering result;
step S203: carrying out gray level correction on the denoising result;
step S204: and converting the gray correction result into a digital form to obtain a preprocessing result.
The working principle and the beneficial effects of the technical scheme are as follows:
during filtering, mean filtering replaces each pixel of the original image data with the average of that pixel and its surrounding pixels, removing high-frequency signals; denoising can be performed with the NL-Means algorithm; gray-level correction improves the image quality so that the image is displayed more clearly; finally, the gray-correction result is converted into digital form.
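As an illustration of steps S201 to S204, the following OpenCV-based sketch applies mean filtering, NL-Means denoising, a gamma-type gray-level correction and conversion to a normalized numeric array; the kernel size, filter strength and gamma value are assumptions chosen for demonstration, not values prescribed by the patent.

```python
# Illustrative preprocessing sketch for steps S201-S204 (parameters are assumptions).
import cv2
import numpy as np

def preprocess(path, gamma=1.2):
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)             # original image data
    filtered = cv2.blur(img, (3, 3))                         # S201: mean filtering
    denoised = cv2.fastNlMeansDenoising(filtered, None, 10)  # S202: NL-Means denoising
    # S203: gray-level correction via a gamma lookup table
    table = np.array([(i / 255.0) ** (1.0 / gamma) * 255 for i in range(256)],
                     dtype=np.uint8)
    corrected = cv2.LUT(denoised, table)
    return corrected.astype(np.float32) / 255.0              # S204: digital (numeric) form
```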
The embodiment of the invention provides an image segmentation method based on consistent manifold approximation and projection clustering integration, which comprises the following steps of:
step S401: clustering the dimensionality reduction result by using a K-means algorithm to obtain l cluster members;
step S402: selecting a preset number of cluster members from the l cluster members;
step S403: and carrying out clustering integration on the selected clustering members to obtain a first image segmentation result.
The working principle and the beneficial effects of the technical scheme are as follows:
clustering the dimensionality reduction result by using a K-means algorithm, and obtaining l cluster members as shown in figure 4; in order to ensure high quality and diversity of the cluster members, a preset number (for example, 20) of cluster members are selected from the cluster members for cluster integration, and then an image segmentation result (clustering result) is obtained.
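The member generation, selection and integration of steps S401 to S403 can be sketched as follows; ranking members by silhouette score is one plausible way to realize the "high quality" selection (the patent does not fix the criterion), and majority voting after Hungarian label alignment is likewise an illustrative integration scheme.

```python
# Illustrative sketch of steps S401-S403 on the dimension-reduction result X.
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def cluster_ensemble(X, k=5, l=30, keep=20):
    members = [KMeans(n_clusters=k, n_init=5, random_state=i).fit_predict(X)
               for i in range(l)]                       # S401: l cluster members
    members.sort(key=lambda lab: silhouette_score(X, lab), reverse=True)
    members = members[:keep]                            # S402: preset number of members

    reference = members[0]
    votes = np.zeros((X.shape[0], k))
    for labels in members:                              # S403: clustering integration
        # align this member's labels to the reference member (Hungarian algorithm)
        cost = -np.array([[np.sum((labels == a) & (reference == b)) for b in range(k)]
                          for a in range(k)])
        _, mapping = linear_sum_assignment(cost)
        votes[np.arange(X.shape[0]), mapping[labels]] += 1
    return votes.argmax(axis=1)                         # first image segmentation result
```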
The embodiment of the invention provides an image segmentation method based on the integration of consistent manifold approximation and projection clustering, wherein in step S4, before outputting a first image segmentation result, the method further comprises:
training an image segmentation result correction model, and correcting the first image segmentation result based on the image segmentation result correction model;
the training image segmentation result correction model comprises the following steps:
acquiring a plurality of correction records for manually correcting a plurality of second image segmentation results;
extracting a first entry from the revised record;
acquiring a recorder type of a recorder recording a first record item, wherein the recorder type comprises: human and machine;
when the recorder type of the recorder recording the first record item is human, acquiring a first recording time range and a first recording scene of the recorder recording the first record item;
inquiring a preset recording site library, determining a plurality of first recording sites which occur within a first recording time range and correspond to a first recording scene, and simultaneously acquiring generation time points of the first recording sites;
acquiring a preset field simulation space, and sequentially mapping a first recording field in the field simulation space according to time sequence based on a generation time point to obtain a dynamic recording field model;
determining a first human body model corresponding to a recording party in the dynamic model of the recording site;
identifying, by a behavior identification technique, a plurality of first behaviors of a first human body model generated within a recorded live dynamic model;
acquiring the record type of the first record item, and acquiring a preset behavior value degree judgment library corresponding to the record type;
judging a first value degree of the first behavior based on the behavior value degree judging library;
if the first value degree is larger than or equal to a preset first value degree threshold, taking the corresponding first behavior as a second behavior;
analyzing the second behavior by an attention analysis technology, and analyzing at least one second human body model which is focused by the first human body model in the recording field dynamic model and a first concentration ratio of focusing attention on the second human body model;
acquiring a first identity of the second human body model, and acquiring a person recording the first record item;
acquiring a second identity of the person to be recorded;
pairing the first identity with the second identity one by one, and if the first identity and the second identity cannot be successfully paired one by one, rejecting the corresponding first record item;
otherwise, counting the number of the models of the second human body model;
when the number of the models is 1, if the first concentration is less than or equal to a preset first concentration threshold value, rejecting a corresponding first record item;
when the number of the models is larger than 1, acquiring the contribution degree of each recorded person corresponding to the first record item;
acquiring a second concentration threshold corresponding to the contribution degree;
determining a first concentration corresponding to the record person based on the first identity and the second identity, and taking the first concentration as a second concentration;
if the second concentration is smaller than the corresponding second concentration threshold, rejecting the corresponding first record item;
when the recorder type of the recorder recording the first recording item is a machine, acquiring a second recording time range and a second recording scene of the recorder recording the first recording item;
querying a preset credibility dynamic evaluation library, and determining a plurality of dynamic credibility of the second recording scene within the second recording time range;
if the dynamic credibility is less than or equal to a preset credibility threshold, rejecting the corresponding first record item;
when the first record items needing to be removed have all been removed from the correction record, the first record items remaining in the correction record serve as second record items;
integrating the second record items to obtain a record to be evaluated;
acquiring a preset record value degree evaluation model, and evaluating the value degree of the record to be evaluated based on the record value degree evaluation model to obtain a second value degree;
if the second value degree is greater than or equal to a preset second value degree threshold, rejecting the corresponding correction record, and taking the corresponding record to be evaluated as a first training sample;
otherwise, only removing the corresponding correction record;
when all correction records needing to be removed are removed, taking the removed residual correction records as a second training sample;
and acquiring a preset neural network training model, inputting the first training sample and the second training sample into the neural network training model for model training, and acquiring an image segmentation result correction model.
The working principle and the beneficial effects of the technical scheme are as follows:
when image segmentation results are output (sent to a demand side), manual inspection is generally carried out and corrections are made where necessary; when the volume of segmentation tasks is large, the labor cost of manual inspection and correction is also large, and for some extremely complex images (such as a picture of people with a complex background and a large number of persons) the number of places needing correction may be even greater. In order to reduce the labor cost of manual inspection and correction and further improve the quality of image segmentation, the embodiment of the invention trains an image segmentation result correction model, which can correct the image segmentation result before it is output;
when the image segmentation result correction model is trained, a plurality of correction records (historical records of manual corrections of image segmentation results) are acquired, and the first record items in the correction records are extracted; a first record item is recorded by a recorder, and recorders are divided into humans and machines (for example, the machine may only be able to recognize that a manual correction operation took place; in that case, to make recording more convenient, a human recorder is arranged to record the intention of the manual correction operation [for instance, that a pixel block actually belongs to the image background, and the like]);
when the recorder type is human, the process by which the recorder recorded the first record item needs to be verified to ensure the accuracy and reliability of the corresponding first record item: the first recording time range and the first recording scene (such as a certain laboratory) of the recorder recording the first record item are acquired; a preset recording site library (a database containing a large number of recording sites as three-dimensional information; for example, a depth camera can be installed in the laboratory to collect indoor three-dimensional information) is queried to determine the corresponding first recording sites; the first recording sites are mapped into a preset site simulation space (a three-dimensional simulation space, which can be realized based on BIM technology) in the order of their generation time points and dynamically fused during mapping to obtain the recording site dynamic model; the first human body model is determined and its first behaviors (such as gestures, facial actions, orientation and the like) are recognized; the record type of the first record item (for example, an operation intention record) is acquired, the preset behavior value degree judgment library corresponding to that record type is acquired (a database containing the value degrees of a large number of behaviors; for example, when a recorder makes an operation intention record, the main valuable behaviors are a person dictating and the recorder listening and writing, so these behaviors have high value degrees while other behaviors have low ones), and the first value degree of each first behavior is determined; first behaviors whose first value degree is smaller than the preset first value degree threshold (a constant) are rejected; attention analysis is performed on the remaining second behaviors (for example, analyzing whom the recorder frequently watched and listened to), and the second human body models together with the corresponding first concentrations are determined, a higher first concentration representing a higher degree of attention; the first identities of the second human body models are acquired (the three-dimensional contours of all staff can be stored in advance and identities acquired by contour matching), and the second identities of the persons who produced the first record item (the persons who performed the correction operations) are acquired; if every first identity has a matching second identity, the recorder's records are qualified, and the number of second human body models is counted; when the number of models is 1, only one person is recorded, and if the first concentration is less than or equal to the preset first concentration threshold (a constant), the degree of attention is insufficient and the corresponding first record item is rejected; when the number of models is greater than 1, there are several recorded persons, the contribution degree of each recorded person to the first record item is acquired (a first record item may be provided by several people, and the more a person provides, the greater the contribution degree), the second concentration threshold corresponding to the contribution degree is acquired (the greater the contribution degree, the more attention the recorder should have paid to that person), and if the second concentration is smaller than the second concentration threshold, the degree of attention is insufficient and the corresponding first record item is rejected;
when the recorder type is machine, the second recording time range and the second recording scene (such as a local software interface or a cloud software interface) are acquired; a plurality of corresponding dynamic credibilities are determined based on a preset credibility dynamic evaluation library (credibility evaluations of recording scenes are carried out at regular intervals and collected into the database), and if a dynamic credibility is less than or equal to the preset credibility threshold (a constant), the recording scene is not credible and the corresponding first record item is rejected;
the second record items that remain after rejection are integrated and input into a preset record value degree evaluation model (a model generated with a machine learning algorithm by learning a large number of records whose usable value for training the image segmentation result correction model was evaluated manually) to obtain the second value degree; if the second value degree is greater than or equal to the preset second value degree threshold, the corresponding correction record is rejected and the record to be evaluated is taken as a first training sample; the correction records that remain after all rejections are taken as second training samples; the first training samples and the second training samples are input together into the neural network training model for model training, so that the image segmentation result correction model is obtained;
according to the embodiment of the invention, the image segmentation result correction model is trained, and the image segmentation result is checked and corrected, so that the labor cost for manually checking and correcting is reduced, and meanwhile, the quality of the output image segmentation result is improved; in addition, when the image segmentation result is trained to correct the model, the recording process is verified respectively based on the difference of the types of the recording parties of the first recording item, so that the accuracy of a training sample is ensured, and the training quality of the image segmentation result correction model is improved.
In addition, the second concentration threshold corresponding to the contribution degree is obtained through the following formula:
γ = μ·α
where γ is the second concentration threshold corresponding to the contribution degree, α is the contribution degree, and μ is an intermediate variable defined piecewise from α: it takes one of the preset expansion coefficients ρ_min, ρ and ρ_max (with ρ_min < ρ < ρ_max) according to where α lies relative to a preset minimum contribution threshold α_min,0 and a preset maximum contribution threshold α_max,0 (α_min,0 < α_max,0).
In this formula, the second concentration threshold γ is positively correlated with the contribution degree α, and the setting is reasonable because it distinguishes three cases based on the size of α.
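An illustrative reading of this formula is sketched below. Because the piecewise definition of μ appears in the original only as an equation image, the mapping used here (choosing ρ_min, ρ or ρ_max according to where α falls relative to α_min,0 and α_max,0) is an assumption that merely respects the stated positive correlation, and all numeric defaults are placeholders.

```python
# Assumed reading of gamma = mu * alpha; the piecewise choice of mu and all
# default values are illustrative assumptions, not the published expression.
def second_concentration_threshold(alpha, rho_min=0.6, rho=0.8, rho_max=1.0,
                                   alpha_min0=0.2, alpha_max0=0.8):
    if alpha <= alpha_min0:
        mu = rho_min
    elif alpha < alpha_max0:
        mu = rho
    else:
        mu = rho_max
    return mu * alpha        # gamma = mu * alpha
```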
The embodiment of the invention provides an image segmentation method based on consistent manifold approximation and projection clustering integration, which is used for acquiring a plurality of correction records for manually correcting a plurality of second image segmentation results and comprises the following steps:
acquiring a preset acquisition path set, wherein the acquisition path set comprises: a plurality of first acquisition paths;
acquiring a plurality of first sources corresponding to a first acquisition path;
acquiring a credit record corresponding to a first source;
analyzing the content of the credit record to obtain a credit value;
if the credit value is less than or equal to a preset credit value threshold value, rejecting the corresponding first source;
when the first sources needing to be removed have all been removed, the first sources remaining after the removal are used as second sources;
acquiring a first guarantee circle corresponding to a second source;
randomly selecting n second sources and combining the second sources into a judgment target set;
judging whether second sources in the target set are all in any first guarantee ring or not, and if so, taking the corresponding first guarantee ring as a second guarantee ring;
acquiring a preset first risk identification model, inputting a second source and a second guarantee ring in a judgment target set into the first risk identification model, and acquiring a first risk degree corresponding to the second source;
after the second source is randomly selected, summarizing first risk degrees corresponding to the second source to obtain a first risk degree sum;
if the first risk degree sum is larger than or equal to a preset first risk degree sum threshold value, rejecting a corresponding second source;
when the second sources needing to be removed have all been removed, the second sources remaining after the removal are used as third sources;
obtaining a providing strategy corresponding to a third source;
carrying out strategy splitting on a provided strategy to obtain a plurality of first strategy items;
randomly selecting a first strategy item, and extracting content features to obtain a plurality of first content features;
acquiring a preset risk content feature library, matching the first content features with the first risk content features in the risk content feature library, taking the corresponding first strategy item as a second strategy item when the matching is in accordance, taking the matched first content features as second content features, and simultaneously taking the matched first risk content features as second risk content features;
acquiring a plurality of first associated risk content characteristics corresponding to the second risk content characteristics;
matching third content features except the second content features in the first content features with the first associated risk content features, taking the corresponding first strategy item as a third strategy item when matching is met every time, and taking the matched first associated risk content features as second associated risk content features;
acquiring a preset providing simulation environment corresponding to the second risk content characteristic and the second associated risk content characteristic;
acquiring a first execution scene corresponding to a second strategy item, and acquiring a second execution scene corresponding to a third strategy item;
mapping the first execution scenario and the second execution scenario respectively within a rendering simulation environment;
performing simulated offering within the offered simulated environment based on the second policy item and the third policy item;
in the process of providing simulation, acquiring a preset second risk identification model, and attempting to identify a providing risk generated in a providing simulation environment based on the second risk identification model;
if the identification is successful, acquiring a second risk degree corresponding to the risk type of the identified providing risk, and associating the second risk degree with the corresponding third source;
after the random selection of the first strategy items is completed, summarizing the second risk degrees associated with the third source to obtain a second risk degree sum;
if the second risk degree sum is greater than or equal to a preset second risk degree sum threshold value, rejecting the corresponding third source;
when all third sources needing to be removed in the third sources are removed, taking the remaining third sources as fourth sources, and meanwhile, counting the number of the fourth sources;
when the number of the sources is 0, rejecting the corresponding first acquisition path;
otherwise, re-associating the fourth sources with the corresponding first acquisition path to obtain a second acquisition path;
and acquiring a plurality of correction records for manually correcting the segmentation results of the second images through a second acquisition path.
The working principle and the beneficial effects of the technical scheme are as follows:
Under the trend of big data sharing, the training samples required for training the image segmentation result correction model do not all have to be acquired locally; manual correction records can also be acquired from other manufacturers, but the reliability of such acquisition sources is not guaranteed. A plurality of first acquisition paths are therefore set, each corresponding to a plurality of first sources (for example, a manufacturer that provides its manual correction records to the first acquisition path for acquisition). The credit record of a first source (for example, authenticity evaluations of the data it has provided historically) is acquired and its content is analyzed to obtain a credit value; if the credit value is less than or equal to the preset credit value threshold, the credibility of the first source is poor and it is rejected. For the second sources remaining after rejection, their guarantee circles are acquired (for example, manufacturer A guarantees manufacturer B, manufacturer B guarantees manufacturer C and manufacturer C guarantees manufacturer A, so that A, B and C form a guarantee circle), and n (a randomly set positive integer greater than or equal to 2) second sources are randomly selected and combined into a judgment target set. If the second sources in a judgment target set all lie within one guarantee circle, a guarantee risk exists; these second sources and the corresponding second guarantee circle are input into the preset first risk identification model (a model generated with a machine learning algorithm by learning a large number of records in which the risk degree of second sources inside a guarantee circle was manually identified and determined from their positional relation) to obtain the first risk degrees of the second sources. After the random selection is completed, the first risk degrees corresponding to each second source are summarized (summed) to obtain a first risk degree sum; if the first risk degree sum is greater than or equal to the preset first risk degree sum threshold (a constant), the risk is high and the corresponding second source is rejected. For the third sources remaining after rejection, the providing strategies (the strategies by which the manual correction records are collected and supplied) are acquired and split into a plurality of first strategy items, from which first content features are extracted. The first content features are matched against the first risk content features in the preset risk content feature library (which contains a large number of features of providing strategies suspected of risk, for example crawling from web pages); when a match occurs, the first associated risk content features corresponding to the matched second risk content feature are acquired (a risk is confirmed only when both kinds of features are present together) and matched against the remaining third content features. If these also match, the corresponding preset providing simulation environment (a simulation space) is acquired, the first execution scene and the second execution scene (for example, a certain web page) are acquired and mapped into the providing simulation environment (for example, the web page is configured in the simulation space), and simulated provision (for example, simulating the crawling process) is performed based on the second strategy item and the third strategy item. During the simulation, providing risks are continuously identified based on the preset second risk identification model (a model generated with a machine learning algorithm by learning a large number of records of risks captured manually during simulated provision); if identification succeeds, a second risk degree is obtained and associated with the corresponding third source. After the random selection of the first strategy items is completed, the second risk degrees associated with each third source are summarized (summed) to obtain a second risk degree sum; if the second risk degree sum is greater than or equal to the preset second risk degree sum threshold (a constant), the risk is high and the corresponding third source is rejected. The number of fourth sources remaining after rejection is counted; if the number is 0, no usable source remains and the corresponding first acquisition path is rejected; otherwise the fourth sources are re-associated with the corresponding first acquisition path to obtain a second acquisition path, through which the correction records are acquired.
When the correction records are acquired, the first acquisition paths are set and their sources are verified, which ensures the reliability of the acquired correction records and gives the method higher applicability under the trend of big data sharing; in addition, when the sources of the first acquisition paths are verified, the verification is performed separately on the basis of the credit record, the guarantee situation and the providing strategy, so the arrangement is reasonable and the verification is thorough.
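By way of illustration, the staged rejection described above can be condensed into the following minimal Python sketch. The Source data class, its ring_ids field and the ring_risk_model callable standing in for the first risk identification model are assumptions introduced here and are not defined in the present disclosure; for simplicity the sketch also enumerates every n-member subset instead of sampling subsets randomly.

from dataclasses import dataclass, field
from itertools import combinations
from typing import Callable, List, Set

@dataclass
class Source:
    name: str
    credit_value: float                                  # parsed from the source's credit record
    ring_ids: Set[str]                                   # identifiers of the guarantee circles it belongs to
    policy_risks: List[float] = field(default_factory=list)  # second risk degrees from simulated provision

def vet_sources(sources: List[Source],
                credit_threshold: float,
                ring_risk_model: Callable[[List[Source]], List[float]],
                first_risk_sum_threshold: float,
                second_risk_sum_threshold: float,
                n: int = 2) -> List[Source]:
    # Stage 1: reject first sources whose credit value is at or below the threshold.
    second_sources = [s for s in sources if s.credit_value > credit_threshold]
    # Stage 2: for every n-member set that lies inside a common guarantee circle,
    # accumulate the first risk degrees returned by the first risk identification model.
    totals = {s.name: 0.0 for s in second_sources}
    for subset in combinations(second_sources, n):
        if set.intersection(*(s.ring_ids for s in subset)):   # all members share one guarantee circle
            for s, risk in zip(subset, ring_risk_model(list(subset))):
                totals[s.name] += risk
    third_sources = [s for s in second_sources
                     if totals[s.name] < first_risk_sum_threshold]
    # Stage 3: reject third sources whose summed simulated-provision risk is too high;
    # the survivors are the fourth sources.
    return [s for s in third_sources
            if sum(s.policy_risks) < second_risk_sum_threshold]

In practice the per-strategy-item second risk degrees would be produced by the simulated provision step before this final filter is applied.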
The embodiment of the invention provides an image segmentation system based on the integration of consistent manifold approximation and projection clustering, as shown in fig. 5, comprising:
the acquisition module 1 is used for acquiring original image data;
the preprocessing module 2 is used for preprocessing the original image data;
the dimensionality reduction module 3 is used for reducing dimensionality of the preprocessing result based on the consistent manifold approximation and projection;
and the clustering integration module 4 is used for clustering and integrating the dimensionality reduction result to obtain a first image segmentation result and outputting the first image segmentation result.
The working principle and the advantageous effects of the above technical solution have been explained in the method embodiments and are not described again.
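For the four modules themselves, the following Python sketch shows how they might be chained; it is only an illustration under assumed tooling (the open-source umap-learn, scikit-learn and scikit-image packages), uses intensity plus pixel coordinates as the per-pixel feature, and replaces the claimed clustering integration strategy with a simple one-hot consensus step, so it should not be read as the patented implementation.

import numpy as np
import umap                                    # umap-learn package
from sklearn.cluster import KMeans
from skimage import filters, restoration, exposure

def preprocess(image):
    # Steps S201-S204: filtering, denoising, gray-level correction, digital form.
    gray = image if image.ndim == 2 else image.mean(axis=2)
    smoothed = filters.gaussian(gray, sigma=1.0)
    denoised = restoration.denoise_tv_chambolle(smoothed)
    corrected = exposure.equalize_hist(denoised)
    return (corrected * 255).astype(np.uint8)

def segment(image, n_segments=4, n_members=10):
    pre = preprocess(image)                    # step S2
    h, w = pre.shape
    yy, xx = np.mgrid[0:h, 0:w]
    features = np.stack([pre.ravel() / 255.0,
                         yy.ravel() / h,
                         xx.ravel() / w], axis=1)
    # Step S3: reduce the per-pixel features with manifold approximation and projection.
    embedding = umap.UMAP(n_components=2, n_neighbors=15,
                          random_state=0).fit_transform(features)
    # Step S401: run K-means several times with different seeds to obtain cluster members.
    members = [KMeans(n_clusters=n_segments, n_init=1, random_state=seed)
               .fit_predict(embedding) for seed in range(n_members)]
    # Steps S402-S403: integrate the members by one-hot encoding each labelling and
    # clustering the concatenated encoding once more (a simple consensus scheme).
    encoded = np.hstack([np.eye(n_segments)[m] for m in members])
    consensus = KMeans(n_clusters=n_segments, n_init=10,
                       random_state=0).fit_predict(encoded)
    return consensus.reshape(h, w)             # step S4: first image segmentation result

For full-resolution images the per-pixel feature matrix would normally be subsampled or aggregated over superpixels before the manifold step, since fitting the embedding on every pixel is expensive.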
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.

Claims (8)

1. An image segmentation method based on consistent manifold approximation and projection clustering integration is characterized by comprising the following steps:
step S1: acquiring original image data;
step S2: preprocessing the original image data;
step S3: reducing the dimension of the preprocessing result based on consistent manifold approximation and projection;
step S4: performing clustering integration on the dimensionality reduction result to obtain a first image segmentation result, and outputting the first image segmentation result;
in step S4, before outputting the first image segmentation result, the method further includes:
training an image segmentation result correction model, and correcting a first image segmentation result based on the image segmentation result correction model;
the training image segmentation result correction model comprises the following steps:
acquiring a plurality of correction records for manually correcting a plurality of second image segmentation results;
extracting a first entry from the revised record;
obtaining a logger type of a logger recording the first entry, the logger type including: human and machine;
when the recorder type of the recorder recording the first record item is human, acquiring a first recording time range and a first recording scene in which the recorder records the first record item;
querying a preset recording scene library, determining a plurality of first recording scenes which occur within the first recording time range and correspond to the first recording scene, and acquiring the generation time points of the first recording scenes;
acquiring a preset scene simulation space, and sequentially mapping the first recording scenes into the scene simulation space in time order based on the generation time points to obtain a dynamic recording scene model;
determining a first human body model corresponding to the recording party in the dynamic recording scene model;
identifying, through a behavior recognition technique, a plurality of first behaviors generated by the first human body model within the dynamic recording scene model;
acquiring the record type of the first record item, and acquiring a preset behavior value degree judgment library corresponding to the record type;
determining a first value degree of the first behavior based on the behavior value degree judgment library;
if the first value degree is larger than or equal to a preset first value degree threshold value, taking the corresponding first behavior as a second behavior;
analyzing the second behavior through an attention analysis technique to determine at least one second human body model that the first human body model focuses on within the dynamic recording scene model, and a first concentration on each second human body model;
acquiring a first identity of each second human body model, and acquiring the in-record persons corresponding to the first record item;
acquiring a second identity of each in-record person;
pairing the first identity and the second identity one by one, and if the first identity and the second identity cannot be successfully paired one by one, rejecting the corresponding first record item;
otherwise, counting the number of the second human body model;
when the number of the models is 1, if the first concentration is less than or equal to a preset first concentration threshold value, rejecting the corresponding first record item;
when the number of the models is larger than 1, acquiring the contribution degree of each in-record person corresponding to the first record item;
acquiring a second concentration threshold corresponding to the contribution degree;
determining the first concentration corresponding to each in-record person based on the first identity and the second identity, and taking the first concentration as a second concentration;
if the second concentration is smaller than the corresponding second concentration threshold, rejecting the corresponding first record item;
when the recorder type of the recorder recording the first record item is machine, acquiring a second recording time range and a second recording scene in which the recorder records the first record item;
querying a preset credibility dynamic evaluation library, and determining a plurality of dynamic credibility of the second recording scene within the second recording time range;
if the dynamic credibility is less than or equal to a preset credibility threshold value, rejecting the corresponding first record item;
when all first record items needing to be removed from the correction record have been removed, taking the remaining first record items in the correction record as second record items;
integrating the second record items to obtain records to be evaluated;
acquiring a preset record value degree evaluation model, and evaluating the value degree of the record to be evaluated based on the record value degree evaluation model to obtain a second value degree;
if the second value degree is greater than or equal to a preset second value degree threshold value, rejecting the corresponding correction record, and taking the corresponding record to be evaluated as a first training sample;
otherwise, only removing the corresponding correction record;
when all correction records needing to be removed have been removed, taking the remaining correction records as second training samples;
and acquiring a preset neural network training model, inputting the first training sample and the second training sample into the neural network training model for model training, and acquiring an image segmentation result correction model.
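A minimal sketch of the record-item screening in claim 1 is given below for illustration only; the RecordItem fields and the callables standing in for the behavior value degree judgment library, the credibility dynamic evaluation library and the record value degree evaluation model are hypothetical stand-ins rather than elements of the claim.

from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class RecordItem:
    recorder_type: str                  # "human" or "machine"
    concentrations: Dict[str, float]    # first concentration per attended second identity (human recorder)
    in_record_identities: List[str]     # second identities of the in-record persons
    contributions: Dict[str, float]     # contribution degree per in-record person
    dynamic_credibilities: List[float]  # dynamic credibility values (machine recorder)

def keep_item(item: RecordItem,
              first_concentration_threshold: float,
              second_concentration_threshold: Callable[[float], float],
              credibility_threshold: float) -> bool:
    if item.recorder_type == "human":
        attended = set(item.concentrations)
        # the attended identities must pair one to one with the in-record persons
        if attended != set(item.in_record_identities):
            return False
        if len(attended) == 1:
            return next(iter(item.concentrations.values())) > first_concentration_threshold
        # several attended persons: each concentration must reach the threshold
        # derived from that person's contribution degree
        return all(item.concentrations[p] >= second_concentration_threshold(item.contributions[p])
                   for p in attended)
    # machine recorder: every dynamic credibility must exceed the threshold
    return all(c > credibility_threshold for c in item.dynamic_credibilities)

def build_training_samples(correction_records: List[List[RecordItem]],
                           value_model: Callable[[List[RecordItem]], float],
                           second_value_threshold: float,
                           **thresholds):
    first_samples, second_samples = [], []
    for record in correction_records:
        second_items = [item for item in record if keep_item(item, **thresholds)]
        if len(second_items) == len(record):
            second_samples.append(record)           # nothing removed: kept as a second training sample
        elif value_model(second_items) >= second_value_threshold:
            first_samples.append(second_items)      # cleaned record kept as a first training sample
        # otherwise the correction record is discarded entirely
    return first_samples, second_samples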
2. The image segmentation method based on consistent manifold approximation and projection clustering integration as claimed in claim 1, wherein the step S2: preprocessing the original image data, comprises:
step S201: filtering the original image data;
step S202: denoising the filtering result;
step S203: carrying out gray level correction on the denoising result;
step S204: and converting the gray correction result into a digital form to obtain a preprocessing result.
3. The image segmentation method based on consistent manifold approximation and projection clustering integration as claimed in claim 1, wherein the step S4 of clustering and integrating the dimensionality reduction result to obtain the first image segmentation result comprises:
step S401: clustering the dimensionality reduction result by using a K-means algorithm to obtain a plurality of cluster members;
step S402: selecting a preset number of cluster members from the plurality of cluster members;
step S403: and carrying out clustering integration on the selected clustering members to obtain a first image segmentation result.
4. The image segmentation method based on consistent manifold approximation and projection clustering integration as claimed in claim 1, wherein acquiring a plurality of correction records for manually correcting a plurality of second image segmentation results comprises:
acquiring a preset acquisition path set, wherein the acquisition path set comprises: a plurality of first acquisition paths;
acquiring a plurality of first sources corresponding to the first acquisition path;
acquiring a credit record corresponding to the first source;
analyzing the content of the credit record to obtain a credit value;
if the credit value is less than or equal to a preset credit value threshold value, rejecting the corresponding first source;
when all the first sources needing to be removed in the first sources are removed, taking the remaining first sources after removal as second sources;
acquiring a first guarantee circle corresponding to the second source;
randomly selecting n second sources and combining the second sources into a judgment target set;
judging whether the second sources in the judgment target set are all in any first guarantee ring, if so, taking the corresponding first guarantee ring as a second guarantee ring;
acquiring a preset first risk identification model, inputting the second source and the second guarantee ring in the judgment target set into the first risk identification model, and acquiring a first risk degree corresponding to the second source;
after the random selection of the second sources is completed, summarizing the first risk degrees corresponding to the second source to obtain a first risk degree sum;
if the first risk degree sum is larger than or equal to a preset first risk degree sum threshold value, rejecting the corresponding second source;
when the second sources needing to be removed in the second sources are all removed, taking the remaining second sources after removal as third sources;
obtaining a providing strategy corresponding to the third source;
carrying out strategy splitting on the providing strategy to obtain a plurality of first strategy items;
randomly selecting the first strategy item, and extracting content features to obtain a plurality of first content features;
acquiring a preset risk content feature library, matching the first content features with first risk content features in the risk content feature library, taking the corresponding first strategy item as a second strategy item when matching is consistent, taking the matched first content features as second content features, and taking the matched first risk content features as second risk content features;
acquiring a plurality of first associated risk content characteristics corresponding to the second risk content characteristics;
matching third content features except the second content features in the first content features with the first associated risk content features, taking the corresponding first strategy item as a third strategy item when matching is met every time, and taking the matched first associated risk content features as second associated risk content features;
acquiring a preset providing simulation environment corresponding to the second risk content characteristic and the second associated risk content characteristic;
acquiring a first execution scene corresponding to the second strategy item, and acquiring a second execution scene corresponding to the third strategy item;
mapping the first execution scenario and the second execution scenario, respectively, within the providing simulation environment;
performing simulated provision within the providing simulation environment based on the second strategy item and the third strategy item;
in the process of providing simulation, acquiring a preset second risk identification model, and attempting to identify a providing risk generated in the providing simulation environment based on the second risk identification model;
if the identification is successful, acquiring a second risk degree corresponding to the risk type of the identified providing risk, and associating the second risk degree with the third source;
after the random selection of the first strategy items is completed, summarizing the second risk degrees associated with the third source to obtain a second risk degree sum;
if the second risk degree sum is greater than or equal to a preset second risk degree sum threshold value, rejecting the corresponding third source;
when all the third sources needing to be removed in the third sources are removed, taking the remaining third sources after being removed as fourth sources, and meanwhile, counting the number of the fourth sources;
when the number of the sources is 0, rejecting the corresponding first acquisition path;
otherwise, re-associating the fourth sources with the corresponding first acquisition path to obtain a second acquisition path;
and acquiring a plurality of correction records for manually correcting the segmentation results of the plurality of second images through the second acquisition path.
5. An image segmentation system based on consistent manifold approximation and projection clustering integration, comprising:
the acquisition module is used for acquiring original image data;
the preprocessing module is used for preprocessing the original image data;
the dimensionality reduction module is used for reducing dimensionality of the preprocessing result based on the consistent manifold approximation and projection;
the clustering integration module is used for clustering and integrating the dimensionality reduction result to obtain a first image segmentation result and outputting the first image segmentation result;
the clustering integration module performs the following operations:
training an image segmentation result correction model, and correcting a first image segmentation result based on the image segmentation result correction model;
the training image segmentation result correction model comprises the following steps:
acquiring a plurality of correction records for manually correcting a plurality of second image segmentation results;
extracting a first entry from the revised record;
obtaining a logger type of a logger recording the first entry, the logger type including: human and machine;
when the recorder type of the recorder recording the first record item is human, acquiring a first recording time range and a first recording scene in which the recorder records the first record item;
querying a preset recording scene library, determining a plurality of first recording scenes which occur within the first recording time range and correspond to the first recording scene, and simultaneously acquiring the generation time points of the first recording scenes;
acquiring a preset scene simulation space, and sequentially mapping the first recording scenes into the scene simulation space in time order based on the generation time points to obtain a dynamic recording scene model;
determining a first human body model corresponding to the recording party in the dynamic recording scene model;
identifying, through a behavior recognition technique, a plurality of first behaviors generated by the first human body model within the dynamic recording scene model;
acquiring the record type of the first record item, and acquiring a preset behavior value degree judgment library corresponding to the record type;
determining a first value degree of the first behavior based on the behavior value degree judgment library;
if the first value degree is larger than or equal to a preset first value degree threshold value, taking the corresponding first behavior as a second behavior;
analyzing the second behavior through an attention analysis technique to determine at least one second human body model that the first human body model focuses on within the dynamic recording scene model, and a first concentration on each second human body model;
acquiring a first identity of each second human body model, and acquiring the in-record persons corresponding to the first record item;
acquiring a second identity of each in-record person;
pairing the first identity and the second identity one by one, and if the first identity and the second identity cannot be successfully paired one by one, rejecting the corresponding first record item;
otherwise, counting the number of the second human body model;
when the number of the models is 1, if the first concentration is less than or equal to a preset first concentration threshold value, rejecting the corresponding first record item;
when the number of the models is larger than 1, acquiring the contribution degree of each in-record person corresponding to the first record item;
acquiring a second concentration threshold corresponding to the contribution degree;
determining the first concentration corresponding to each in-record person based on the first identity and the second identity, and taking the first concentration as a second concentration;
if the second concentration is smaller than the corresponding second concentration threshold, rejecting the corresponding first record item;
when the recorder type of the recorder recording the first record item is machine, acquiring a second recording time range and a second recording scene in which the recorder records the first record item;
querying a preset credibility dynamic evaluation library, and determining a plurality of dynamic credibility of the second recording scene within the second recording time range;
if the dynamic credibility is less than or equal to a preset credibility threshold value, rejecting the corresponding first record item;
when all first record items needing to be removed from the correction record have been removed, taking the remaining first record items in the correction record as second record items;
integrating the second record items to obtain records to be evaluated;
acquiring a preset record value degree evaluation model, and evaluating the value degree of the record to be evaluated based on the record value degree evaluation model to obtain a second value degree;
if the second value degree is greater than or equal to a preset second value degree threshold value, rejecting the corresponding correction record, and taking the corresponding record to be evaluated as a first training sample;
otherwise, only removing the corresponding correction record;
when all correction records needing to be removed have been removed, taking the remaining correction records as second training samples;
and acquiring a preset neural network training model, inputting the first training sample and the second training sample into the neural network training model for model training, and acquiring an image segmentation result correction model.
6. The image segmentation system based on consistent manifold approximation and projection clustering integration as claimed in claim 5, wherein the preprocessing module performs the following operations:
filtering the original image data;
denoising the filtering result;
carrying out gray level correction on the denoising result;
and converting the gray correction result into a digital form to obtain a preprocessing result.
7. The image segmentation system based on consistent manifold approximation and projection clustering integration as claimed in claim 5, wherein the clustering integration module performs the following operations:
clustering the dimensionality reduction result by using a K-means algorithm to obtain a plurality of cluster members;
selecting a preset number of cluster members from the plurality of cluster members;
and carrying out clustering integration on the selected clustering members to obtain a first image segmentation result.
8. The image segmentation system based on consistent manifold approximation and projection clustering integration as claimed in claim 5, wherein the clustering integration module performs the following operations:
acquiring a preset acquisition path set, wherein the acquisition path set comprises: a plurality of first acquisition paths;
acquiring a plurality of first sources corresponding to the first acquisition path;
acquiring a credit record corresponding to the first source;
analyzing the content of the credit record to obtain a credit value;
if the credit value is less than or equal to a preset credit value threshold value, rejecting the corresponding first source;
when all the first sources needing to be removed in the first sources are removed, taking the remaining first sources after removal as second sources;
acquiring a first guarantee circle corresponding to the second source;
randomly selecting n second sources and combining the second sources into a judgment target set;
judging whether the second sources in the judgment target set are all in any first guarantee ring, if so, taking the corresponding first guarantee ring as a second guarantee ring;
acquiring a preset first risk identification model, inputting the second source and the second guarantee ring in the judgment target set into the first risk identification model, and acquiring a first risk degree corresponding to the second source;
after the random selection of the second sources is completed, summarizing the first risk degrees corresponding to the second source to obtain a first risk degree sum;
if the first risk degree sum is larger than or equal to a preset first risk degree sum threshold value, rejecting the corresponding second source;
when all the second sources needing to be removed in the second sources are removed, taking the remaining second sources after removal as third sources;
obtaining a providing strategy corresponding to the third source;
carrying out strategy splitting on the providing strategy to obtain a plurality of first strategy items;
randomly selecting the first strategy item, and extracting content features to obtain a plurality of first content features;
acquiring a preset risk content feature library, matching the first content features with first risk content features in the risk content feature library, taking the corresponding first strategy item as a second strategy item when matching is consistent, taking the matched first content features as second content features, and taking the matched first risk content features as second risk content features;
acquiring a plurality of first associated risk content characteristics corresponding to the second risk content characteristics;
matching third content features except the second content features in the first content features with the first associated risk content features, taking the corresponding first strategy item as a third strategy item when matching is met every time, and taking the matched first associated risk content features as second associated risk content features;
acquiring a preset providing simulation environment corresponding to the second risk content characteristic and the second associated risk content characteristic;
acquiring a first execution scene corresponding to the second strategy item, and acquiring a second execution scene corresponding to the third strategy item;
mapping the first execution scenario and the second execution scenario, respectively, within the providing simulation environment;
performing simulated provision within the providing simulation environment based on the second strategy item and the third strategy item;
in the process of providing simulation, acquiring a preset second risk identification model, and attempting to identify a providing risk generated in the providing simulation environment based on the second risk identification model;
if the identification is successful, acquiring a second risk degree corresponding to the risk type of the identified providing risk, and associating the second risk degree with the third source;
after the random selection of the first strategy items is completed, summarizing the second risk degrees associated with the third source to obtain a second risk degree sum;
if the second risk degree sum is greater than or equal to a preset second risk degree sum threshold value, rejecting the corresponding third source;
when all the third sources needing to be removed in the third sources are removed, taking the remaining third sources after being removed as fourth sources, and meanwhile, counting the number of the fourth sources;
when the number of the sources is 0, rejecting the corresponding first acquisition path;
otherwise, re-associating the fourth sources with the corresponding first acquisition path to obtain a second acquisition path;
and acquiring a plurality of correction records for manually correcting the segmentation results of the plurality of second images through the second acquisition path.
CN202111540358.XA 2021-12-16 2021-12-16 Image segmentation method and system based on consistent manifold approximation and projection clustering integration Active CN114266298B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111540358.XA CN114266298B (en) 2021-12-16 2021-12-16 Image segmentation method and system based on consistent manifold approximation and projection clustering integration

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111540358.XA CN114266298B (en) 2021-12-16 2021-12-16 Image segmentation method and system based on consistent manifold approximation and projection clustering integration

Publications (2)

Publication Number Publication Date
CN114266298A CN114266298A (en) 2022-04-01
CN114266298B true CN114266298B (en) 2022-07-08

Family

ID=80827455

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111540358.XA Active CN114266298B (en) 2021-12-16 2021-12-16 Image segmentation method and system based on consistent manifold approximation and projection clustering integration

Country Status (1)

Country Link
CN (1) CN114266298B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019018693A2 (en) * 2017-07-19 2019-01-24 Altius Institute For Biomedical Sciences Methods of analyzing microscopy images using machine learning
CN111161301A (en) * 2019-12-31 2020-05-15 上海商汤智能科技有限公司 Image segmentation method and device, electronic equipment and storage medium
CN111553417A (en) * 2020-04-28 2020-08-18 厦门大学 Image data dimension reduction method and system based on discriminant regularization local preserving projection
CN113643275A (en) * 2021-08-29 2021-11-12 浙江工业大学 Ultrasonic defect detection method based on unsupervised manifold segmentation

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20210004962A1 (en) * 2019-07-02 2021-01-07 Qualcomm Incorporated Generating effects on images using disparity guided salient object detection

Also Published As

Publication number Publication date
CN114266298A (en) 2022-04-01

Similar Documents

Publication Publication Date Title
Yang et al. MTD-Net: Learning to detect deepfakes images by multi-scale texture difference
US10943346B2 (en) Multi-sample whole slide image processing in digital pathology via multi-resolution registration and machine learning
CN111310731B (en) Video recommendation method, device, equipment and storage medium based on artificial intelligence
CN112801057B (en) Image processing method, image processing device, computer equipment and storage medium
CN112766244A (en) Target object detection method and device, computer equipment and storage medium
CN112818862A (en) Face tampering detection method and system based on multi-source clues and mixed attention
CN113762138B (en) Identification method, device, computer equipment and storage medium for fake face pictures
CN109740572A (en) A kind of human face in-vivo detection method based on partial color textural characteristics
CN114758362B (en) Clothing changing pedestrian re-identification method based on semantic perception attention and visual shielding
CN112330624A (en) Medical image processing method and device
CN111127400A (en) Method and device for detecting breast lesions
CN113902613A (en) Image style migration system and method based on three-branch clustering semantic segmentation
CN110910497B (en) Method and system for realizing augmented reality map
CN114266298B (en) Image segmentation method and system based on consistent manifold approximation and projection clustering integration
CN113436735A (en) Body weight index prediction method, device and storage medium based on face structure measurement
CN110889418A (en) Gas contour identification method
CN116958724A (en) Training method and related device for product classification model
CN115719414A (en) Target detection and accurate positioning method based on arbitrary quadrilateral regression
CN111459050B (en) Intelligent simulation type nursing teaching system and teaching method based on dual-network interconnection
CN113822846A (en) Method, apparatus, device and medium for determining region of interest in medical image
CN111598144A (en) Training method and device of image recognition model
WO2020237185A1 (en) Systems and methods to train a cell object detector
CN111680722B (en) Content identification method, device, equipment and readable storage medium
CN116704208B (en) Local interpretable method based on characteristic relation
Luo et al. Research on Face Local Attribute Detection Method Based on Improved SSD Network Structure

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information

Inventor after: Wang Jiangfeng

Inventor after: Chen Sibo

Inventor after: Cai Na

Inventor after: Xu Sen

Inventor after: Xu Xiufang

Inventor after: Hua Xiaopeng

Inventor after: Gao Jun

Inventor after: An Jing

Inventor after: Ji Hongwei

Inventor after: Jiang Chenyu

Inventor after: Lu Xiangwen

Inventor before: Xu Sen

Inventor before: Chen Sibo

Inventor before: Cai Na

Inventor before: Wang Jiangfeng

Inventor before: Xu Xiufang

Inventor before: Hua Xiaopeng

Inventor before: Gao Jun

Inventor before: An Jing

Inventor before: Ji Hongwei

Inventor before: Jiang Chenyu

Inventor before: Lu Xiangwen

CB03 Change of inventor or designer information
GR01 Patent grant
GR01 Patent grant