CN115131296B - Distributed computing method and system for image recognition

Distributed computing method and system for image recognition

Info

Publication number
CN115131296B
CN115131296B
Authority
CN
China
Prior art keywords
image
region
gray
entropy
sampling
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210645370.5A
Other languages
Chinese (zh)
Other versions
CN115131296A (en)
Inventor
刘志钢
刘石岩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Dongchao Intelligent Technology Co ltd
Original Assignee
Guangzhou Dongchao Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Dongchao Intelligent Technology Co ltd filed Critical Guangzhou Dongchao Intelligent Technology Co ltd
Priority to CN202210645370.5A priority Critical patent/CN115131296B/en
Publication of CN115131296A publication Critical patent/CN115131296A/en
Application granted granted Critical
Publication of CN115131296B publication Critical patent/CN115131296B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0004Industrial image inspection
    • G06T5/70
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/187Segmentation; Edge detection involving region growing; involving region merging; involving connected component labelling
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/26Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/34Smoothing or thinning of the pattern; Morphological operations; Skeletonisation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/50Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis
    • G06V10/507Summing image-intensity values; Histogram projection analysis
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/52Scale-space analysis, e.g. wavelet analysis
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/761Proximity, similarity or dissimilarity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10024Color image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20048Transform domain processing
    • G06T2207/20064Wavelet transform [DWT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20224Image subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30108Industrial image inspection
    • G06T2207/30124Fabrics; Textile; Paper

Abstract

The invention discloses a distributed computing method and system for image recognition, and relates to the field of artificial intelligence. The method comprises the following steps: S1: obtaining an area image in a gray level image of an image to be identified; S2: acquiring an adjacent matrix of the gray level image; S3: acquiring a change consistency adjacency matrix: calculating the regional uniformity among regional images to obtain a sampling adjacent matrix, and performing dot multiplication calculation on the adjacent matrix of the gray image and the sampling adjacent matrix to obtain a variation consistency adjacent matrix; S4: performing distributed computation on the image data: setting a category threshold, classifying the regional image according to the category threshold and the data in the change consistency adjacent matrix, and performing distributed computation on the classified image data. According to the invention, the data with different relevance is subjected to block calculation, so that the calculation efficiency of image recognition is improved, and meanwhile, the relevance between different data is also used as a characteristic to participate in the image recognition process, so that the recognition accuracy is improved.

Description

Distributed computing method and system for image recognition
Technical Field
The application relates to the field of artificial intelligence, in particular to a distributed computing method and system for image recognition.
Background
With the development of computing technology, some applications require very large amounts of computing power; if centralized computing is used, they take a considerable amount of time to complete. Distributed computing breaks such an application down into many small parts that are distributed to multiple computers for processing, which saves overall computation time and greatly improves computing efficiency. In the process of recognising an image, processing the image data with a distributed computing method can therefore improve the computing efficiency to a large extent, improving both the efficiency and the accuracy of image recognition.
Existing image recognition based on distributed computation often allocates computing power according to how conspicuous different features are in the image. However, inconspicuous features are not necessarily the features required for recognising the image, so a large amount of computation may be spent on results that contribute little to the recognition outcome. This wastes resources, reduces computing efficiency, and to a certain extent also affects the accuracy of the recognition result.
Disclosure of Invention
Aiming at the technical problems, the invention provides a distributed computing method and a distributed computing system for image recognition.
In a first aspect, an embodiment of the present invention provides a distributed computing method for image recognition, including:
s1: obtaining an area image in a gray level image of an image to be identified:
graying the image to be identified to obtain a gray image;
obtaining all area images in the gray image by using gray values of all pixel points in the gray image and a seed growth method;
s2: acquiring an adjacency matrix of the gray level image:
calculating Euclidean distance between different area images according to gray entropy and texture entropy of each area image in the gray image;
calculating the region similarity between the region images according to the distance between the region images in the gray level image, and obtaining an adjacent matrix by using the region similarity between the image regions;
s3: acquiring a change consistency adjacency matrix:
carrying out Gaussian pyramid sampling on each area image in the gray level image to obtain a sampled image of each area image after each sampling;
respectively calculating the difference value of the gray entropy and the texture entropy of each region image after each sampling and the region image sampled before, and calculating a change rate binary set of each region image after each sampling according to the difference value of the gray entropy and the texture entropy of each region image after each sampling and the repetition rate of the region image after each sampling, wherein the change rate binary set comprises the change rate of the gray entropy and the change rate of the texture entropy;
counting the change rate binary groups of each image area obtained after each sampling and the previous sampling to form a change sequence of each area image, and calculating the similarity between every two area images according to the change sequence of each area image;
obtaining a sampling adjacent matrix according to the regional uniformity among regional images, and performing dot multiplication calculation on the adjacent matrix of the gray level images and the sampling adjacent matrix to obtain a variation consistency adjacent matrix;
s4: performing distributed computation on the image data:
and setting a category threshold, classifying the regional image according to the category threshold and the data in the change consistency adjacent matrix, and recognizing the image to be recognized by adopting distributed calculation according to the classified image data.
The process of obtaining the region image in the gray image by using the gray value of each pixel point in the gray image and the seed growth method is as follows:
s1-1: counting the gray value of the gray image to obtain a gray histogram, and performing Gaussian smoothing on the gray histogram to obtain a smoothed gray histogram;
s1-2: performing difference on the original gray level histogram of the image and the smoothed gray level histogram to obtain a difference value histogram, and obtaining pixel points corresponding to gray level values with the frequency of the pixel points larger than a quantity threshold value in the difference value histogram as seed points of the image;
s1-3: and dividing the image by using the seed point as a category center through a region growing method to obtain a divided region image.
The method for calculating the similarity of the regions by calculating the distance between the regions is as follows:
the Euclidean distance between every two areas is calculated, all the distances are normalized by using the distance maximum value of each area and other areas to obtain the area similarity, and the specific calculation formula is as follows:
$$S_{i,j} = 1 - \frac{d_{i,j}}{D_i}$$
wherein: $S_{i,j}$ indicates the region similarity between the $i$-th region and the $j$-th region, $D_i$ indicates the maximum value of the distance between the $i$-th region and the other regions, $d_{i,j}$ indicates the distance between the $i$-th region and the $j$-th region, $i$ and $j$ are the sequence numbers of the segmented region images, and $i \neq j$.
the Euclidean distance between every two areas is calculated by the following steps:
calculating the gray entropy and the texture entropy of each area image, expressing the gray entropy and the texture entropy by using a binary group as the coordinates of the central point of the area image, and calculating the Euclidean distance between the area images according to the coordinates of the central point of the area image;
the gray entropy is the entropy of the gray values of all pixel points in the region image, and the texture entropy is the entropy of the data in the gray co-occurrence matrix of the region image.
The method of calculating the rate of change doublet for each region is as follows:
and (3) respectively multiplying the difference value between the gray entropy and the texture entropy of each region image sampled and sampled at the previous time and the region repetition rate sampled and sampled at the previous time, and calculating the change rate of the gray entropy and the change rate of the texture entropy after the current sampling to obtain a binary vector as a change rate binary group, wherein the specific calculation formula is as follows:
$$v_H = r \cdot \Delta H, \qquad v_T = r \cdot \Delta T$$
wherein: $v_H$ is the rate of change of gray entropy, $v_T$ is the rate of change of texture entropy, $r$ is the region repetition rate between the current sampling and the previous sampling, $\Delta H$ is the difference between the gray entropy after the current sampling and after the previous sampling, and $\Delta T$ is the difference between the texture entropy after the current sampling and after the previous sampling; the change rate binary group is the vector $(v_H, v_T)$;
the region repetition rate is the ratio of the maximum overlapping area of the current sampling and the previous sampling of the region image to the region image area of the previous sampling.
The method for calculating the similarity between the two area images comprises the following steps: and matching the change rate sequences of the two region images by using a DTW algorithm, calculating the DTW distance between the two region images, and taking the DTW distance as the region uniformity between the two region images.
The method for classifying the regional images according to the data in the category threshold and the change consistency adjacency matrix is as follows:
setting a category threshold value, dividing the region images corresponding to the data which are larger than or equal to the category threshold value in the change consistency adjacent matrix into one category, and dividing the region images corresponding to the data which are smaller than the category threshold value into one category, so as to realize the classification of the region images.
In a second aspect, an embodiment of the present invention provides a distributed computing system for image recognition, comprising:
an image processing module: carrying out gray conversion on the image to be identified, and carrying out region segmentation on the gray histogram of the image to be identified to obtain a segmented region image;
and a matrix construction module: carrying out Gaussian pyramid sampling on each area image in the gray level image to obtain a sampled image of each area image after each sampling; obtaining a change sequence of each region image according to a change rate binary group of each sampling of each image region, and calculating the uniformity between every two region images according to the change sequence of the region images; obtaining a sampling adjacent matrix according to the regional uniformity among regional images, and obtaining a variation consistency adjacent matrix by utilizing the adjacent matrix of the gray level images and the sampling adjacent matrix;
and a characteristic association module: and setting a category threshold, classifying the regional image according to the category threshold and the data in the change consistency adjacent matrix, and classifying the regional image features with strong relevance into one type for recognition by adopting distributed computation according to the classified image data.
Compared with the prior art, the embodiment of the invention has the beneficial effects that: the patent provides a distributed computing method and system based on image recognition, which improves the computing efficiency of image recognition by carrying out block computation on data with different relevance, takes the relevance among different data as a characteristic participating in the image recognition process, and improves the recognition accuracy.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions of the prior art, the drawings which are used in the description of the embodiments or the prior art will be briefly described, it being obvious that the drawings in the description below are only some embodiments of the invention, and that other drawings can be obtained according to these drawings without inventive faculty for a person skilled in the art.
FIG. 1 is a flow chart of a method for a distributed computing method for image recognition according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of an adjacency matrix of a distributed computing method for image recognition according to an embodiment of the present invention;
FIG. 3 is an illustration of a variation consistency adjacency matrix of a distributed computing method for image recognition according to an embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present invention more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system configurations, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
The terms "first," "second," and the like, are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first", "a second" may include one or more such features, either explicitly or implicitly; in the description of the present embodiment, unless otherwise specified, the meaning of "plurality" is two or more.
Example 1
An embodiment of the present invention provides a distributed computing method for image recognition, as shown in fig. 1, including:
s101, acquiring area images of gray level images
A gray image of the RGB image to be processed is obtained, seed points of the gray image are determined according to the gray values of the gray image, and the gray image is divided into regions by the seed growing method to obtain the region images in the gray image; the image data are then classified according to the analysis of the region images, so that distributed computation is realised.
S102, calculating Euclidean distance between images of different areas
The gray entropy and the texture entropy of each region image in the gray image are taken as the coordinates of that region image's center point, and the Euclidean distance between region images is calculated from these center point coordinates.
S103, obtaining an adjacency matrix between the regional images
The region similarity between the region images is calculated from the euclidean distance between the region images calculated from the gray entropy and the texture entropy, which are calculated from the gray values of the images, so that the similarity between the region images can be reflected.
And constructing an adjacency matrix of the gray level images according to the obtained similarity between the area images.
S104, respectively sampling the regional images
And (3) sampling the region images in the gray level images, and counting the change results of the region images after each sampling to obtain the change information of each region image, so that the region images are more accurately classified according to the information of each region.
S105, calculating the similarity between the two area images
Through the change calculation for the different regions during the sampling process, the change sequence of binary groups of each region, namely the sequence composed of the change rates of the gray entropy and the texture entropy, is obtained, and the uniformity between every two regions is calculated.
S106, obtaining a change consistency adjacency matrix
And constructing a sampling adjacent matrix according to the uniformity between every two areas, and obtaining a variation consistency adjacent matrix according to the adjacent matrix of the gray image and the sampling adjacent matrix.
S107, classifying the regional images to realize distributed computation
And dividing the data of the change consistency adjacent matrix according to the set category threshold value, and dividing the categories of the regional images according to the division result to realize distributed calculation.
Example 2
The embodiment of the invention provides a distributed computing method for image recognition, as shown in fig. 1, and the specific content comprises:
s201, obtaining region image of gray level image
A gray image of the RGB image to be processed is obtained, seed points of the gray image are determined according to the gray values of the gray image, and the gray image is divided into regions by the seed growing method to obtain the region images in the gray image; the image data are then classified according to the analysis of the region images, so that distributed computation is realised.
1. Acquiring gray scale images
Gray processing is carried out on the RGB image to be processed to obtain a gray image
2. Acquiring region images in gray scale images
1) Statistics of image gray values
2) Acquiring image seed points
Gaussian smoothing is performed on the image gray histogram to obtain a smoothed histogram; the difference between the histogram before smoothing and the smoothed histogram is taken to obtain a difference histogram; the gray values (in the range 0-255) whose counts in the difference histogram are larger than a quantity threshold k are taken as the peaks of the original (pre-smoothing) histogram, and the pixel points with these gray values are taken as the seed points in the image.
3) Region segmentation
And carrying out region segmentation on the image by a seed growth method according to the positions of the seed points to obtain a region image after the image segmentation.
Thus, the gray image and the segmented region images are obtained.
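By way of non-limiting illustration, the seed-point extraction and region growing described above could be sketched in Python roughly as follows (OpenCV, NumPy and SciPy are assumed to be available; the count threshold, smoothing sigma, growing tolerance and file name are illustrative assumptions, not values fixed by this embodiment):

```python
import cv2
import numpy as np
from scipy.ndimage import gaussian_filter1d

def seed_gray_values(gray, count_threshold=50, sigma=3.0):
    """Gray values whose count in the difference histogram exceeds the threshold."""
    hist = cv2.calcHist([gray], [0], None, [256], [0, 256]).ravel()  # original gray histogram
    smoothed = gaussian_filter1d(hist, sigma=sigma)                  # Gaussian-smoothed histogram
    diff = hist - smoothed                                           # difference histogram
    return np.where(diff > count_threshold)[0]                       # histogram peaks -> seed gray values

def grow_regions(gray, seed_values, tol=10):
    """Very simple stand-in for seed growing: pixels within `tol` of a seed
    gray value are grouped into connected regions labelled per seed."""
    labels = np.zeros(gray.shape, dtype=np.int32)
    for idx, v in enumerate(seed_values, start=1):
        mask = (np.abs(gray.astype(np.int32) - int(v)) <= tol).astype(np.uint8)
        _, comp = cv2.connectedComponents(mask)
        labels[(comp > 0) & (labels == 0)] = idx
    return labels

img = cv2.imread("input.png")                    # hypothetical input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)     # graying of the RGB image
region_labels = grow_regions(gray, seed_gray_values(gray))
```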
S202, calculating Euclidean distance between images of different areas
The gray entropy and the texture entropy of each region image in the gray image are taken as the coordinates of that region image's center point, and the Euclidean distance between region images is calculated from these center point coordinates.
Calculating the gray entropy and the texture entropy of each area image, expressing the gray entropy and the texture entropy by using a binary group as the coordinates of the central point of the area image, namely (gray entropy and texture entropy), and calculating the Euclidean distance between the area images according to the coordinates of the central point of the area image;
the gray entropy is the entropy of the gray values of all the pixel points in the region image, the gray level co-occurrence matrix of the region image is constructed according to the gray values of all the pixel points in the region image, and the texture entropy is the entropy of the data in the gray level co-occurrence matrix of the region image.
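By way of non-limiting illustration, the gray entropy, the texture entropy based on the gray level co-occurrence matrix, and the Euclidean distance between region center points could be computed as in the following Python sketch (scikit-image's graycomatrix is assumed; the GLCM offset and angle are illustrative choices not fixed by this embodiment):

```python
import numpy as np
from skimage.feature import graycomatrix

def shannon_entropy(weights):
    """Entropy of a non-negative weight array, normalised to a distribution."""
    p = np.asarray(weights, dtype=np.float64).ravel()
    p = p / p.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def gray_entropy(region_pixels):
    """Entropy of the gray values of all pixels in a region image."""
    hist, _ = np.histogram(region_pixels, bins=256, range=(0, 256))
    return shannon_entropy(hist)

def texture_entropy(region_patch):
    """Entropy of the entries of the region's gray level co-occurrence matrix."""
    glcm = graycomatrix(region_patch.astype(np.uint8), distances=[1], angles=[0],
                        levels=256, symmetric=True, normed=True)
    return shannon_entropy(glcm[:, :, 0, 0])

def region_center(region_pixels, region_patch):
    """Center point of a region image, expressed as (gray entropy, texture entropy)."""
    return np.array([gray_entropy(region_pixels), texture_entropy(region_patch)])

def region_distance(center_a, center_b):
    """Euclidean distance between two region center points."""
    return float(np.linalg.norm(center_a - center_b))
```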
S203, obtaining an adjacency matrix between the regional images
The region similarity between the region images is calculated from the euclidean distance between the region images calculated from the gray entropy and the texture entropy, which are calculated from the gray value of the image, so that the similarity between the region images can be reflected.
And constructing an adjacency matrix of the gray level images according to the obtained similarity between the area images.
1. Calculating region similarity between region images
And normalizing all the distances by using the maximum value of the distances of each region and other regions to obtain the region similarity, wherein the specific calculation formula is as follows:
$$S_{i,j} = 1 - \frac{d_{i,j}}{D_i}$$
wherein: $S_{i,j}$ indicates the region similarity between the $i$-th region and the $j$-th region, $D_i$ indicates the maximum value of the distance between the $i$-th region and the other regions, $d_{i,j}$ indicates the distance between the $i$-th region and the $j$-th region, $i$ and $j$ are the sequence numbers of the segmented region images, and $i \neq j$.
2. constructing an adjacency matrix between region images
The adjacency matrix is obtained after the similarity between the different regions is calculated. As shown in FIG. 2, the row and column indices 1, 2, 3, 4 of the adjacency matrix represent the different regions, and the entry $S_{i,j}$ represents the region similarity between region $i$ and region $j$.
The adjacency matrix represents the similarity of information of different areas.
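By way of non-limiting illustration, the adjacency matrix could be assembled as below, where the similarity is read as one minus the distance normalised by each region's maximum distance to the other regions (this reading of the formula above, and the optional symmetrisation, are assumptions):

```python
import numpy as np

def adjacency_matrix(centers):
    """centers: (n, 2) array of (gray entropy, texture entropy) per region image."""
    centers = np.asarray(centers, dtype=np.float64)
    d = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1)  # pairwise Euclidean distances
    d_max = d.max(axis=1, keepdims=True)                                    # max distance of each region to the others
    s = 1.0 - d / np.where(d_max > 0, d_max, 1.0)                           # normalised region similarity
    s = 0.5 * (s + s.T)                                                     # optional: make the matrix symmetric
    np.fill_diagonal(s, 1.0)                                                # a region is fully similar to itself
    return s
```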
S204, respectively sampling the regional images
And (3) sampling the region images in the gray level images, and counting the change results of the region images after each sampling to obtain the change information of each region image, so that the region images are more accurately classified according to the information of each region.
And respectively carrying out Gaussian pyramid downsampling on each region image to obtain sampling images under different scales.
Setting sampling parameters: mean sampling, 3*3 window.
Pyramid sampling stop condition: after each sampling, the minimum overlap rate of the matched regions is calculated; when the minimum overlap rate at the k-th sampling is smaller than 0.8, which indicates that the error between the sampled region image and the original region image has become large, sampling stops.
At this time, a sampling image of each region image is obtained after a plurality of pyramid samplings are performed on each region image.
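By way of non-limiting illustration, the mean-filtered 3x3 pyramid sampling and the overlap-based stopping rule might be sketched as follows (the overlap measure below, comparing the upsampled level against the previous one, is a simplified placeholder for the matched-region overlap rate described above):

```python
import cv2
import numpy as np

def downsample_once(img):
    """One pyramid step: 3x3 mean filtering followed by dropping every second pixel."""
    return cv2.blur(img, (3, 3))[::2, ::2]

def pyramid_sample(region, min_overlap=0.8, max_levels=10):
    """Downsample a region image until the overlap with the previous level falls below min_overlap."""
    levels = [region]
    for _ in range(max_levels):
        nxt = downsample_once(levels[-1])
        h, w = levels[-1].shape[:2]
        up = cv2.resize(nxt, (w, h), interpolation=cv2.INTER_NEAREST)  # back to the previous size
        overlap = float(np.mean(up == levels[-1]))                     # crude proxy for the region overlap rate
        if overlap < min_overlap:
            break                                                      # error versus the previous level is too large
        levels.append(nxt)
    return levels
```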
S205, calculating the similarity between the two area images
Through the change calculation for the different regions during the sampling process, the change sequence of binary groups of each region, namely the change rate sequence composed of the change rates of the gray entropy and the texture entropy, is obtained, and the uniformity between every two regions is calculated.
1. Acquiring a sequence of rates of change of images of different regions
And (3) respectively multiplying the difference value between the gray entropy and the texture entropy of each region image sampled and sampled at the previous time and the region repetition rate sampled and sampled at the previous time, and calculating the change rate of the gray entropy and the change rate of the texture entropy after the current sampling to obtain a binary vector as a change rate binary group, wherein the specific calculation formula is as follows:
$$v_H = r \cdot \Delta H, \qquad v_T = r \cdot \Delta T$$
wherein: $v_H$ is the rate of change of gray entropy, $v_T$ is the rate of change of texture entropy, $r$ is the region repetition rate between the current sampling and the previous sampling, $\Delta H$ is the difference between the gray entropy after the current sampling and after the previous sampling, and $\Delta T$ is the difference between the texture entropy after the current sampling and after the previous sampling; the change rate binary group is the vector $(v_H, v_T)$;
the region repetition rate is the ratio of the maximum overlapping area of the current sampling and the previous sampling of the region image to the region image area of the previous sampling.
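By way of non-limiting illustration, the change rate binary group and the region repetition rate of one sampling step could be computed as follows (the binary-mask overlap below is a simplified stand-in for the maximum overlapping area described above; the symbol names follow the formula):

```python
import numpy as np

def repetition_rate(mask_now, mask_prev):
    """Ratio of the overlapping area of the current and previous samplings to the
    area of the previously sampled region (masks assumed aligned and equally sized)."""
    overlap = np.logical_and(mask_now, mask_prev).sum()
    return float(overlap) / float(mask_prev.sum())

def change_rate_tuple(gray_entropy_now, gray_entropy_prev,
                      texture_entropy_now, texture_entropy_prev, r):
    """(v_H, v_T): entropy differences between this sampling and the previous one,
    each multiplied by the region repetition rate r."""
    return (r * (gray_entropy_now - gray_entropy_prev),
            r * (texture_entropy_now - texture_entropy_prev))
```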
2. Calculating the similarity between two-by-two region images
And matching the change rate sequences of the two region images by using a DTW algorithm, calculating the DTW distance between the two region images, and taking the DTW distance as the region uniformity between the two region images.
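By way of non-limiting illustration, a small self-contained dynamic time warping routine over two change rate sequences (each element being a (v_H, v_T) pair) could look like this; the returned DTW distance is then used as the region uniformity:

```python
import numpy as np

def dtw_distance(seq_a, seq_b):
    """Classic O(n*m) DTW between two sequences of 2-D change rate tuples."""
    a = np.asarray(seq_a, dtype=np.float64)
    b = np.asarray(seq_b, dtype=np.float64)
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])  # local distance between tuples
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return float(cost[n, m])
```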
S206, obtaining a change consistency adjacency matrix
And constructing a sampling adjacent matrix according to the uniformity between every two areas, and obtaining a variation consistency adjacent matrix according to the adjacent matrix of the gray image and the sampling adjacent matrix.
1. Constructing a sampling adjacency matrix
The sampling adjacency matrix is constructed from the calculated region uniformity between every two regions, by the same method as that used to construct the adjacency matrix of the gray image.
2. Obtained variation consistency adjacency matrix
And obtaining a variation consistency adjacent matrix by carrying out dot multiplication calculation on the adjacent matrix of the gray level image and the obtained sampling adjacent matrix.
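By way of non-limiting illustration, the dot multiplication is read here as an element-wise (Hadamard) product of the two matrices:

```python
import numpy as np

def change_consistency_matrix(adjacency, sampling_adjacency):
    """Element-wise product of the gray-image adjacency matrix and the sampling adjacency matrix."""
    return np.asarray(adjacency) * np.asarray(sampling_adjacency)
```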
S207, classifying area images to realize distributed computation
Dividing the data of the change consistency adjacent matrix according to the set category threshold, classifying the regional images according to the division result, and carrying out distributed calculation on the images to be identified according to the classified categories.
Setting a category threshold value, dividing the region images corresponding to the data which are larger than or equal to the category threshold value in the change consistency adjacent matrix into one category, and dividing the region images corresponding to the data which are smaller than the category threshold value into one category, so as to realize the classification of the region images.
For example: as shown in FIG. 3, the change consistency adjacency matrix is a symmetric matrix, so only one half of the matrix needs to be considered. The class threshold is set to 0.8. Because the change consistency values between regions 1 and 2, regions 1 and 3, and regions 2 and 3 are all greater than or equal to the class threshold 0.8, regions 1, 2 and 3 are divided into the same class, while region 4 forms a class of its own. The distributed computation on the whole image is then performed according to the divided classes.
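By way of non-limiting illustration, the thresholding and grouping step could be sketched as below; the transitive (union-find style) grouping of regions whose pairwise consistency meets the threshold is an assumed reading of the classification rule:

```python
import numpy as np

def classify_regions(consistency, threshold=0.8):
    """Group regions so that any pair with consistency >= threshold lands in the same class."""
    n = consistency.shape[0]
    parent = list(range(n))

    def find(x):                        # union-find with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for i in range(n):
        for j in range(i + 1, n):
            if consistency[i, j] >= threshold:
                parent[find(i)] = find(j)

    classes = {}
    for i in range(n):
        classes.setdefault(find(i), []).append(i)
    return list(classes.values())       # each list of region indices can be handled by one compute node

# For the example of FIG. 3 (regions 1-3 mutually consistent, region 4 apart) this yields [[0, 1, 2], [3]].
```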
Based on the same inventive concept as the above method, the present embodiment further provides a distributed computing system for image recognition, where the distributed computing system for image recognition includes: the system comprises an image processing module, a matrix construction module and a characteristic association module, wherein the image processing module, the matrix construction module and the characteristic association module are used for realizing gray conversion of an image to be identified as described in an embodiment of a distributed computing method for image identification, and the segmented area image is obtained by carrying out area segmentation on a gray histogram of the image to be identified; carrying out Gaussian pyramid sampling on each area image in the gray level image to obtain a sampled image of each area image after each sampling; obtaining a change sequence of each region image according to a change rate binary group of each sampling of each image region, and calculating the uniformity between every two region images according to the change sequence of the region images; obtaining a sampling adjacent matrix according to the regional uniformity among regional images, and obtaining a variation consistency adjacent matrix by utilizing the adjacent matrix of the gray level images and the sampling adjacent matrix; and setting a category threshold, classifying the regional image according to the category threshold and the data in the change consistency adjacent matrix, and classifying the regional image features with strong relevance into one type for recognition by adopting distributed computation according to the classified image data.
Because the gray conversion of the image to be identified, the region segmentation based on the gray histogram of the image to be identified to obtain the segmented region images, the Gaussian pyramid sampling of each region image in the gray image to obtain the sampled image after each sampling, the construction of the change sequence of each region image from the change rate binary groups of each sampling, the calculation of the uniformity between every two region images from these change sequences, the construction of the sampling adjacency matrix from the region uniformity, the derivation of the change consistency adjacency matrix from the adjacency matrix of the gray image and the sampling adjacency matrix, and the classification of the region images according to the category threshold and the data in the change consistency adjacency matrix, whereby region image features with strong relevance are grouped into one class and recognised by distributed computation, have all been described in the embodiments of the distributed computing method for image recognition above, the description is not repeated here.
The foregoing description of the preferred embodiments of the invention is not intended to be limiting, but rather is intended to cover all modifications, equivalents, alternatives, and improvements that fall within the spirit and scope of the invention.

Claims (8)

1. A distributed computing method for image recognition, comprising:
s1: obtaining an area image in a gray level image of an image to be identified:
graying the image to be identified to obtain a gray image;
obtaining all area images in the gray image by using gray values of all pixel points in the gray image and a seed growth method;
s2: acquiring an adjacency matrix of the gray level image:
calculating Euclidean distance between different area images according to gray entropy and texture entropy of each area image in the gray image;
calculating the region similarity between the region images according to the distance between the region images in the gray level image, and obtaining an adjacent matrix by using the region similarity between the image regions;
s3: acquiring a change consistency adjacency matrix:
carrying out Gaussian pyramid sampling on each area image in the gray level image to obtain a sampled image of each area image after each sampling;
respectively calculating the difference value of the gray entropy and the texture entropy of each region image after each sampling and the region image sampled before, and calculating a change rate binary set of each region image after each sampling according to the difference value of the gray entropy and the texture entropy of each region image after each sampling and the repetition rate of the region image after each sampling, wherein the change rate binary set comprises the change rate of the gray entropy and the change rate of the texture entropy;
counting the change rate binary groups of each image area obtained after each sampling and the previous sampling to form a change sequence of each area image, and calculating the similarity between every two area images according to the change sequence of each area image;
obtaining a sampling adjacent matrix according to the regional uniformity among regional images, and performing dot multiplication calculation on the adjacent matrix of the gray level images and the sampling adjacent matrix to obtain a variation consistency adjacent matrix;
s4: performing distributed computation on the image data:
and setting a category threshold, classifying the regional image according to the category threshold and the data in the change consistency adjacent matrix, and recognizing the image to be recognized by adopting distributed calculation according to the classified image data.
2. The distributed computing method for image recognition according to claim 1, wherein the process of obtaining the area image in the gray image by using the gray value of each pixel point in the gray image and the seed growth method is as follows:
s1-1: counting the gray value of the gray image to obtain a gray histogram, and performing Gaussian smoothing on the gray histogram to obtain a smoothed gray histogram;
s1-2: performing difference on the original gray level histogram of the image and the smoothed gray level histogram to obtain a difference value histogram, and obtaining pixel points corresponding to gray level values with the frequency of the pixel points larger than a quantity threshold value in the difference value histogram as seed points of the image;
s1-3: and dividing the image by using the seed point as a category center through a region growing method to obtain a divided region image.
3. The distributed computing method for image recognition according to claim 1, wherein the computing method for computing the region similarity between the regions according to the distance between the regions is as follows:
the Euclidean distance between every two areas is calculated, all the distances are normalized by using the distance maximum value of each area and other areas to obtain the area similarity, and the specific calculation formula is as follows:
$$S_{i,j} = 1 - \frac{d_{i,j}}{D_i}$$
wherein: $S_{i,j}$ indicates the region similarity between the $i$-th region and the $j$-th region, $D_i$ indicates the maximum value of the distance between the $i$-th region and the other regions, $d_{i,j}$ indicates the distance between the $i$-th region and the $j$-th region, $i$ and $j$ are the sequence numbers of the segmented region images, and $i \neq j$.
4. A distributed computing method for image recognition according to claim 3, wherein the method for computing the euclidean distance between two regions is:
calculating the gray entropy and the texture entropy of each area image, expressing the gray entropy and the texture entropy by using a binary group as the coordinates of the central point of the area image, and calculating the Euclidean distance between the area images according to the coordinates of the central point of the area image;
the gray entropy is the entropy of the gray values of all pixel points in the region image, and the texture entropy is the entropy of the data in the gray co-occurrence matrix of the region image.
5. A distributed computing method for image recognition according to claim 1, wherein the method of computing the rate of change doublet for each region is as follows:
and (3) respectively multiplying the difference value between the gray entropy and the texture entropy of each region image sampled and sampled at the previous time and the region repetition rate sampled and sampled at the previous time, and calculating the change rate of the gray entropy and the change rate of the texture entropy after the current sampling to obtain a binary vector as a change rate binary group, wherein the specific calculation formula is as follows:
$$v_H = r \cdot \Delta H, \qquad v_T = r \cdot \Delta T$$
wherein: $v_H$ is the rate of change of gray entropy, $v_T$ is the rate of change of texture entropy, $r$ is the region repetition rate between the current sampling and the previous sampling, $\Delta H$ is the difference between the gray entropy after the current sampling and after the previous sampling, and $\Delta T$ is the difference between the texture entropy after the current sampling and after the previous sampling; the change rate binary group is the vector $(v_H, v_T)$;
the region repetition rate is the ratio of the maximum overlapping area of the current sampling and the previous sampling of the region image to the region image area of the previous sampling.
6. A distributed computing method for image recognition according to claim 1, wherein the method for computing the uniformity between the two area images is: and matching the change rate sequences of the two region images by using a DTW algorithm, calculating the DTW distance between the two region images, and taking the DTW distance as the region uniformity between the two region images.
7. A distributed computing method for image recognition according to claim 1, wherein the method of classifying regional images according to the data in the category threshold and change consistency adjacency matrix is as follows:
setting a category threshold value, dividing the region images corresponding to the data which are larger than or equal to the category threshold value in the change consistency adjacent matrix into one category, and dividing the region images corresponding to the data which are smaller than the category threshold value into one category, so as to realize the classification of the region images.
8. A distributed computing system for image recognition, comprising an image processing module, an image data analysis module, a feature association module, characterized in that:
an image processing module: carrying out gray conversion on the image to be identified, and carrying out region segmentation on the gray histogram of the image to be identified to obtain a segmented region image;
and a matrix construction module: calculating Euclidean distance between different area images according to gray entropy and texture entropy of each area image in the gray image; calculating the region similarity between the region images according to the distance between the region images in the gray level image, and obtaining an adjacent matrix by using the region similarity between the image regions;
carrying out Gaussian pyramid sampling on each area image in the gray level image to obtain a sampled image of each area image after each sampling;
respectively calculating the difference value of the gray entropy and the texture entropy of each region image after each sampling and the region image sampled before, and calculating a change rate binary set of each region image after each sampling according to the difference value of the gray entropy and the texture entropy of each region image after each sampling and the repetition rate of the region image after each sampling, wherein the change rate binary set comprises the change rate of the gray entropy and the change rate of the texture entropy;
counting the change rate binary groups of each image area obtained after each sampling and the previous sampling to form a change sequence of each area image, and calculating the similarity between every two area images according to the change sequence of each area image;
obtaining a sampling adjacent matrix according to the regional uniformity among regional images, and performing dot multiplication calculation on the adjacent matrix of the gray level images and the sampling adjacent matrix to obtain a variation consistency adjacent matrix;
and a characteristic association module: and setting a category threshold, classifying the regional image according to the category threshold and the data in the change consistency adjacent matrix, and classifying the regional image features with strong relevance into one type for recognition by adopting distributed computation according to the classified image data.
CN202210645370.5A 2022-06-08 2022-06-08 Distributed computing method and system for image recognition Active CN115131296B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210645370.5A CN115131296B (en) 2022-06-08 2022-06-08 Distributed computing method and system for image recognition

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210645370.5A CN115131296B (en) 2022-06-08 2022-06-08 Distributed computing method and system for image recognition

Publications (2)

Publication Number Publication Date
CN115131296A CN115131296A (en) 2022-09-30
CN115131296B true CN115131296B (en) 2024-02-27

Family

ID=83378063

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210645370.5A Active CN115131296B (en) 2022-06-08 2022-06-08 Distributed computing method and system for image recognition

Country Status (1)

Country Link
CN (1) CN115131296B (en)

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE112008003959T5 (en) * 2008-07-31 2011-06-01 Hewlett-Packard Development Co., L.P., Houston Perceptual segmentation of images
DE112015002681B4 (en) * 2014-06-06 2022-09-29 Mitsubishi Electric Corporation IMAGE ANALYSIS METHOD, IMAGE ANALYZER, IMAGE ANALYZER SYSTEM AND PORTABLE IMAGE ANALYZER

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101950364A (en) * 2010-08-30 2011-01-19 西安电子科技大学 Remote sensing image change detection method based on neighbourhood similarity and threshold segmentation
CN109711228A (en) * 2017-10-25 2019-05-03 腾讯科技(深圳)有限公司 A kind of image processing method that realizing image recognition and device, electronic equipment
CN110443806A (en) * 2019-04-30 2019-11-12 浙江大学 A kind of transparent floating harmful influence image partition method of the water surface based on targets improvement processing
CN111553870A (en) * 2020-07-13 2020-08-18 成都中轨轨道设备有限公司 Image processing method based on distributed system
CN114387201A (en) * 2021-04-08 2022-04-22 透彻影像科技(南京)有限公司 Cytopathic image auxiliary diagnosis system based on deep learning and reinforcement learning
CN113642550A (en) * 2021-07-20 2021-11-12 南京红松信息技术有限公司 Entropy maximization card-smearing identification method based on pixel probability distribution statistics
CN113963041A (en) * 2021-08-30 2022-01-21 南京市晨枭软件技术有限公司 Image texture recognition method and system

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Improved window adaptive gray level co-occurrence matrix for extraction and analysis of texture characteristics of pulmonary nodules; Hao Chen et al.; Computer Methods and Programs in Biomedicine; full text *
Research on distributed classification processing methods for hyperspectral image data; Yu Yi; 《中国优秀硕士学位论文全文数据库 工程科技Ⅱ辑》 (China Masters' Theses Full-text Database, Engineering Science and Technology II); full text *

Also Published As

Publication number Publication date
CN115131296A (en) 2022-09-30

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20240115

Address after: Room A1, 1122, No. 42 Shuangshan Avenue, Nansha District, Guangzhou City, Guangdong Province, 510000

Applicant after: Guangzhou Dongchao Intelligent Technology Co.,Ltd.

Address before: Room 605-2, Building A, Longgang Science and Technology Park, No. 1 Hengyuan Road, Economic and Technological Development Zone, Nanjing City, Jiangsu Province, 210000

Applicant before: Nanjing Xuanjing Lemin Technology Co.,Ltd.

GR01 Patent grant