CN110008912B - Social platform matching method and system based on plant identification - Google Patents


Publication number
CN110008912B
CN110008912B (application CN201910286271.0A)
Authority
CN
China
Prior art keywords
image
user
information
waveform
matching
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910286271.0A
Other languages
Chinese (zh)
Other versions
CN110008912A (en
Inventor
李晨
左东昊
江昕阳
贾小琦
许宁
Current Assignee
Northeastern University China
Original Assignee
Northeastern University China
Priority date
Filing date
Publication date
Application filed by Northeastern University China filed Critical Northeastern University China
Priority to CN201910286271.0A priority Critical patent/CN110008912B/en
Publication of CN110008912A publication Critical patent/CN110008912A/en
Application granted granted Critical
Publication of CN110008912B publication Critical patent/CN110008912B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 50/00 Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q 50/01 Social networking
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V 10/462 Salient features, e.g. scale invariant feature transforms [SIFT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/20 Scenes; Scene-specific elements in augmented reality scenes
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P 90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P 90/30 Computing systems specially adapted for manufacturing


Abstract

The invention relates to a social platform matching method and system based on plant identification. The method comprises the following steps: acquiring a plant leaf image shot by a registered user of a social platform, together with the shooting information of the image; processing the plant leaf image to obtain a leaf segmentation image and a vein image; acquiring basic features, waveform features, skeleton features and shooting angle information; inputting the basic, waveform and skeleton features into a pre-trained classification neural network to obtain leaf species information; and, using the leaf species information, user registration information and shooting angle information of the user to be matched and of all other users together with a pre-trained matching neural network, recommending at least one user as a matching result. The segmentation result is close to the true leaf shape with little noise, segmentation precision is high, and the species identification result is accurate; in addition, because social platform user matching combines the species identification result with user information, the method has good practicability.

Description

Social platform matching method and system based on plant identification
Technical Field
The invention belongs to the technical field of social networking, and particularly relates to a social platform matching method and system based on plant identification.
Background
Existing segmentation algorithms for plant leaf images against complex backgrounds segment the image using the excess-green feature (ExG = 2G - R - B). This feature is sensitive to leaf-surface reflections caused by illumination and shadow in the image, is easily disturbed by lighting, and extracts the region of interest well only when the background is uniform and the colour contrast is large.
First, existing plant leaf identification combines the excess-green algorithm (ExG) with a bottom-hat transform to segment leaf photographs with complex backgrounds. As shown in fig. 6, the method comprises three steps: (1) apply the excess-green algorithm and the bottom-hat transform separately to the collected original leaf image, and binarize each result with the OTSU maximum between-class variance algorithm. The ExG step removes background whose G channel component differs strongly from the R and B channels; the bottom-hat transform removes background whose G channel component differs little from the R and B channels, and also corrects uneven illumination, yielding clearer leaf edges and veins. (2) Combine the two binary results with an XOR operation to obtain a binary leaf image with distinct veins and edges. Because the excess-green result lacks vein information while the bottom-hat result retains it, the XOR operation fully preserves details of the original leaf image such as serrations and veins. (3) Perform morphological processing and refined segmentation on the XOR result.
Once the complete edge is obtained, basic morphological operations such as opening and closing can be applied, and a watershed or region-growing algorithm is used to segment out the contour of the target leaf; the final segmentation result is obtained by point-multiplying this contour with the original leaf image matrix. However, when the excess-green result and the bottom-hat result are XOR-ed, the veins can cause the leaf itself to be split apart, so that for some images the segmentation result becomes half a leaf and the region of interest is incomplete. The approach therefore does not suit most leaves, its degree of automation is low, and it tends to over-segment.
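The excess-green computation and the XOR step of the prior-art pipeline can be illustrated with a minimal NumPy toy (this is not the patent's implementation: the synthetic pixel values are invented, and the bottom-hat result is faked as a hand-made mask rather than computed):

```python
import numpy as np

def exg(rgb):
    """Excess-green index used by the prior art: ExG = 2G - R - B."""
    rgb = rgb.astype(float)
    return 2 * rgb[..., 1] - rgb[..., 0] - rgb[..., 2]

# toy pixels: a green leaf pixel scores high, grey soil scores near zero
leaf, soil = np.array([[40, 200, 40]]), np.array([[120, 120, 120]])
print(exg(leaf)[0], exg(soil)[0])     # 320.0 vs 0.0

# step (2): XOR of the two binarized results exposes detail present in
# only one of them, e.g. a vein that ExG flattens but bottom-hat keeps
exg_bin = np.zeros((8, 8), bool)
exg_bin[2:6, 2:6] = True              # solid leaf mask, veins lost
bothat_bin = exg_bin.copy()
bothat_bin[4, 2:6] = False            # a vein line survives bottom-hat
vein_detail = exg_bin ^ bothat_bin
print(vein_detail.sum())              # only the 4 vein pixels differ
```

The same XOR that recovers the vein pixels here is what splits real leaves in half when the vein crosses the whole mask, which is the over-segmentation defect described above.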
Second, as shown in fig. 7, matching between unacquainted users currently relies on personality matching: a personality test is administered, and users are matched according to the test result plus limiting conditions. Existing stranger-matching methods classify users coarsely, lack a learning capability or learn poorly, and cannot match strangers in a personalized way; when a user's matching requirements change, matching cannot be updated in time; and leaf information is not associated with stranger matching at all.
Disclosure of Invention
Technical problem to be solved
In order to solve the technical problems in the prior art that leaf segmentation quality is poor, species identification accuracy is low, and plant species information is not used when matching users, the invention provides, in one aspect, a social platform matching method based on plant identification and, in another aspect, a social platform matching system based on plant identification.
(II) technical scheme
In order to achieve the purpose, the main technical scheme of the method comprises the following steps:
s1, acquiring a plant leaf image shot by a registered user in a social platform and shooting information of the image;
s2, processing the plant leaf image to obtain a leaf segmentation image and a vein image;
s3, extracting features of the leaf segmentation image and the vein image, obtaining basic features and waveform features of the leaf segmentation image and skeleton features of the vein image, and obtaining shooting angle information of the plant leaf image according to the basic features;
s4, inputting the basic features, the waveform features and the skeleton features into a pre-trained classification neural network to obtain blade type information;
s5, standardizing the blade type information, the user registration information, the shooting angle information and the shooting information of the user to be matched and all users, inputting the standardized information into a pre-trained matching neural network, obtaining the matching degree of the user to be matched and all users, and recommending at least one user as a matching result for the user to be matched according to the matching degree;
the shooting information comprises shooting time and shooting place of the plant leaf image, and the user registration information comprises character, gender and age of the user.
Optionally, before the step S1, the method further includes:
s0, using pre-acquired basic feature, waveform feature and skeleton feature sample data as input of a pre-constructed classification neural network model, using pre-marked plant species as output of the pre-constructed classification neural network model, and acquiring a pre-trained classification neural network.
Optionally, in step S2, acquiring the leaf segmentation image and the vein image comprises:
S21, processing the plant leaf image with the excess-green algorithm and the HSV transform respectively to obtain an excess-green image and an HSV image;
S22, applying a region-growing algorithm to the excess-green image to obtain a grown image;
S23, threshold-segmenting the excess-green image to obtain a threshold-segmented image, and comparing the area of the grown image with the area of the region of interest in the threshold-segmented image:
if the area of the grown image is smaller than half the pixel count of the grown image, or larger than twice the area of the region of interest in the threshold-segmented image, adjust the threshold, the region-growing step length and the seed point, and return to step S22;
otherwise, take the current grown image as the leaf segmentation image;
S24, sequentially performing point multiplication with the HSV image, graying and rotation on the leaf segmentation image to obtain an HSV gray image, and performing edge extraction on the HSV gray image to obtain the vein image.
Optionally, acquiring the waveform features in step S3 comprises:
S31, obtaining an original waveform from the distances from points on the leaf edge in the leaf segmentation image to the leaf centroid;
S32, smoothing the original waveform with one of Gaussian filtering, curve fitting or wavelet transformation to obtain a shape waveform;
S33, subtracting the shape waveform from the original waveform to obtain a leaf-edge information wave;
S34, dividing the shape waveform and the leaf-edge information wave each into 64 equal parts, and taking the 128 resulting values as the waveform features.
Optionally, step S3 further includes:
taking, as the skeleton features, the ratios of the number of skeleton branches and of the number of skeleton pixel points to the square root of the minimum-bounding-rectangle area of the vein image.
Optionally, step S3 further includes:
extracting a plurality of features from the leaf segmentation image as the basic features;
and taking, as the shooting angle information, the basic feature given by the angle between the x-axis and the major axis of the ellipse having the same normalized second central moments as the region of interest.
Optionally, in step S5, the normalizing process of the input of the pre-trained matching neural network includes:
scaling the users' age difference into the interval 0-1;
representing the users' genders by 0.1 and 1: 1 if the same, 0.1 if different;
marking plants of the same species as 1, the same genus as 0.5, the same family as 0.25, the same order as 0.125, the same class as 0.0625, and all other plants as 0;
respectively acquiring the time difference T, the shooting-place distance difference D and the shooting-angle difference θ with formulas I to III below;
the formula I is as follows:
(published as formula image BDA0002023375090000041; not reproduced in this text)
the formula II is as follows:
(published as formula image BDA0002023375090000042; not reproduced in this text)
the formula III is as follows:
(published as formula image BDA0002023375090000043; not reproduced in this text)
where T is the difference between the two users' picture-shooting times, in hours, with T = 0 if T < 0; D is the difference between the two users' shooting places, in kilometres, with D = 0 if D < 0; and α is the angle, in degrees, between the x-axis and the major axis of the ellipse having the same normalized second central moments as the region of interest, with α ∈ (-90, 90).
Optionally, the pre-trained matching neural network comprises an input layer, a first hidden layer, a second hidden layer and an output layer, and is trained with an incremental-learning method;
the input layer takes data in the range 0-1;
the first hidden layer is a convolutional layer in which four 1x4 filters perform convolution with stride 1, followed by a nonlinear activation layer using the ReLU function;
the second hidden layer is a convolutional layer in which four 1x4 filters perform convolution with stride 1, followed by a nonlinear activation layer using the ReLU function;
the output layer is a convolutional layer in which four 1x4 filters perform convolution with stride 1, followed by a nonlinear activation layer using the tanh function.
The main technical scheme of the system comprises the following steps:
the device comprises an acquisition unit, a storage unit, an image processing unit, a feature extraction unit and a user matching unit;
the acquisition unit is used for acquiring registration information of a user, an image of the plant leaves shot by the user and shooting information of the image;
the image processing unit is used for generating a leaf segmentation image and a vein image according to the plant leaf image and acquiring shooting angle information of the plant leaf image according to the basic characteristics;
the feature extraction unit is used for generating basic features and waveform features according to the leaf segmentation images and generating skeleton features according to the vein images;
the user matching unit is used for recommending at least one user for the user to be matched as a matching result according to the requirement of the user to be matched;
the storage unit is used for storing the registration information of the user, the plant leaf image shot by the user and the shooting information of the image, the shooting angle information, the basic feature, the waveform feature and the skeleton feature;
wherein the user registration information includes the personality, gender and age of the user.
Optionally, the feature extraction unit is further configured to generate an original waveform from the distances from points on the leaf edge in the leaf segmentation image to the leaf centroid, generate a shape waveform from the original waveform, generate a leaf-edge information wave from the original waveform and the shape waveform, and generate the waveform features from the shape waveform and the leaf-edge information wave.
(III) Advantageous effects
The method of the invention has the following beneficial effects: first, an ideal segmentation result close to the true leaf shape can be obtained within a reasonable number of segmentation passes, the final result contains no large noise, and segmentation precision is greatly improved, which prepares well for subsequent steps such as feature extraction and image classification; second, social platform user matching combines the species identification result with user information, giving the method good practicability.
The system of the invention improves the precision of leaf segmentation and classification, and forms a more accurate and more automated process of plant leaf segmentation, feature extraction, classification and user matching.
Drawings
Fig. 1 is a schematic flowchart of a social platform matching method based on plant identification according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a leaf segmentation image and a vein image generated by using a plant leaf image according to an embodiment of the present invention;
FIG. 3a is a schematic diagram of generating a base feature by using a leaf segmentation image according to an embodiment of the present invention;
FIG. 3b is a schematic diagram of generating skeleton features using vein images according to an embodiment of the present invention;
FIG. 3c is a schematic diagram of a method for generating waveform features using a segmented leaf image according to an embodiment of the present invention;
FIG. 4 is a user matching graph provided by an embodiment of the invention;
FIG. 5 is a schematic structural diagram of a matching neural network according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of a conventional plant leaf identification method;
fig. 7 is a flowchart illustrating a conventional strange user matching method.
Detailed Description
For a better understanding of the present invention, reference will now be made in detail to the present embodiments of the invention, which are illustrated in the accompanying drawings.
Example one
As shown in fig. 1, the embodiment provides a social platform matching method based on plant identification, which specifically includes the following steps:
s0, using pre-acquired basic feature, waveform feature and skeleton feature sample data as input of a pre-constructed classification neural network model, using pre-marked plant species as output of the pre-constructed classification neural network model, and acquiring a pre-trained classification neural network, wherein the classification neural network is used for identifying the species of plants;
s1, acquiring a plant leaf image shot by a registered user in a social platform and shooting information of the image;
s2, processing the plant leaf image a to obtain a leaf segmentation image d and a vein image g; as shown in fig. 2, the specific steps include:
s21, aiming at the plant leaf image a, respectively utilizing an ultragreen algorithm to process and an HSV algorithm to obtain an ultragreen image b and an HSV image e; for example, the HSV image in the present embodiment may be replaced with a HIS, YUV, or YcbCr image, which is only exemplified by HSV images in the present embodiment;
s22, acquiring a growth image c by utilizing a region growth algorithm aiming at the ultragreen image b;
s23, performing threshold segmentation on the ultragreen image b to obtain a threshold segmentation graph, and comparing the area of the growing image c with the area of the region of interest in the threshold segmentation graph;
if the area of the growing image c is smaller than half of the number of pixel points of the growing image or larger than twice of the area of the region of interest in the threshold segmentation image; adjusting the threshold, the region growing step length and the seed point, and returning to the step S22;
otherwise, taking the current growth image c as a blade segmentation image d;
and S24, sequentially carrying out point multiplication, graying and rotation on the blade segmentation image c and the HSV image to obtain an HSV gray image f, and carrying out edge extraction on the HSV gray image f to obtain a vein image g.
Specifically, vein-information feature extraction is introduced and the HSV image is combined with the excess-green image; an ideal segmentation result close to the true leaf shape can thus be obtained within a reasonable number of segmentation passes, the final result contains no large noise, and segmentation precision is greatly improved, which prepares well for subsequent steps such as feature extraction and image classification;
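The excess-green map, the region-growing step and the area check of S21-S23 can be sketched as follows. This is a NumPy-only toy under our own assumptions: a simple histogram-based Otsu threshold, a BFS region grow with an intensity tolerance standing in for the embodiment's threshold/step-length/seed adjustment, and a synthetic test image; all function names are ours.

```python
from collections import deque
import numpy as np

def excess_green(rgb):
    """ExG = 2G - R - B, the map the leaf is segmented on."""
    rgb = rgb.astype(float)
    return 2 * rgb[..., 1] - rgb[..., 0] - rgb[..., 2]

def otsu_mask(img):
    """Binarize with Otsu's maximum between-class variance threshold."""
    x = img - img.min()
    x = (x / (x.max() + 1e-9) * 255).astype(np.uint8)
    p = np.bincount(x.ravel(), minlength=256).astype(float)
    p /= p.sum()
    omega = np.cumsum(p)                      # class-0 probability
    mu = np.cumsum(p * np.arange(256))        # class-0 cumulative mean
    sigma_b2 = (mu[-1] * omega - mu) ** 2 / (omega * (1 - omega) + 1e-9)
    return x > np.argmax(sigma_b2)

def region_grow(img, seed, tol):
    """BFS region growing: accept 4-neighbours within `tol` of the seed value."""
    h, w = img.shape
    mask = np.zeros((h, w), bool)
    mask[seed] = True
    q, ref = deque([seed]), img[seed]
    while q:
        i, j = q.popleft()
        for ni, nj in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
            if 0 <= ni < h and 0 <= nj < w and not mask[ni, nj] \
                    and abs(img[ni, nj] - ref) <= tol:
                mask[ni, nj] = True
                q.append((ni, nj))
    return mask

def segment_leaf(rgb, seed, tol=40.0, max_iter=5):
    """Re-grow until the region area passes the S23 check against the
    Otsu region-of-interest area (accept between 0.5x and 2x of it)."""
    exg = excess_green(rgb)
    roi_area = otsu_mask(exg).sum()
    for _ in range(max_iter):
        grown = region_grow(exg, seed, tol)
        area = grown.sum()
        if area < 0.5 * roi_area:
            tol *= 1.5        # region too small: loosen the tolerance
        elif area > 2.0 * roi_area:
            tol *= 0.5        # region too large: tighten the tolerance
        else:
            break             # accepted as the leaf segmentation image
    return grown

# toy image: a 16x16 green "leaf" square on a grey background
img = np.full((32, 32, 3), 120, np.uint8)
img[8:24, 8:24] = (40, 200, 40)
mask = segment_leaf(img, seed=(16, 16))
print(mask.sum())
```

On the toy image the grown region and the Otsu region of interest coincide (256 pixels), so the first pass is accepted; the real embodiment additionally adjusts the seed point and growing step length, which this sketch folds into the single tolerance parameter.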
s3, extracting features of the leaf segmentation image d and the vein image g, acquiring basic features and waveform features of the leaf segmentation image d and skeleton features of the vein image g, and acquiring shooting angle information of the plant leaf image according to the basic features;
as shown in fig. 3a and 3b, a plurality of features are extracted from the segmented image d of the blade as basic features, for example, 21 features are extracted from the segmented image d of the blade, which specifically include: the total number of pixels in each region of the image, the minimum rectangle containing the corresponding region, the centroid (center of gravity) of each region, the length of the major axis of an ellipse having the same standard second-order central moment as the region (in the pixel sense), the length of the minor axis of an ellipse having the same standard second-order central moment as the region (in the pixel sense), the eccentricity of an ellipse having the same standard second-order central moment as the region (which may be characterized), the angle (degree) of intersection of the major axis of an ellipse having the same standard second-order central moment as the region and the x-axis, a logical matrix having the same size as a region, a filled logical matrix having the same size as a region, the number of on pixels in a filled region image, the minimum convex polygon containing a region, the minimum convex polygon of the region drawn, the number of on pixels in a filled region convex polygon image, one topological invariant in a geometric topology — the euler number, the eight-directional region extreme point, the diameter of a circle having the same area as the region, the pixel proportion in a region and the minimum convex polygon of the region simultaneously, the pixel proportion of the region in a region in the region and the minimum boundary thereof, a subscript of the rectangle, a pixel proportion of the pixel of the index of the region, and a corresponding pixel of the index of the region, and a perimeter of the region of the minimum boundary of the minimum region; and taking the ratio of the number of skeleton branches, the number of skeleton pixel points and the square root of the minimum circumscribed rectangle 
area of the vein image g as the skeleton characteristic.
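A few of these descriptors, including the ellipse-orientation angle that is later reused as the shooting angle information, can be computed from image moments roughly as follows (a NumPy sketch with our own function name and a toy mask; note the angle sign here follows image coordinates with the y-axis pointing down, and the small pixel-quantization correction some toolboxes apply is omitted):

```python
import numpy as np

def basic_features(mask):
    """Area, centroid, and the orientation of the ellipse with the same
    second central moments as the region, in degrees in (-90, 90]."""
    ys, xs = np.nonzero(mask)
    area = len(xs)
    cx, cy = xs.mean(), ys.mean()
    # normalized second central moments of the pixel coordinates
    mu20 = ((xs - cx) ** 2).mean()
    mu02 = ((ys - cy) ** 2).mean()
    mu11 = ((xs - cx) * (ys - cy)).mean()
    # angle between the x-axis and the equivalent ellipse's major axis
    theta = 0.5 * np.degrees(np.arctan2(2 * mu11, mu20 - mu02))
    return area, (cx, cy), theta

# a thin diagonal bar: its major axis lies at 45 degrees to the x-axis
mask = np.eye(20, dtype=bool)
area, centroid, theta = basic_features(mask)
print(area, round(theta, 1))
```

For a real leaf mask this angle captures how the photographed leaf was tilted, which is why the embodiment feeds it into the matching stage as shooting angle information.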
As shown in fig. 3c, acquiring the waveform features in step S3 comprises the following steps:
S31, obtaining an original waveform from the distances from points on the leaf edge in the leaf segmentation image d to the leaf centroid;
S32, smoothing the original waveform with one of Gaussian filtering, curve fitting or wavelet transformation to obtain a shape waveform;
S33, subtracting the shape waveform from the original waveform to obtain a leaf-edge information wave;
S34, dividing the shape waveform and the leaf-edge information wave each into 64 equal parts, and taking the 128 resulting values as the waveform features.
The angle between the x-axis and the major axis of the ellipse having the same normalized second central moments as the region of interest, already computed among the basic features, is taken as the shooting angle information.
S4, inputting the basic features, the waveform features and the skeleton features into the pre-trained classification neural network to obtain the leaf species information; alternatively, this embodiment may use an SVM to obtain the leaf species information. Because the method describes the vein branching of tree leaves with a new feature, plant species identification is more accurate.
S5, standardizing the leaf species information, user registration information, shooting angle information and shooting information of the user to be matched and of all users, inputting the standardized information into the pre-trained matching neural network to obtain the matching degree between the user to be matched and every other user, and recommending at least one user as a matching result according to the matching degree;
as shown in fig. 4, the shooting information includes the shooting time and shooting place of the plant leaf image, and the user registration information includes the personality, gender and age of the user; because the user matching step combines the species identification result with the user information, social platform user matching has good practicability;
specifically, for example, the normalization process of the input data of the matching neural network in the embodiment includes:
scaling the users' age difference into the interval 0-1;
representing the users' genders by 0.1 and 1: 1 if the same, 0.1 if different; marking plants of the same species as 1, the same genus as 0.5, the same family as 0.25, the same order as 0.125, the same class as 0.0625, and all other plants as 0;
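These categorical encodings can be sketched as follows. The dict layout, the 50-year span assumed for scaling the age difference into 0-1, and the personality encoding (taken from the later description of the nine-type test) are our own assumptions; the taxonomy weights follow the description directly.

```python
def normalize_match_inputs(u1, u2):
    """Map a user pair's attributes into [0, 1] for the matching network."""
    # age: difference scaled into 0-1 (a 50-year span is assumed here)
    age = 1 - min(abs(u1["age"] - u2["age"]), 50) / 50
    # gender: 1 if the same, 0.1 if different
    gender = 1.0 if u1["gender"] == u2["gender"] else 0.1
    # taxonomy: 1, 0.5, 0.25, 0.125, 0.0625 for the lowest shared rank
    taxonomy = 0.0
    for i, rank in enumerate(["species", "genus", "family", "order", "class"]):
        if u1["plant"][rank] == u2["plant"][rank]:
            taxonomy = 1.0 / 2 ** i
            break
    # personality: 1 only if both tested and identical, else 0
    personality = 1.0 if u1.get("personality") is not None \
        and u1.get("personality") == u2.get("personality") else 0.0
    return [age, gender, taxonomy, personality]

maple = {"species": "A. palmatum", "genus": "Acer", "family": "Sapindaceae",
         "order": "Sapindales", "class": "Magnoliopsida"}
other_maple = dict(maple, species="A. rubrum")
a = {"age": 24, "gender": "F", "plant": maple, "personality": 4}
b = {"age": 29, "gender": "M", "plant": other_maple, "personality": 4}
print(normalize_match_inputs(a, b))
```

Here two users photographed different maple species, so the lowest shared rank is the genus and the taxonomy term is 0.5; every output lies in 0-1 as the matching network's input layer requires.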
respectively obtaining the time difference T, the shooting-place distance difference D and the shooting-angle difference θ with formulas 1 to 3 below;
equation 1:
(published as formula image BDA0002023375090000091; not reproduced in this text)
equation 2:
(published as formula image BDA0002023375090000101; not reproduced in this text)
equation 3:
(published as formula image BDA0002023375090000102; not reproduced in this text)
where T is the difference between the two users' picture-shooting times, in hours, with T = 0 if T < 0; D is the difference between the two users' shooting places, in kilometres, with D = 0 if D < 0; and α is the angle, in degrees, between the x-axis and the major axis of the ellipse having the same normalized second central moments as the region of interest, with α ∈ (-90, 90);
for example, in this embodiment, the personality information in the user registration information is obtained through the personality test, a nine-type personality test may be provided for the user during user registration, and if the user personality is the same, the output is 1, and if the user personality is different or at least one user does not perform the personality test, the output is 0.
As shown in fig. 5, each user's data is stored by the server but not yet matched; for example, when user 1 wants to contact a stranger, the matching degrees between user 1 and user 2 and between user 1 and user 3 are computed, and the other users are finally sorted by matching degree and placed into user 1's recommendation list;
the pre-trained matching neural network of the present embodiment includes: the input layer, the first hidden layer, the second hidden layer and the output layer, and the pre-trained matching neural network is trained by adopting an incremental learning method;
inputting data of 0-1 in the input layer;
the first hidden layer is a convolution layer, four filters 1*4 carry out convolution operation with the step length of 1, a nonlinear active layer is passed after the convolution operation is finished, and a ReLU function is used as an active function;
the second hidden layer is a convolution layer, four filters 1*4 carry out convolution operation with the step length of 1, a nonlinear active layer is passed after the convolution operation is finished, and a ReLU function is used as an active function;
the output layer is a convolution layer, four filters 1*4 carry out convolution operation with step size 1, and after the convolution operation is finished, the convolution operation passes through a nonlinear active layer, and the tanh function is used as an active function.
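The layer stack can be sketched with a hand-rolled valid 1-D convolution (NumPy only). The patent publishes only the layer shapes, so the random weights, the 16-value input vector and the final mean-pooling of the output maps into a single matching degree are our assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d(x, w):
    """Valid 1-D convolution, stride 1: x is (in_channels, length),
    w is (out_channels, in_channels, 4)."""
    out_c, in_c, k = w.shape
    length = x.shape[1] - k + 1
    y = np.zeros((out_c, length))
    for o in range(out_c):
        for i in range(length):
            y[o, i] = np.sum(w[o] * x[:, i:i + k])
    return y

def matching_net(x, weights):
    """Two ReLU conv layers and a tanh conv output, four 1x4 filters each;
    the mean over the output maps is our assumed single matching degree."""
    h = np.maximum(conv1d(x[None, :], weights[0]), 0)   # first hidden layer
    h = np.maximum(conv1d(h, weights[1]), 0)            # second hidden layer
    y = np.tanh(conv1d(h, weights[2]))                  # output layer
    return float(y.mean())

# 16 normalized pair features in [0, 1], as produced by the S5 standardization
features = np.tile([0.9, 0.1, 0.5, 1.0], 4)
weights = [rng.normal(0, 0.5, (4, 1, 4)),   # input -> hidden 1
           rng.normal(0, 0.5, (4, 4, 4)),   # hidden 1 -> hidden 2
           rng.normal(0, 0.5, (4, 4, 4))]   # hidden 2 -> output
score = matching_net(features, weights)
print(-1.0 <= score <= 1.0)
```

With 1x4 filters and stride 1, a 16-value input shrinks to 13, 10 and then 7 positions per channel; the tanh output keeps every value, and hence the pooled score, inside [-1, 1], which suits a bipolar matching degree.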
Example two
The embodiment provides a social platform matching system based on plant identification, which specifically comprises:
the device comprises an acquisition unit, a storage unit, an image processing unit, a feature extraction unit and a user matching unit;
the acquisition unit is used for acquiring registration information of a user, an image of the plant leaves shot by the user and shooting information of the image;
the image processing unit is used for generating a leaf segmentation image and a vein image according to the plant leaf image and acquiring shooting angle information of the plant leaf image according to the basic features;
the feature extraction unit is used for generating basic features and waveform features according to the leaf segmentation images and generating skeleton features according to the vein images;
the user matching unit is used for recommending at least one user for the user to be matched as a matching result according to the requirement of the user to be matched;
the storage unit is used for storing registration information of a user, plant leaf images shot by the user, shooting information of the images, shooting angle information, basic features, waveform features and skeleton features;
wherein the user registration information includes the personality, gender and age of the user.
Preferably, the feature extraction unit is further configured to generate an original waveform from the distances between points on the leaf edge in the leaf segmentation image and the leaf's center of gravity, generate a shape waveform from the original waveform, generate a leaf-edge information wave from the original waveform and the shape waveform, and generate the waveform feature from the shape waveform and the leaf-edge information wave.
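The shooting angle produced by the image processing unit is defined in the claims as the angle between the x-axis and the major axis of the ellipse having the same second-order central moments as the region of interest, which follows from the standard image-moment relation θ = ½·atan2(2μ11, μ20 − μ02). A minimal sketch, assuming a binary leaf mask:

```python
import numpy as np

def orientation_deg(mask):
    """Angle (degrees) between the x-axis and the major axis of the ellipse
    with the same second-order central moments as the binary region `mask`."""
    ys, xs = np.nonzero(mask)
    xc, yc = xs.mean(), ys.mean()
    mu20 = np.mean((xs - xc) ** 2)
    mu02 = np.mean((ys - yc) ** 2)
    mu11 = np.mean((xs - xc) * (ys - yc))
    theta = 0.5 * np.arctan2(2 * mu11, mu20 - mu02)  # radians; maps into (-90, 90] degrees
    return np.degrees(theta)
```

A horizontal bar yields roughly 0°, a vertical bar roughly 90°, matching the claimed α ∈ (−90, 90) interval up to the boundary.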
Finally, it should be noted that the above embodiments are only used to illustrate the technical solution of the present invention, not to limit it. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some or all of their technical features may be equivalently replaced, and such modifications or substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (8)

1. A social platform matching method based on plant identification is characterized by comprising the following steps:
s1, acquiring a plant leaf image shot by a registered user in a social platform and shooting information of the image;
s2, processing the plant leaf image to obtain a leaf segmentation image and a vein image;
s3, extracting features of the leaf segmentation image to obtain its basic features and waveform features; extracting features of the vein image to obtain its skeleton features, and obtaining shooting angle information of the plant leaf image from the basic features;
specifically, the angle between the x-axis and the major axis of the ellipse having the same normalized second-order central moments as the region of interest is used as the shooting angle information;
s4, inputting the basic features, the waveform features and the skeleton features into a pre-trained classification neural network to obtain leaf species information;
s5, standardizing the leaf species information, user registration information, shooting angle information and shooting information of the user to be matched and of all users, inputting the standardized data into a pre-trained matching neural network to obtain the matching degree between the user to be matched and every other user, and recommending at least one user to the user to be matched as a matching result according to the matching degree;
the shooting information comprises the shooting time and shooting place of the plant leaf image, and the user registration information comprises the personality, gender and age of the user;
acquiring a leaf segmentation image and a vein image includes:
s21, for the plant leaf image, obtaining a super-green image and an HSV image using the super-green algorithm and the HSV algorithm, respectively;
s22, obtaining a grown image from the super-green image using a region-growing algorithm;
s23, performing threshold segmentation on the super-green image to obtain a threshold-segmented image, and comparing the area of the grown image with the area of the region of interest in the threshold-segmented image;
if the area of the grown image is smaller than half, or larger than twice, the area of the region of interest in the threshold-segmented image, adjusting the threshold, the region-growing step length and the seed point, and returning to step S22;
otherwise, taking the current grown image as the leaf segmentation image;
and S24, sequentially performing dot multiplication of the leaf segmentation image with the HSV image, graying and rotation to obtain an HSV gray image, and performing edge extraction on the HSV gray image to obtain the vein image.
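Steps S21-S23 can be illustrated with a short sketch. The function names are hypothetical, the super-green index is taken to be the common excess-green form 2G − R − B, and the acceptance test encodes one reading of the (ambiguously worded) area condition in step S23:

```python
import numpy as np

def super_green(rgb):
    """Super-green (excess-green) index 2G - R - B for an H x W x 3 image."""
    rgb = rgb.astype(float)
    return 2.0 * rgb[..., 1] - rgb[..., 0] - rgb[..., 2]

def grown_region_ok(grown, roi):
    """Acceptance test of step S23: the grown region is kept only if its area
    lies between half and twice the area of the thresholded region of interest."""
    a, b = grown.sum(), roi.sum()
    return 0.5 * b <= a <= 2.0 * b
```

If `grown_region_ok` fails, the threshold, region-growing step length and seed point would be adjusted and the growing repeated, as the claim describes.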
2. The method of claim 1, further comprising, before step S1:
and S0, using pre-acquired sample data of basic features, waveform features and skeleton features as the input of a pre-constructed classification neural network model and pre-labeled plant species as its output, to obtain the pre-trained classification neural network.
3. The method of claim 1, wherein acquiring the waveform features in step S3 comprises:
s31, obtaining an original waveform from the distances between points on the leaf edge in the leaf segmentation image and the leaf's center of gravity;
s32, obtaining a shape waveform from the original waveform using one of Gaussian filtering, curve fitting or wavelet transformation;
s33, subtracting the shape waveform from the original waveform to obtain a leaf-edge information wave;
s34, dividing the shape waveform and the leaf-edge information wave into 64 equal parts each, and taking the 128 resulting data points as the waveform features.
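Steps S31-S34 might be sketched as follows, assuming a closed leaf contour given as an N×2 array. Gaussian filtering is used for S32 (one of the three options the claim allows), the "64 equal parts" are approximated by sampling 64 evenly spaced points, and the kernel width is an illustrative choice:

```python
import numpy as np

def waveform_features(contour, centroid, n=64, sigma=3.0):
    """Steps S31-S34: distance-to-centroid waveform, Gaussian-smoothed shape
    waveform, edge-information wave, and 2*n resampled values as features."""
    d = np.linalg.norm(contour - centroid, axis=1)      # S31: original waveform
    k = np.exp(-0.5 * (np.arange(-10, 11) / sigma) ** 2)
    k /= k.sum()                                        # normalized Gaussian kernel
    shape = np.convolve(d, k, mode="same")              # S32: Gaussian filtering
    edge = d - shape                                    # S33: edge information wave
    idx = np.linspace(0, len(d) - 1, n).astype(int)     # S34: 64 samples of each wave
    return np.concatenate([shape[idx], edge[idx]])      # 128 feature values
```

For a circular contour the original waveform is constant, so away from the convolution boundary the shape waveform equals the radius and the edge-information wave is zero.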
4. The method of claim 3, wherein step S3 further comprises:
and taking the ratios of the number of skeleton branches and the number of skeleton pixel points to the square root of the minimum circumscribed rectangle area of the vein image as the skeleton features.
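The claim's phrasing leaves the exact ratios ambiguous; one plausible reading is that the branch count and the pixel count are each divided by the square root of the circumscribed-rectangle area. A sketch under that assumption, using an axis-aligned bounding rectangle in place of the minimum-area rectangle and a naive more-than-two-neighbours branch test:

```python
import numpy as np

def skeleton_features(skel):
    """Claim 4 sketch: skeleton branch-point count and skeleton pixel count,
    each divided by the square root of the skeleton's bounding-rectangle area."""
    ys, xs = np.nonzero(skel)
    area = (np.ptp(ys) + 1) * (np.ptp(xs) + 1)   # axis-aligned bounding rectangle
    n_pix = len(ys)
    # branch points: skeleton pixels with more than two 8-connected neighbours
    padded = np.pad(skel.astype(int), 1)
    nbrs = sum(np.roll(np.roll(padded, dy, 0), dx, 1)
               for dy in (-1, 0, 1) for dx in (-1, 0, 1)) - padded
    n_branch = int(np.sum((padded == 1) & (nbrs > 2)))
    s = np.sqrt(area)
    return n_branch / s, n_pix / s
```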
5. The method of claim 4, wherein in step S5, the normalization process of the inputs to the pre-trained matching neural network comprises:
normalizing the users' age difference to the interval 0-1;
representing the users' genders by 1 and 0.1: 1 if the genders are the same, 0.1 if they differ;
marking plants of the same species as 1, the same genus as 0.5, the same family as 0.25, the same order as 0.125, the same class as 0.0625, and all remaining plants as 0;
respectively obtaining the shooting time difference T, the shooting-place distance difference D and the shooting angle difference θ using the following formulas I to III;
Formula I: [formula published as image FDA0004020082270000031]
Formula II: [formula published as image FDA0004020082270000032]
Formula III: [formula published as image FDA0004020082270000033]
wherein T is the difference of the two users' picture shooting times, in hours, and T = 0 if the difference is less than 0; D is the difference of the two users' shooting distances, in kilometers, and D = 0 if the difference is less than 0; α is the angle between the x-axis and the major axis of the ellipse having the same normalized second-order central moments as the region of interest, in degrees, with α ∈ (−90, 90).
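The discrete normalizations of claim 5 are easy to reproduce. Formulas I-III are published only as images, so the time, distance and angle terms are omitted here, and the divisor used to map the age difference into 0-1 is an assumed value:

```python
def normalize_pair(age_a, age_b, sex_a, sex_b, taxon_level):
    """Claim 5 sketch of the discrete normalizations (age, gender, taxonomy).
    The age divisor of 100 is an illustrative assumption."""
    age = min(abs(age_a - age_b) / 100.0, 1.0)   # assumed scaling of age difference to 0-1
    sex = 1.0 if sex_a == sex_b else 0.1         # same gender -> 1, different -> 0.1
    taxonomy = {"species": 1.0, "genus": 0.5, "family": 0.25,
                "order": 0.125, "class": 0.0625}
    plant = taxonomy.get(taxon_level, 0.0)       # all remaining cases -> 0
    return age, sex, plant
```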
6. The method of any one of claims 1-5, wherein the pre-trained matching neural network comprises an input layer, a first hidden layer, a second hidden layer and an output layer, and is trained with an incremental learning method;
the input layer receives data in the range 0-1;
the first hidden layer is a convolutional layer in which four 1×4 filters perform convolution with a stride of 1, followed by a nonlinear activation layer using the ReLU function;
the second hidden layer is a convolutional layer in which four 1×4 filters perform convolution with a stride of 1, followed by a nonlinear activation layer using the ReLU function;
the output layer is a convolutional layer in which four 1×4 filters perform convolution with a stride of 1, followed by a nonlinear activation layer using the tanh function.
7. A social platform matching system based on plant identification, comprising:
the device comprises an acquisition unit, a storage unit, an image processing unit, a feature extraction unit and a user matching unit;
the acquisition unit is used for acquiring registration information of a user, an image of the plant leaves shot by the user and shooting information of the image;
the image processing unit is used for generating a leaf segmentation image and a vein image according to the plant leaf image;
the feature extraction unit is used for generating basic features and waveform features from the leaf segmentation image and generating skeleton features from the vein image; the image processing unit is further used for obtaining shooting angle information of the plant leaf image from the basic features; specifically, the angle between the x-axis and the major axis of the ellipse having the same normalized second-order central moments as the region of interest is used as the shooting angle information; the basic features, the waveform features and the skeleton features are input into a pre-trained classification neural network to obtain leaf species information;
the user matching unit is used for standardizing the leaf species information, user registration information, shooting angle information and shooting information of the user to be matched and of all users, inputting the standardized data into a pre-trained matching neural network to obtain the matching degree between the user to be matched and every other user, and recommending at least one user to the user to be matched as a matching result according to the matching degree;
the storage unit is used for storing the registration information of the user, the plant leaf image shot by the user and the shooting information of the image, the shooting angle information, the basic feature, the waveform feature and the skeleton feature;
wherein the user registration information comprises the personality, gender and age of the user;
the image processing unit acquiring the leaf segmentation image and the vein image includes:
s21, for the plant leaf image, obtaining a super-green image and an HSV image using the super-green algorithm and the HSV algorithm, respectively;
s22, obtaining a grown image from the super-green image using a region-growing algorithm;
s23, performing threshold segmentation on the super-green image to obtain a threshold-segmented image, and comparing the area of the grown image with the area of the region of interest in the threshold-segmented image;
if the area of the grown image is smaller than half, or larger than twice, the area of the region of interest in the threshold-segmented image, adjusting the threshold, the region-growing step length and the seed point, and returning to S22;
otherwise, taking the current grown image as the leaf segmentation image;
and S24, sequentially performing dot multiplication of the leaf segmentation image with the HSV image, graying and rotation to obtain an HSV gray image, and performing edge extraction on the HSV gray image to obtain the vein image.
8. The system of claim 7, wherein the feature extraction unit is further configured to:
obtain an original waveform from the distances between points on the leaf edge in the leaf segmentation image and the leaf's center of gravity;
obtain a shape waveform from the original waveform using one of Gaussian filtering, curve fitting or wavelet transformation;
subtract the shape waveform from the original waveform to obtain a leaf-edge information wave;
divide the shape waveform and the leaf-edge information wave into 64 equal parts each, and take the 128 resulting data points as the waveform features.
CN201910286271.0A 2019-04-10 2019-04-10 Social platform matching method and system based on plant identification Active CN110008912B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910286271.0A CN110008912B (en) 2019-04-10 2019-04-10 Social platform matching method and system based on plant identification

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910286271.0A CN110008912B (en) 2019-04-10 2019-04-10 Social platform matching method and system based on plant identification

Publications (2)

Publication Number Publication Date
CN110008912A CN110008912A (en) 2019-07-12
CN110008912B true CN110008912B (en) 2023-04-18

Family

ID=67170933

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910286271.0A Active CN110008912B (en) 2019-04-10 2019-04-10 Social platform matching method and system based on plant identification

Country Status (1)

Country Link
CN (1) CN110008912B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110533605B (en) * 2019-07-26 2023-06-02 遵义师范学院 Accurate noise point calibration method
CN112542163B (en) * 2019-09-04 2023-10-27 百度在线网络技术(北京)有限公司 Intelligent voice interaction method, device and storage medium
CN111833367A (en) * 2020-06-24 2020-10-27 中国第一汽车股份有限公司 Image processing method and device, vehicle and storage medium
CN111860330B (en) * 2020-07-21 2023-08-11 陕西工业职业技术学院 Apple leaf disease identification method based on multi-feature fusion and convolutional neural network

Citations (2)

Publication number Priority date Publication date Assignee Title
CN106599925A (en) * 2016-12-19 2017-04-26 广东技术师范学院 Plant leaf identification system and method based on deep learning
CN109325529A (en) * 2018-09-06 2019-02-12 安徽大学 Sketch identification method and application of sketch identification method in commodity retrieval

Family Cites Families (7)

Publication number Priority date Publication date Assignee Title
CN103984775A (en) * 2014-06-05 2014-08-13 网易(杭州)网络有限公司 Friend recommending method and equipment
CN104281650A (en) * 2014-09-15 2015-01-14 南京锐角信息科技有限公司 Friend search recommendation method and friend search recommendation system based on interest analysis
US20180246899A1 (en) * 2017-02-28 2018-08-30 Laserlike Inc. Generate an index for enhanced search based on user interests
US10380650B2 (en) * 2017-07-26 2019-08-13 Jehan Hamedi Systems and methods for automating content design transformations based on user preference and activity data
CN108182228A (en) * 2017-12-27 2018-06-19 北京奇虎科技有限公司 User social contact method, device and the computing device realized using augmented reality
CN109241454B (en) * 2018-07-18 2021-08-24 广东工业大学 Interest point recommendation method fusing social network and image content
CN109359675B (en) * 2018-09-28 2022-08-12 腾讯科技(武汉)有限公司 Image processing method and apparatus

Patent Citations (2)

Publication number Priority date Publication date Assignee Title
CN106599925A (en) * 2016-12-19 2017-04-26 广东技术师范学院 Plant leaf identification system and method based on deep learning
CN109325529A (en) * 2018-09-06 2019-02-12 安徽大学 Sketch identification method and application of sketch identification method in commodity retrieval

Non-Patent Citations (2)

Title
B. Sathya Bama et al., "Content Based Leaf Image Retrieval (CBLIR) Using Shape, Color and Texture Features", IJCSE, 2011, vol. 2, no. 2. *
Zhai Chuanmin et al., "Plant leaf image matching method based on shape context features", Journal of Guangxi Normal University (Natural Science Edition), 2009, vol. 27, no. 3. *

Also Published As

Publication number Publication date
CN110008912A (en) 2019-07-12

Similar Documents

Publication Publication Date Title
CN110008912B (en) Social platform matching method and system based on plant identification
CN109154978B (en) System and method for detecting plant diseases
Tan et al. Automatic extraction of built-up areas from panchromatic and multispectral remote sensing images using double-stream deep convolutional neural networks
CN109002755B (en) Age estimation model construction method and estimation method based on face image
Deng et al. Cloud detection in satellite images based on natural scene statistics and gabor features
Chawathe Rice disease detection by image analysis
CN109977899B (en) Training, reasoning and new variety adding method and system for article identification
CN111028923B (en) Digital pathological image staining normalization method, electronic device and storage medium
Yin et al. Crater detection based on gist features
Ahmad et al. Feature extraction of plant leaf using deep learning
CN108073947A (en) A kind of method for identifying blueberry kind
CN113066030A (en) Multispectral image panchromatic sharpening method and system based on space-spectrum fusion network
Bhimavarapu et al. Analysis and characterization of plant diseases using transfer learning
CN116543325A (en) Unmanned aerial vehicle image-based crop artificial intelligent automatic identification method and system
Mohtashamian et al. Automated plant species identification using leaf shape-based classification techniques: a case study on Iranian Maples
Li et al. A new combination classification of pixel-and object-based methods
CN117197450A (en) SAM model-based land parcel segmentation method
Huang et al. Research on crop planting area classification from remote sensing image based on deep learning
CN116452872A (en) Forest scene tree classification method based on improved deep pavv3+
CN110210574A (en) Diameter radar image decomposition method, Target Identification Unit and equipment
Zhang et al. A Mapping Approach for Eucalyptus Plantations Canopy and Single-Tree Using High-Resolution Satellite Images in Liuzhou, China
CN108664921A (en) Image-recognizing method and system based on bag of words under a kind of Android platform
Li et al. Crop region extraction of remote sensing images based on fuzzy ARTMAP and adaptive boost
Babu et al. A novel method based on chan vese segmentation for salient structure detection
Jagadesh et al. A Robust Skin Colour Segmentation Using Bivariate Pearson Type II [alpha][alpha](Bivariate Beta) Mixture Model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant