CN113191436A - Talent image tag identification method and system and cloud platform - Google Patents
Talent image tag identification method and system and cloud platform
- Publication number
- CN113191436A (application number CN202110496490.9A)
- Authority
- CN
- China
- Prior art keywords
- image
- key
- sample
- visual information
- graph
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/22—Matching criteria, e.g. proximity measures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/50—Information retrieval; Database structures therefor; File system structures therefor of still image data
- G06F16/58—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
- G06F16/5866—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using information manually generated, e.g. tags, keywords, comments, manually generated location and time information
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/46—Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
- G06V10/462—Salient features, e.g. scale invariant feature transforms [SIFT]
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- General Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Evolutionary Computation (AREA)
- Evolutionary Biology (AREA)
- Bioinformatics & Computational Biology (AREA)
- Artificial Intelligence (AREA)
- Life Sciences & Earth Sciences (AREA)
- Multimedia (AREA)
- Library & Information Science (AREA)
- Databases & Information Systems (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a talent image label identification method, system and cloud platform. After the key image feature vector of a sample image to be identified is obtained, the relevance of each key graphic sample to the graphic representation state weight of the sample image to be identified is determined, and talent image labels are then determined from the key graphic samples based on the image visualization type and relevance of each key graphic sample. By analysing the relevance of each key graphic sample, that is, the difference in the weight that each key graphic sample carries with respect to the actual sample image to be identified, the accuracy of talent image label identification is improved. Because the labels are determined separately for key graphic samples of different image visualization types, with a judgment mode suited to each type, the difference judgment is more targeted and the accuracy of talent image label identification is further improved.
Description
Technical Field
The disclosure relates to the technical field of image recognition, in particular to a talent image tag recognition method, a system and a cloud platform.
Background
The talent image identification library can be regarded as an electronic filing cabinet, that is, a place for storing electronic records, in which a user can add, retrieve, update and delete data. A so-called "database" is a collection of data that is stored together, can be shared by multiple users, has as little redundancy as possible, and is independent of any particular application. The data concerned here is obtained by identifying images with the related image recognition technology. Image recognition technology is an important technology of the information age; it arose so that computers, rather than human beings, could process large amounts of physical information. With the development of computer technology, the understanding of image recognition technology has become deeper and deeper. Image recognition comprises the steps of information acquisition, preprocessing, feature extraction and selection, classifier design and classification decision. After a brief review of image recognition technology, its technical principles and pattern recognition, neural-network-based image recognition, image recognition based on nonlinear dimension reduction and their applications are introduced. Image processing technology is therefore widely applied, everyday life can hardly do without image recognition technology, and research on image recognition technology is of great significance.
However, in the process of image recognition, because the amount of image data is very large, image recognition may suffer from certain deficiencies.
Disclosure of Invention
In order to solve the above technical problems in the related art, the present disclosure provides a talent image tag identification method, a system and a cloud platform.
The application provides a talent image tag identification method, which comprises the following steps:
in response to acquiring the key image feature vector of a sample image to be identified, respectively determining the relevance of each key graphic sample of the sample image to be identified to the graphic representation state weight of the sample image to be identified;
respectively acquiring the image visualization type of each key graph sample; wherein the image visualization type comprises visual information and non-visual information;
and determining talent image labels from the key graphic samples by using the image visualization types and the relevance of the key graphic samples.
Preferably, the step of respectively determining, in response to acquiring the key image feature vector of the sample image to be identified, the relevance of each key graphic sample of the sample image to be identified to the graphic representation state weight of the sample image to be identified includes:
acquiring a first global graph representation state of each key graph sample and a unit graph representation state of a target key graph sample;
screening the first global graph representation state based on the unit graph representation state of the target key graph sample to obtain a second global state of the local key graph sample after the target key graph sample is screened;
carrying out similarity judgment on the first global graph representation state and the second global state to obtain a graph representation state difference between the first global graph representation state and the second global graph representation state;
determining the relevance of the target key graph sample based on the graph characterization state difference.
Preferably, the first global graph representation state includes a global upload data volume and a global success data volume, and the unit graph representation state includes a unit upload data volume and a unit success data volume; the step of screening the first global graph representation state based on the unit graph representation state of the target key graph sample to obtain a second global state of the local key graph sample after the target key graph sample is screened includes:
matching the unit upload data volume with the global upload data volume to obtain a local upload data volume, and matching the unit success data volume with the global success data volume to obtain a local success data volume;
and obtaining the second global state based on a comparison result between the local upload data volume and the local success data volume.
Preferably, the step of acquiring the first global graphic representation state of each key graphic sample and the unit graphic representation state of the target key graphic sample further includes:
acquiring image attribute data of each key graphic sample;
performing dimension reduction processing on the image attribute data based on the image attribute data type of each key graphic sample;
and acquiring a first global graph representation state of each key graph sample and a unit graph representation state of the target key graph sample based on the image attribute data after the dimension reduction processing.
Preferably, the step of determining the talent image tag from each key graphic sample by using the image visualization type and the relevance of each key graphic sample comprises:
acquiring key graphic samples of visual information and key graphic samples of non-visual information based on the image visualization type of each key graphic sample;
determining talent image labels from the key graphic samples of the non-visual information by utilizing a neural network training model;
and determining the talent image label from the key graph sample of the visual information by using an iterative screening method.
Preferably, after the step of determining the talent image label from the key pattern sample of non-visual information by using the neural network training model and the step of determining the talent image label from the key pattern sample of visual information by using the iterative screening method, the method further comprises:
identifying a difference image identification result of the talent image tag;
and selecting a display mode corresponding to the difference image recognition result for difference display.
Preferably, the step of determining the talent image label from the key graphic sample of the non-visual information by using the neural network training model comprises:
sequentially sequencing the relevance of the key graph samples of the non-visual information to obtain a relevance set of the key graph samples of the non-visual information;
judging whether the relevance set meets a normal distribution matrix;
if the relevance set meets a normal distribution matrix, determining a first difference interval through a neural network based on the relevance set, and determining a key graph sample corresponding to the relevance in the first difference interval as a talent image label of non-visual information;
and if the relevance set does not meet the normal distribution matrix, determining to obtain a second difference interval through a boxplot difference value method based on the relevance set, and determining the key graph sample corresponding to the relevance in accordance with the second difference interval as the talent image label of the non-visual information.
Preferably, the step of identifying the difference image identification result of the talent image tag and selecting a display mode corresponding to the difference image identification result for difference display includes:
judging whether the global image identification result of the talent image label of the non-visual information exceeds a global threshold value or not;
if the global image identification result of the talent image label of the non-visual information exceeds the global threshold, determining that the difference image identification result of the talent image label of the non-visual information is a significant difference, and performing significant difference display on the talent image label of the non-visual information;
and if the global image identification result of the talent image label of the non-visual information does not exceed the global threshold, determining that the difference image identification result of the talent image label of the non-visual information is a common difference, and performing common difference display on the talent image label of the non-visual information.
The application provides talent image tag identification system, including image acquisition end and cloud platform, the image acquisition end with cloud platform communication connection, the cloud platform includes:
the image feature determination module is used for respectively determining, in response to acquiring the key image feature vector of the sample image to be identified, the relevance of each key graphic sample of the sample image to be identified to the graphic representation state weight of the sample image to be identified;
the image type acquisition module is used for respectively acquiring the image visualization type of each key graphic sample; wherein the image visualization type comprises visual information and non-visual information;
and the image label determining module is used for determining talent image labels from the key graphic samples by using the image visualization types and the relevance of the key graphic samples.
The application provides a cloud platform, including:
a processor, and
a memory and a network interface connected with the processor;
the network interface is connected with a nonvolatile memory in the intelligent equipment;
the processor retrieves a computer program from the non-volatile memory via the network interface when running, and runs the computer program via the memory to perform any of the above methods.
According to the above scheme, after the key image feature vector of the sample image to be identified is obtained, the relevance of each key graphic sample to the graphic representation state weight of the sample image to be identified is determined, and the talent image labels are then determined from the key graphic samples based on the image visualization type and relevance of each key graphic sample. By analysing the relevance of each key graphic sample, that is, the difference in the weight that each key graphic sample carries with respect to the actual sample image to be identified, the accuracy of talent image label identification is improved. Because the labels are determined separately for key graphic samples of different image visualization types, with a judgment mode suited to each type, the difference judgment is more targeted and the accuracy of talent image label identification is further improved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
Fig. 1 is a schematic diagram of an architecture of a talent image tag identification system according to an embodiment of the present invention;
fig. 2 is a flowchart of a method for identifying a talent image tag according to an embodiment of the present invention;
fig. 3 is a functional block diagram of a talent image tag identification apparatus according to an embodiment of the present invention.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all embodiments consistent with the present application; rather, they are merely examples of apparatus and methods consistent with certain aspects of the present application, as detailed in the appended claims.
To facilitate the explanation of the method, system and cloud platform for identifying image tags of talents, please refer to fig. 1, which provides a schematic diagram of a communication architecture of a system 100 for identifying image tags of talents according to an embodiment of the present invention. The talent image tag identification system 100 may include an image capturing end 200 and a cloud platform 300, where the image capturing end 200 and the cloud platform 300 are in communication connection.
In a specific embodiment, the cloud platform 300 may be a desktop computer, a tablet computer, a notebook computer, a mobile phone, or another cloud platform capable of implementing data processing and data communication, which is not limited herein.
On the basis, please refer to fig. 2 in combination, which is a flowchart illustrating a method for identifying a talent image tag according to an embodiment of the present invention, where the method for identifying a talent image tag can be applied to the cloud platform 300 in fig. 1, and further the method for identifying a talent image tag specifically includes the following steps S21-S23.
Step S21, responding to the obtained key image feature vectors of the sample images to be identified, and respectively determining the relevance of each key graphic sample of the sample images to be identified to the graphic representation state weight of the sample images to be identified.
Illustratively, the key image feature vector represents the content of the sample image to be identified, which can be directly embodied in the sample image to be identified.
And step S22, respectively acquiring the image visualization type of each key graphic sample.
Illustratively, the image visualization type includes visual information as well as non-visual information.
And step S23, determining talent image labels from the key graphic samples by using the image visualization types and the relevance of the key graphic samples.
Illustratively, the talent image tag represents the target content obtained from the sample image to be identified.
It can be understood that, when the contents described in the above steps S21 to S23 are executed, after the key image feature vector of the sample image to be identified is obtained, the relevance of each key graphic sample to the graphic representation state weight of the sample image to be identified is determined, and the talent image labels are then determined from the key graphic samples based on the image visualization type and relevance of each key graphic sample. By analysing the relevance of each key graphic sample, that is, the difference in the weight that each key graphic sample carries with respect to the actual sample image to be identified, the accuracy of talent image label identification is improved. Because the labels are determined separately for key graphic samples of different image visualization types, with a judgment mode suited to each type, the difference judgment is more targeted and the accuracy of talent image label identification is further improved.
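For orientation only, the overall flow of steps S21 to S23 could be sketched as in the following Python fragment. The cosine-similarity relevance, the field names and the top-k selection are stand-ins chosen for this sketch; the later embodiments refine both the relevance computation and the per-type label determination.

```python
import numpy as np

def identify_talent_image_tags(key_feature_vector, key_samples, top_k=3):
    """Toy end-to-end flow mirroring steps S21 to S23 (illustrative only)."""
    # Step S21: relevance of each key graphic sample to the sample image,
    # approximated here by cosine similarity of feature vectors.
    v = np.asarray(key_feature_vector, dtype=float)
    for s in key_samples:
        w = np.asarray(s["vector"], dtype=float)
        s["relevance"] = float(v @ w / (np.linalg.norm(v) * np.linalg.norm(w) + 1e-12))

    # Step S22: group the key graphic samples by their image visualization type.
    groups = {"visual": [], "non_visual": []}
    for s in key_samples:
        groups[s["visualization_type"]].append(s)

    # Step S23: pick label candidates per group; the top-k by relevance stands in
    # for the neural-network and iterative-screening branches described later.
    tags = []
    for group in groups.values():
        group.sort(key=lambda s: s["relevance"], reverse=True)
        tags.extend(group[:top_k])
    return tags
```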
In an alternative embodiment, when the key image feature vector of the sample image to be identified is acquired, the key image feature vector may be unreliable, which makes it difficult to reliably determine the relevance of each key graphic sample of the sample image to be identified to the graphic representation state weight of the sample image to be identified. To improve on this technical problem, the step described in step S21 of respectively determining, in response to acquiring the key image feature vector of the sample image to be identified, the relevance of each key graphic sample to the graphic representation state weight of the sample image to be identified may specifically include the following steps S211 to S214.
Step S211, obtaining a first global graph representation state of each key graph sample and a unit graph representation state of a target key graph sample.
Step S212, performing a screening process on the first global graph representation state based on the unit graph representation state of the target key graph sample, to obtain a second global state of the local key graph sample after the target key graph sample is screened.
Step S213, performing similarity judgment on the first global graph representation state and the second global state to obtain a graph representation state difference between the first global graph representation state and the second global graph representation state.
Step S214, determining the relevance of the target key graph sample based on the graph representation state difference.
It can be understood that, when the contents described in steps S211 to S214 are executed, the problem of an unreliable key image feature vector is avoided when the key image feature vector of the sample image to be identified is acquired, so that the relevance of each key graphic sample of the sample image to be identified to the graphic representation state weight of the sample image to be identified can be reliably determined.
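By way of illustration only, the following sketch shows one possible reading of steps S211 to S214, in which the graph representation states are treated as simple numeric vectors, the screening of step S212 removes the contribution of the target key graphic sample from the global state, and the relevance of step S214 is taken as the normalized magnitude of the resulting state difference. The names and the concrete arithmetic are assumptions of this sketch, not limitations of the method.

```python
import numpy as np

def relevance_of_target(global_state, unit_states, target_id):
    """Steps S211 to S214 (illustrative): relevance of one target key graphic sample.

    global_state: np.ndarray aggregating the graph representation state of all key
    graphic samples (first global graph representation state, step S211).
    unit_states: dict mapping sample id -> np.ndarray unit graph representation state.
    """
    # Step S212: screen the target sample out of the global state to obtain
    # the second global state of the remaining (local) key graphic samples.
    second_global_state = global_state - unit_states[target_id]

    # Step S213: similarity judgment between the first and second global states,
    # expressed here as the norm of their difference.
    state_difference = np.linalg.norm(global_state - second_global_state)

    # Step S214: relevance of the target sample, normalized by the global magnitude
    # so that samples contributing more to the global state receive higher relevance.
    return float(state_difference / (np.linalg.norm(global_state) + 1e-12))


# Minimal usage example with made-up states.
if __name__ == "__main__":
    units = {"a": np.array([3.0, 1.0]), "b": np.array([1.0, 2.0])}
    total = sum(units.values())
    print({k: round(relevance_of_target(total, units, k), 3) for k in units})
```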
In an alternative embodiment, the first global graph representation state includes a global upload data volume and a global success data volume, and the unit graph representation state includes a unit upload data volume and a unit success data volume. On this basis, the step described in step S212 of screening the first global graph representation state based on the unit graph representation state of the target key graph sample to obtain the second global state of the local key graph sample after the target key graph sample is screened may specifically include the contents described in the following steps q1 and q2.
And q1, matching the unit upload data volume with the global upload data volume to obtain a local upload data volume, and matching the unit success data volume with the global success data volume to obtain a local success data volume.
Step q2, obtaining the second global state based on the comparison result between the local upload data volume and the local success data volume.
It can be understood that, when the contents described in the above steps q1 and q2 are executed, since the first global graph representation state includes the global upload data volume and the global success data volume and the unit graph representation state includes the unit upload data volume and the unit success data volume, the problem of inaccurate screening is avoided when the first global graph representation state is screened based on the unit graph representation state of the target key graph sample, so that the second global state of the local key graph sample after the target key graph sample is screened can be accurately obtained.
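Under the assumption that the upload and success data volumes are plain counters, steps q1 and q2 could be sketched as below. The "matching" is read here as taking the share of the unit volume within the global volume, and the comparison result of step q2 as a simple ratio of those shares; this is only one possible interpretation, not the prescribed one.

```python
def second_global_state(unit_upload, unit_success, global_upload, global_success):
    """Steps q1 and q2 (illustrative): derive the second global state from data volumes."""
    # Step q1: match the unit volumes against the global volumes to obtain
    # the local upload and local success data volumes (here: proportional shares).
    local_upload = unit_upload / global_upload if global_upload else 0.0
    local_success = unit_success / global_success if global_success else 0.0

    # Step q2: the second global state is obtained from the comparison between
    # the local upload and local success data volumes (here: their ratio).
    return local_success / local_upload if local_upload else 0.0


# Example: a sample that accounts for 10% of uploads but 15% of successes.
print(second_global_state(unit_upload=10, unit_success=9,
                          global_upload=100, global_success=60))
```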
On the above basis, the step described in step S211 of obtaining the first global graph representation state of each key graph sample and the unit graph representation state of the target key graph sample may specifically include the contents described in the following steps w1 to w3.
And step w1, acquiring the image attribute data of each key graphic sample.
And step w2, performing dimension reduction processing on the image attribute data based on the image attribute data type of each key graphic sample.
And step w3, acquiring the first global graph representation state of each key graph sample and the unit graph representation state of the target key graph sample based on the image attribute data after the dimension reduction processing.
It can be understood that, when the contents described in steps w1 to w3 are executed, the problem of inaccurate image attribute data is avoided, so that the first global graph representation state of each key graph sample and the unit graph representation state of the target key graph sample can be accurately obtained.
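As one concrete way of carrying out the dimension reduction of steps w1 to w3, the image attribute data of the key graphic samples could be stacked into a matrix and reduced with PCA. The choice of PCA, of two retained components, and of the sum of the reduced rows as the first global graph representation state are assumptions of this sketch, since the embodiment does not prescribe a particular dimension reduction method.

```python
import numpy as np

def reduce_attribute_data(attribute_matrix, n_components=2):
    """Steps w1 to w3 (illustrative): PCA-style reduction of image attribute data.

    attribute_matrix: shape (n_samples, n_attributes), one row per key graphic sample.
    """
    x = np.asarray(attribute_matrix, dtype=float)
    x_centered = x - x.mean(axis=0)                     # step w2: remove attribute means
    # Principal components via SVD of the centered attribute data.
    _, _, vt = np.linalg.svd(x_centered, full_matrices=False)
    reduced = x_centered @ vt[:n_components].T          # project onto top components

    # Step w3 (illustrative): unit states are the reduced rows, and the first global
    # graph representation state is taken here as their element-wise sum.
    unit_states = reduced
    first_global_state = reduced.sum(axis=0)
    return first_global_state, unit_states


# Example with random attribute data for five key graphic samples.
rng = np.random.default_rng(0)
g, u = reduce_attribute_data(rng.normal(size=(5, 8)))
print(g.shape, u.shape)   # (2,) (5, 2)
```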
In an alternative embodiment, when the image visualization type and relevance of each key graphic sample are used to determine the talent image label from the key graphic samples, inaccurate relevance may make it difficult to accurately determine the talent image label. To improve on this technical problem, the step described in step S23 of determining the talent image label from each key graphic sample by using the image visualization type and relevance of each key graphic sample may specifically include the contents described in the following steps S231 to S233.
And step S231, acquiring key graphic samples of visual information and key graphic samples of non-visual information based on the image visualization type of each key graphic sample.
Step S232, determining talent image labels from the key graphic samples of the non-visual information by utilizing a neural network training model.
And step S233, determining talent image labels from the key graphic samples of the visual information by using an iterative screening method.
It can be understood that, when the contents described in the above steps S231 to S233 are executed, the problem of inaccurate relevance is avoided when the image visualization type and relevance of each key graphic sample are used, so that the talent image label can be accurately determined.
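A compact dispatch of steps S231 to S233 might look like the sketch below; the two branch callables are placeholders standing in for the neural network training model and the iterative screening method described in this embodiment.

```python
def determine_tags(key_samples, non_visual_branch, visual_branch):
    """Steps S231 to S233 (illustrative): split samples by visualization type and
    determine talent image labels per branch.

    key_samples: iterable of dicts with 'visualization_type' and 'relevance'.
    non_visual_branch / visual_branch: callables standing in for the neural
    network training model and the iterative screening method, respectively.
    """
    # Step S231: acquire key graphic samples of visual and of non-visual information.
    visual = [s for s in key_samples if s["visualization_type"] == "visual"]
    non_visual = [s for s in key_samples if s["visualization_type"] == "non_visual"]

    # Steps S232 and S233: apply the branch appropriate to each visualization type.
    return non_visual_branch(non_visual) + visual_branch(visual)


# Toy usage: each branch simply keeps the most relevant sample.
pick_top = lambda group: sorted(group, key=lambda s: s["relevance"])[-1:] if group else []
samples = [{"visualization_type": "visual", "relevance": 0.8},
           {"visualization_type": "non_visual", "relevance": 0.6}]
print(determine_tags(samples, pick_top, pick_top))
```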
On the above basis, after the step of determining the talent image labels from the key graphic samples of non-visual information by using the neural network training model and the step of determining the talent image labels from the key graphic samples of visual information by using the iterative screening method, the method further includes the following steps e1 and e2.
And e1, identifying the difference image identification result of the talent image label.
And e2, selecting a display mode corresponding to the difference image recognition result to perform difference display.
It can be understood that, when the above steps e1 and e2 are performed, the difference image recognition result can be effectively displayed, so that errors are shown in real time; this reduces errors and improves the accuracy of the final result.
In an alternative embodiment, when the neural network training model is used to train the key graphic samples of the non-visual information, the sequential ordering of the relevance of the key graphic samples of the non-visual information may be inaccurate, which makes it difficult to accurately determine the talent image label. To improve on this technical problem, the step described in step S232 of determining the talent image label from the key graphic samples of the non-visual information by using the neural network training model may specifically include the contents described in the following steps r1 to r4.
And r1, sequentially sequencing the relevance of the key graph samples of the non-visual information to obtain a relevance set of the key graph samples of the non-visual information.
And r2, judging whether the relevance set meets a normal distribution matrix.
And r3, if the relevance set meets a normal distribution matrix, determining a first difference interval through a neural network based on the relevance set, and determining a key graph sample corresponding to the relevance conforming to the first difference interval as a talent image label of non-visual information.
And r4, if the relevance set does not meet a normal distribution matrix, determining a second difference interval by a boxplot difference value method based on the relevance set, and determining a key graph sample corresponding to the relevance of the second difference interval as a talent image label of non-visual information.
It can be understood that, when the contents described in the above steps r1 to r4 are executed, the neural network training model is used to train the key graphic samples of the non-visual information in a way that avoids an inaccurate ordering of their relevances, so that the talent image label can be accurately determined.
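The sketch below illustrates one possible concrete form of steps r1 to r4: the relevance set is sorted, a Shapiro-Wilk test (an assumption of this sketch; the embodiment only requires a normal-distribution check) decides the branch, and the boxplot branch uses the usual 1.5 * IQR fences. The "first difference interval determined through a neural network" is not specified by the embodiment, so a plain mean plus/minus two standard deviations band is used here purely as a placeholder.

```python
import numpy as np
from scipy import stats

def non_visual_tags(samples, alpha=0.05):
    """Steps r1 to r4 (illustrative): pick talent image labels among non-visual samples.

    samples: list of dicts with 'id' and 'relevance' (at least 3 needed for the test).
    """
    # Step r1: order the relevances to obtain the relevance set.
    ordered = sorted(samples, key=lambda s: s["relevance"])
    rel = np.array([s["relevance"] for s in ordered])

    # Step r2: check whether the relevance set fits a normal distribution.
    _, p_value = stats.shapiro(rel)
    if p_value > alpha:
        # Step r3: first difference interval; a mean +/- 2*std band is used here as a
        # placeholder for the interval the embodiment determines via a neural network.
        low, high = rel.mean() - 2 * rel.std(), rel.mean() + 2 * rel.std()
    else:
        # Step r4: second difference interval via the boxplot (1.5 * IQR) method.
        q1, q3 = np.percentile(rel, [25, 75])
        low, high = q1 - 1.5 * (q3 - q1), q3 + 1.5 * (q3 - q1)

    # A relevance lying outside [low, high] is read here as falling within the
    # difference interval; the corresponding key graphic samples become the labels.
    return [s for s in ordered if not (low <= s["relevance"] <= high)]
```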
In an alternative embodiment, when the difference image recognition result of the talent image tag is recognized, the image recognition result may be incomplete, which makes it difficult to accurately obtain the difference image recognition result and to select a display mode corresponding to the difference image recognition result for difference display. To improve on this technical problem, the steps described in the above steps e1 and e2 of recognizing the difference image recognition result of the talent image tag and selecting a display mode corresponding to the difference image recognition result for difference display may specifically include the following steps t1 to t3.
And step t1, judging whether the global image identification result of the talent image label of the non-visual information exceeds a global threshold value.
And t2, if the global image identification result of the talent image label of the non-visual information exceeds the global threshold, determining that the difference image identification result of the talent image label of the non-visual information is a significant difference, and performing significant difference display on the talent image label of the non-visual information.
And t3, if the global image identification result of the talent image label of the non-visual information does not exceed the global threshold, determining that the difference image identification result of the talent image label of the non-visual information is a common difference, and performing common difference display on the talent image label of the non-visual information.
It can be understood that, when the above steps t1 to t3 are performed, the problem of an incomplete image recognition result is avoided when the difference image recognition result of the talent image tag is recognized, so that the difference image recognition result can be accurately obtained, and the display mode corresponding to the difference image recognition result can be accurately selected for difference display.
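Steps t1 to t3 amount to a simple threshold decision. A minimal sketch, assuming the global image identification result is available as a single score, is:

```python
def difference_display_mode(global_result, global_threshold):
    """Steps t1 to t3 (illustrative): choose the display mode for a non-visual label.

    global_result: scalar global image identification result of the talent image
    label of the non-visual information; global_threshold: the configured limit.
    """
    if global_result > global_threshold:
        return "significant_difference"   # step t2: display as a significant difference
    return "common_difference"            # step t3: display as a common difference


print(difference_display_mode(global_result=0.92, global_threshold=0.8))  # significant
print(difference_display_mode(global_result=0.41, global_threshold=0.8))  # common
```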
In an alternative embodiment, when the talent image label is determined from the key graphic samples of the visual information, an excessive amount of data may cause the data to become disordered. To improve on this technical problem, the step described in step S233 of determining the talent image label from the key graphic samples of the visual information by using an iterative screening method may specifically include the following steps a1 to a3.
Step a1, the key pattern samples of each visual information are prioritized in turn based on the unit pattern representation state or relevance of the key pattern samples of each visual information, and a visual information set is obtained.
Step a2, screening the key graphic samples of the visual information from the visual information set according to the arrangement sequence of the visual information set, and acquiring a third global graphic representation state corresponding to the local key graphic samples after the key graphic samples of the visual information are screened.
Step a3, performing similarity judgment on the third global graph representation state and the preset graph representation state of each key graph sample to determine the talent image label of the visual information.
It can be understood that, by using the iterative screening method on the key graphic samples of the visual information, the problem of data disorder caused by excessive data is avoided, so that the talent image label can be accurately determined.
In an alternative embodiment, when the similarity judgment is performed on the third global graph representation state and the preset graph representation state, the similarity judgment may be inaccurate, which makes it difficult to accurately determine the talent image label of the visual information. To improve on this technical problem, the step described in step a3 of performing the similarity judgment on the third global graph representation state and the preset graph representation state of each key graph sample to determine the talent image label of the visual information may specifically include the content described in the following step d1.
And d1, determining the key graph sample of the visual information corresponding to the third global graph representation state which is the same as the preset graph representation state as the talent image label of the visual information.
It can be understood that, when the content described in the above step d1 is executed, the problem of inaccurate similarity judgment is avoided when the similarity judgment is performed on the third global graph representation state and the preset graph representation state, so that the talent image label of the visual information can be accurately determined.
On the above basis, the step described in step a2 of screening the key graphic samples of the visual information from the visual information set according to the arrangement order of the visual information set and obtaining the third global graph representation state corresponding to the local key graphic samples after the screening may further include the contents described in the following steps f1 and f2.
And f1, judging whether the number of the key graphic samples screened from the visual information set exceeds the screening standard.
Step f2, if the screening criterion is exceeded, ending the screening process of the visual information set; and if the key graphic samples of the visual information do not exceed the screening standard, screening the key graphic samples of the visual information from the visual information set according to the arrangement sequence of the visual information set, and acquiring a third global graphic representation state corresponding to the key graphic samples of the local visual information after the key graphic samples of the visual information are screened.
It will be appreciated that when the above-described contents of step f1 and step f2 are executed, the third global graphic representation state can be accurately determined by the number of key graphic samples.
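Putting steps a1 to a3 and the stopping condition of steps f1 and f2 together, the iterative screening branch for visual information could be sketched as follows. The equality test against the preset graph representation state follows step d1, while the particular form of the third global graph representation state (here simply the sum of the remaining unit states) and the screening criterion are assumptions of this sketch.

```python
import numpy as np

def visual_tags(samples, preset_state, max_screened=10, tol=1e-6):
    """Steps a1 to a3 with f1/f2 (illustrative): iterative screening of visual samples.

    samples: list of dicts with 'id', 'relevance' and 'unit_state' (np.ndarray).
    preset_state: preset graph representation state to compare against (step a3/d1).
    max_screened: screening criterion on the number of screened samples (step f1).
    """
    # Step a1: prioritize the visual samples by relevance to form the visual information set.
    queue = sorted(samples, key=lambda s: s["relevance"], reverse=True)
    remaining = list(queue)
    tags, screened = [], 0

    for sample in queue:
        # Steps f1/f2: stop once the number of screened samples reaches the criterion.
        if screened >= max_screened:
            break
        # Step a2: screen the next sample out and compute the third global graph
        # representation state of the remaining (local) samples.
        remaining.remove(sample)
        screened += 1
        third_state = (np.sum([s["unit_state"] for s in remaining], axis=0)
                       if remaining else np.zeros_like(preset_state))

        # Step a3 / d1: if the third state matches the preset state, the screened
        # sample is taken as a talent image label of the visual information.
        if np.allclose(third_state, preset_state, atol=tol):
            tags.append(sample)
    return tags
```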
Based on the same inventive concept, a talent image tag identification system is further provided, which comprises an image acquisition end and a cloud platform, wherein the image acquisition end is in communication connection with the cloud platform, and the cloud platform is specifically configured to:
in response to acquiring the key image feature vector of a sample image to be identified, respectively determine the relevance of each key graphic sample of the sample image to be identified to the graphic representation state weight of the sample image to be identified;
respectively acquiring the image visualization type of each key graph sample; wherein the image visualization type comprises visual information and non-visual information;
and determining talent image labels from the key graphic samples by using the image visualization types and the relevance of the key graphic samples.
Further, the cloud platform is specifically configured to:
acquiring a first global graph representation state of each key graph sample and a unit graph representation state of a target key graph sample;
screening the first global graph representation state based on the unit graph representation state of the target key graph sample to obtain a second global state of the local key graph sample after the target key graph sample is screened;
carrying out similarity judgment on the first global graph representation state and the second global state to obtain a graph representation state difference between the first global graph representation state and the second global graph representation state;
determining the relevance of the target key graph sample based on the graph characterization state difference.
Further, the cloud platform is specifically configured to:
matching the unit upload data volume with the global upload data volume to obtain a local upload data volume, and matching the unit success data volume with the global success data volume to obtain a local success data volume;
and obtaining the second global state based on a comparison result between the local upload data volume and the local success data volume.
Further, the cloud platform is specifically configured to:
acquiring image attribute data of each key graphic sample;
performing dimension reduction processing on the image attribute data based on the image attribute data type of each key graphic sample;
and acquiring a first global graph representation state of each key graph sample and a unit graph representation state of the target key graph sample based on the image attribute data after the dimension reduction processing.
Further, the cloud platform is specifically configured to:
acquiring key graphic samples of visual information and key graphic samples of non-visual information based on the image visualization type of each key graphic sample;
determining talent image labels from the key graphic samples of the non-visual information by utilizing a neural network training model;
and determining the talent image label from the key graph sample of the visual information by using an iterative screening method.
Further, the cloud platform is specifically configured to:
identifying a difference image identification result of the talent image tag;
and selecting a display mode corresponding to the difference image recognition result for difference display.
Further, the cloud platform is specifically configured to:
sequentially sequencing the relevance of the key graph samples of the non-visual information to obtain a relevance set of the key graph samples of the non-visual information;
judging whether the relevance set meets a normal distribution matrix;
if the relevance set meets a normal distribution matrix, determining a first difference interval through a neural network based on the relevance set, and determining a key graph sample corresponding to the relevance in the first difference interval as a talent image label of non-visual information;
and if the relevance set does not meet the normal distribution matrix, determining to obtain a second difference interval through a boxplot difference value method based on the relevance set, and determining the key graph sample corresponding to the relevance in accordance with the second difference interval as the talent image label of the non-visual information.
Further, the cloud platform is specifically configured to:
judging whether the global image identification result of the talent image label of the non-visual information exceeds a global threshold value or not;
if the global image identification result of the talent image label of the non-visual information exceeds the global threshold, determining that the difference image identification result of the talent image label of the non-visual information is a significant difference, and performing significant difference display on the talent image label of the non-visual information;
and if the global image identification result of the talent image label of the non-visual information does not exceed the global threshold, determine that the difference image identification result of the talent image label of the non-visual information is a common difference, and perform common difference display on the talent image label of the non-visual information.
Further, the cloud platform is specifically configured to:
the key graphic samples of the visual information are sequentially prioritized based on the unit graphic representation state or relevance of the key graphic samples of the visual information to obtain a visual information set;
screening the key graphic samples of the visual information from the visual information set according to the arrangement sequence of the visual information set, and acquiring a third global graphic representation state corresponding to the local key graphic samples after the key graphic samples of the visual information are screened;
and carrying out similarity judgment on the third global graph representation state and the preset graph representation states of all the key graph samples so as to determine the talent image labels of the visual information.
Further, the cloud platform is specifically configured to:
and determining a key graph sample of the visual information corresponding to the third global graph representation state which is the same as the preset graph representation state as a talent image label of the visual information.
Further, the cloud platform is specifically configured to:
judging whether the number of key graphic samples screened from the visual information set exceeds a screening standard or not;
if the screening criteria are exceeded, ending the screening process of the visual information set; and if the key graphic samples of the visual information do not exceed the screening standard, screening the key graphic samples of the visual information from the visual information set according to the arrangement sequence of the visual information set, and acquiring a third global graphic representation state corresponding to the key graphic samples of the local visual information after the key graphic samples of the visual information are screened.
Based on the same inventive concept, please refer to fig. 3, which provides a functional block diagram of a talent image tag identification apparatus 500; the talent image tag identification apparatus 500 is described in detail below.
A talent image tag identification apparatus 500, applied to a cloud platform, the apparatus 500 comprising:
the image feature determining module 510 is configured to respectively determine, in response to obtaining a key image feature vector of a sample image to be identified, relevance of each key graphic sample of the sample image to be identified to a graphic representation state weight of the sample image to be identified;
an image type obtaining module 520, configured to obtain image visualization types of the key graphic samples respectively; wherein the image visualization type comprises visual information and non-visual information;
an image tag determining module 530, configured to determine a talent image tag from each key graphic sample by using the image visualization type and the relevance of each key graphic sample.
A cloud platform, comprising: the system comprises a processor, a memory and a network interface, wherein the memory and the network interface are connected with the processor; the network interface is connected with a nonvolatile memory in the intelligent equipment; the processor, when running, retrieves a computer program from the non-volatile memory via the network interface and runs the computer program via the memory to perform the method of any of fig. 2.
In summary, according to the talent image tag identification method, system and cloud platform, after the key image feature vector of the sample image to be identified is obtained, the relevance of each key graphic sample to the graphic representation state weight of the sample image to be identified is determined, and the talent image labels are then determined from the key graphic samples based on the image visualization type and relevance of each key graphic sample. By analysing the relevance of each key graphic sample, that is, the difference in the weight that each key graphic sample carries with respect to the actual sample image to be identified, the accuracy of talent image label identification is improved. Because the labels are determined separately for key graphic samples of different image visualization types, with a judgment mode suited to each type, the difference judgment is more targeted and the accuracy of talent image label identification is further improved.
It will be understood that the invention is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the invention is limited only by the appended claims.
Claims (10)
1. A talent image tag identification method, comprising:
in response to acquiring the key image feature vector of a sample image to be identified, respectively determining the relevance of each key graphic sample of the sample image to be identified to the graphic representation state weight of the sample image to be identified;
respectively acquiring the image visualization type of each key graph sample; wherein the image visualization type comprises visual information and non-visual information;
and determining talent image labels from the key graphic samples by using the image visualization types and the relevance of the key graphic samples.
2. The method according to claim 1, wherein the step of determining the relevance of each key graphic sample of the sample image to be identified to the graphic representation state weight of the sample image to be identified respectively in response to acquiring the key image feature vector of the sample image to be identified comprises:
acquiring a first global graph representation state of each key graph sample and a unit graph representation state of a target key graph sample;
screening the first global graph representation state based on the unit graph representation state of the target key graph sample to obtain a second global state of the local key graph sample after the target key graph sample is screened;
carrying out similarity judgment on the first global graph representation state and the second global state to obtain a graph representation state difference between the first global graph representation state and the second global graph representation state;
determining the relevance of the target key graph sample based on the graph characterization state difference.
3. The talent image tag identification method of claim 2, wherein the first global graphic representation state comprises a global upload data volume and a global success data volume, and the unit graphic representation state comprises a unit upload data volume and a unit success data volume; the step of screening the first global graph representation state based on the unit graph representation state of the target key graph sample to obtain a second global state of the local key graph sample after the target key graph sample is screened includes:
matching the unit upload data volume with the global upload data volume to obtain a local upload data volume, and matching the unit success data volume with the global success data volume to obtain a local success data volume;
and obtaining the second global state based on a comparison result between the local upload data volume and the local success data volume.
4. The method for identifying the image tag of the talent according to claim 2, wherein the step of obtaining the first global graphic representation state of each key graphic sample and the unit graphic representation state of the target key graphic sample further comprises:
acquiring image attribute data of each key graphic sample;
performing dimension reduction processing on the image attribute data based on the image attribute data type of each key graphic sample;
and acquiring a first global graph representation state of each key graph sample and a unit graph representation state of the target key graph sample based on the image attribute data after the dimension reduction processing.
5. The method for identifying the image tag of the talent according to claim 1, wherein the step of determining the image tag of the talent from the key graphic samples by using the image visualization type and the relevance of the key graphic samples comprises:
acquiring key graphic samples of visual information and key graphic samples of non-visual information based on the image visualization type of each key graphic sample;
determining talent image labels from the key graphic samples of the non-visual information by utilizing a neural network training model;
and determining the talent image label from the key graph sample of the visual information by using an iterative screening method.
6. The method for identifying image tags of talents according to claim 5, wherein after the step of determining image tags of talents from the key pattern samples of non-visual information by using the neural network training model and the step of determining image tags of talents from the key pattern samples of visual information by using the iterative screening method, the method further comprises:
identifying a difference image identification result of the talent image tag;
and selecting a display mode corresponding to the difference image recognition result for difference display.
7. The talent image tag identification method according to claim 6, wherein the step of determining a talent image tag from the key pattern sample of non-visual information using a neural network training model comprises:
sequentially sequencing the relevance of the key graph samples of the non-visual information to obtain a relevance set of the key graph samples of the non-visual information;
judging whether the relevance set meets a normal distribution matrix;
if the relevance set meets a normal distribution matrix, determining a first difference interval through a neural network based on the relevance set, and determining a key graph sample corresponding to the relevance in the first difference interval as a talent image label of non-visual information;
and if the relevance set does not meet the normal distribution matrix, determining to obtain a second difference interval through a boxplot difference value method based on the relevance set, and determining the key graph sample corresponding to the relevance in accordance with the second difference interval as the talent image label of the non-visual information.
8. The talent image tag identification method according to claim 6, wherein the step of identifying the difference image identification result of the talent image tag and selecting a display mode corresponding to the difference image identification result for difference display comprises:
judging whether the global image identification result of the talent image label of the non-visual information exceeds a global threshold value or not;
if the global image identification result of the talent image label of the non-visual information exceeds the global threshold, determining that the difference image identification result of the talent image label of the non-visual information is a significant difference, and performing significant difference display on the talent image label of the non-visual information;
and if the global image identification result of the talent image label of the non-visual information does not exceed the global threshold, determining that the difference image identification result of the talent image label of the non-visual information is a common difference, and performing common difference display on the talent image label of the non-visual information.
9. The talent image tag identification system is characterized by comprising an image acquisition end and a cloud platform, wherein the image acquisition end is in communication connection with the cloud platform, and the cloud platform comprises:
the image feature determination module is used for respectively determining, in response to acquiring the key image feature vector of the sample image to be identified, the relevance of each key graphic sample of the sample image to be identified to the graphic representation state weight of the sample image to be identified;
the image type acquisition module is used for respectively acquiring the image visualization type of each key graphic sample; wherein the image visualization type comprises visual information and non-visual information;
and the image label determining module is used for determining talent image labels from the key graphic samples by using the image visualization types and the relevance of the key graphic samples.
10. A cloud platform, comprising:
a processor, and
a memory and a network interface connected with the processor;
the network interface is connected with a nonvolatile memory in the intelligent equipment;
the processor, when running, retrieves a computer program from the non-volatile memory via the network interface and runs the computer program via the memory to perform the method of any of claims 1-8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110496490.9A CN113191436A (en) | 2021-05-07 | 2021-05-07 | Talent image tag identification method and system and cloud platform |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110496490.9A CN113191436A (en) | 2021-05-07 | 2021-05-07 | Talent image tag identification method and system and cloud platform |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113191436A true CN113191436A (en) | 2021-07-30 |
Family
ID=76984035
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110496490.9A Pending CN113191436A (en) | 2021-05-07 | 2021-05-07 | Talent image tag identification method and system and cloud platform |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113191436A (en) |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107729818A (en) * | 2017-09-21 | 2018-02-23 | 北京航空航天大学 | A kind of multiple features fusion vehicle recognition methods again based on deep learning |
CN108304847A (en) * | 2017-11-30 | 2018-07-20 | 腾讯科技(深圳)有限公司 | Image classification method and device, personalized recommendation method and device |
CN108764350A (en) * | 2018-05-30 | 2018-11-06 | 苏州科达科技股份有限公司 | Target identification method, device and electronic equipment |
CN109344271A (en) * | 2018-09-30 | 2019-02-15 | 南京物盟信息技术有限公司 | Video portrait records handling method and its system |
CN110321875A (en) * | 2019-07-19 | 2019-10-11 | 东莞理工学院 | A kind of resume identification and intelligent classification screening system based on deep learning |
CN110457696A (en) * | 2019-07-31 | 2019-11-15 | 福州数据技术研究院有限公司 | A kind of talent towards file data and policy intelligent Matching system and method |
CN110909120A (en) * | 2018-09-14 | 2020-03-24 | 阿里巴巴集团控股有限公司 | Resume searching/delivering method, device and system and electronic equipment |
CN111368934A (en) * | 2020-03-17 | 2020-07-03 | 腾讯科技(深圳)有限公司 | Image recognition model training method, image recognition method and related device |
CN111967302A (en) * | 2020-06-30 | 2020-11-20 | 北京百度网讯科技有限公司 | Video tag generation method and device and electronic equipment |
CN112541490A (en) * | 2020-12-03 | 2021-03-23 | 广州城市规划技术开发服务部有限公司 | Archive image information structured construction method and device based on deep learning |
-
2021
- 2021-05-07 CN CN202110496490.9A patent/CN113191436A/en active Pending
Legal Events
Code | Title | Description
---|---|---
PB01 | Publication |
SE01 | Entry into force of request for substantive examination |
WD01 | Invention patent application deemed withdrawn after publication | Application publication date: 20210730