CN111414609A - Object verification method and device - Google Patents

Object verification method and device

Info

Publication number
CN111414609A
CN111414609A
Authority
CN
China
Prior art keywords
labeling
confidence
result
verified
prediction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010196376.XA
Other languages
Chinese (zh)
Other versions
CN111414609B (en)
Inventor
田植良
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202010196376.XA priority Critical patent/CN111414609B/en
Publication of CN111414609A publication Critical patent/CN111414609A/en
Application granted granted Critical
Publication of CN111414609B publication Critical patent/CN111414609B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/30Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F21/45Structures or tools for the administration of authentication

Abstract

The embodiment of the application discloses an object verification method and device, providing an object verification method in the field of artificial intelligence natural language processing. The method includes: displaying target content of an object to be verified that needs to be labeled; obtaining an object labeling result produced by the object to be verified for the target content; determining an initial labeling confidence corresponding to the object labeling result based on a predicted labeling result of the target content, where the predicted labeling result is obtained by labeling the target content with a labeling model; fusing the initial labeling confidence with a historical labeling confidence to obtain a target object confidence corresponding to the object to be verified, where the historical labeling confidence is the labeling confidence obtained from the object's labeling of historical content; and, when the target object confidence meets a preset condition, determining that the object to be verified passes verification. This scheme can improve the security of object verification.

Description

Object verification method and device
Technical Field
The application relates to the field of internet, in particular to an object verification method and device.
Background
With the development of internet technology, more and more daily operations are performed over the internet, and network security has become increasingly important. For example, when a user logs in to a network account, a verification code may be used to determine whether the object currently performing the login operation is a non-real user.
In the research and practice of the prior art, the inventor of the present application found that, because the data judgment process is simple and the database is fixed, the security of object verification performed in this way needs to be improved.
Disclosure of Invention
The embodiment of the application provides an object verification method and device, which can improve the safety of object verification.
The embodiment of the application provides an object verification method, which comprises the following steps:
displaying target content that an object to be verified needs to label;
acquiring an object labeling result produced by the object to be verified for the target content;
determining an initial annotation confidence corresponding to the object annotation result based on the predicted annotation result of the target content, wherein the predicted annotation result is obtained by adopting an annotation model to label the target content;
fusing the initial labeling confidence coefficient and the historical labeling confidence coefficient to obtain a target object confidence coefficient corresponding to the object to be verified, wherein the historical labeling confidence coefficient is a labeling confidence coefficient obtained by labeling the object to be verified aiming at historical contents;
and when the confidence coefficient of the target object meets a preset condition, determining that the object to be verified passes verification.
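Taken together, the claimed steps amount to the following minimal sketch. This is a hypothetical illustration only: the fusion weights, the threshold form of the preset condition, and all names are assumptions not specified by the application.

```python
# Hypothetical sketch of the claimed verification flow; the weights, the
# threshold, and all names are illustrative assumptions, not part of the
# application.
def verify_object(object_label, predictions, history_conf,
                  w_initial=0.5, w_history=0.5, threshold=0.7):
    """predictions maps each prediction result to its prediction confidence."""
    # Initial labeling confidence: the prediction confidence of the
    # prediction result matching the object's labeling result (0 if none).
    initial_conf = predictions.get(object_label, 0.0)
    # Fuse current and historical confidence into the target object confidence.
    target_conf = w_initial * initial_conf + w_history * history_conf
    # Preset condition assumed here to be a simple threshold comparison.
    return target_conf, target_conf >= threshold
```

For instance, an object that labels a picture as "shop sign" while the model predicts {"shop sign": 0.8, "no sign": 0.2}, and whose historical labeling confidence is 0.9, would receive a fused confidence of 0.85 under these assumed weights and pass verification.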
Accordingly, an embodiment of the present application provides an object verification apparatus, including:
the first display module is used for displaying target content to be marked of the object to be verified;
the acquisition module is used for acquiring an object labeling result of the object to be verified for the target content labeling;
the determining module is used for determining an initial annotation confidence corresponding to the object annotation result based on the predicted annotation result of the target content, wherein the predicted annotation result is obtained by adopting an annotation model to label the target content;
the fusion module is used for fusing the initial annotation confidence coefficient and the historical annotation confidence coefficient to obtain a target object confidence coefficient corresponding to the object to be verified, wherein the historical annotation confidence coefficient is an annotation confidence coefficient obtained by labeling the object to be verified aiming at historical content;
and the verification module is used for determining that the object to be verified passes the verification when the confidence coefficient of the target object meets a preset condition.
In some embodiments of the present application, the prediction labeling result includes at least two prediction results and a prediction confidence corresponding to each prediction result, the determining module includes a determining sub-module and an obtaining sub-module, wherein,
the determining submodule is used for determining a target prediction result which is the same as the object labeling result from the prediction labeling results;
and the obtaining sub-module is used for obtaining an initial labeling confidence coefficient corresponding to the object labeling result based on the prediction confidence coefficient corresponding to the target prediction result.
In some embodiments of the present application, the fusion module includes an acquisition sub-module and a weighting sub-module, wherein,
the obtaining submodule is used for obtaining a first weight corresponding to the initial labeling confidence coefficient and a second weight corresponding to the historical labeling confidence coefficient;
and the weighting submodule is used for weighting the initial labeling confidence and the historical labeling confidence based on the first weight and the second weight to obtain a target object confidence corresponding to the object to be verified.
In some embodiments of the present application, the first display module includes a setup submodule, a determination submodule, and a display submodule, wherein,
the setting submodule is used for setting a marking difficulty coefficient for the target content of the object to be verified, which needs to be marked, based on the prediction marking result;
the determining submodule is used for determining the display condition of the target content according to the marking difficulty coefficient;
and the display sub-module is used for displaying the target content of the object to be verified, which needs to be marked, when the display condition is triggered.
In some embodiments of the present application, the prediction labeling result includes at least two prediction results and a prediction confidence corresponding to each prediction result, and the setting sub-module is specifically configured to:
determining the maximum confidence coefficient of the prediction labeling result from the prediction confidence coefficients of the prediction labeling result; and setting a labeling difficulty coefficient for the target content of the object to be verified to be labeled based on the maximum confidence.
In some embodiments of the present application, the object authentication apparatus further comprises:
the confidence coefficient module is used for fusing the confidence coefficient of the target object and the initial labeling confidence coefficient to obtain a labeling confidence coefficient corresponding to the object labeling result;
and the labeling result module is used for determining the labeling result of the target content based on the labeling confidence corresponding to the object labeling result.
In some embodiments of the present application, the annotation result module comprises a determination sub-module, a fusion sub-module, and a setting sub-module, wherein,
a determining sub-module, configured to determine an annotation result of the target content based on the annotation confidence of the object annotation result, where the determining sub-module includes:
the fusion submodule is used for fusing the labeling confidence degrees corresponding to the same object labeling results in the candidate labeling results to obtain at least one candidate confidence degree, wherein the candidate labeling results comprise at least two object labeling results of the target content and a labeling confidence degree corresponding to each object labeling result;
and the setting submodule is used for determining the maximum value in the candidate confidence degrees to obtain a target confidence degree, and setting an object labeling result corresponding to the target confidence degree as a labeling result of the target content.
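As a sketch, the fusion-then-maximum selection described by these sub-modules might look as follows. Summation is assumed as the fusion operation, which the application does not fix, and the function name is an illustrative assumption.

```python
from collections import defaultdict

# Illustrative sketch of the fusion and setting sub-modules; summation is
# an assumed fusion operation, not specified by the application.
def final_label(candidate_results):
    """candidate_results: list of (object labeling result, labeling confidence)."""
    fused = defaultdict(float)
    # Fuse the labeling confidences of identical object labeling results
    # into one candidate confidence per distinct result.
    for label, conf in candidate_results:
        fused[label] += conf
    # The result with the maximum candidate confidence (the target
    # confidence) becomes the labeling result of the target content.
    best = max(fused, key=fused.get)
    return best, fused[best]
```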
In some embodiments of the present application, the object authentication apparatus further comprises:
and the second display module is used for determining that the object to be verified does not pass the verification when the confidence coefficient of the target object does not meet the preset condition.
In some embodiments of the present application, the object authentication apparatus further comprises:
the preprocessing module is used for preprocessing the target content to obtain words to be annotated;
the generating module is used for mapping the words to be labeled to a vector space and generating word vectors corresponding to the words to be labeled;
and the prediction module is used for inputting the word vectors into a labeling model for labeling to obtain a prediction labeling result, and the prediction labeling result comprises at least two prediction results and a prediction confidence corresponding to each prediction result.
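The preprocessing, generating, and prediction modules describe a standard NLP pipeline. The toy sketch below stands in for it; the tokenizer, the embedding, and the labeling model are placeholders, since the application does not disclose a concrete model.

```python
import re

# Toy stand-ins for the preprocessing, generating, and prediction modules;
# the embedding and the model below are placeholders, not the
# application's actual labeling model.
def preprocess(target_content):
    # Split the target content into the words to be annotated.
    return re.findall(r"\w+", target_content.lower())

def to_word_vectors(words, dim=8):
    # Map each word to a vector space with a deterministic toy embedding.
    return [[(len(w) * (i + 1) + ord(w[0])) % 10 / 10.0 for i in range(dim)]
            for w in words]

def predict_labels(word_vectors):
    # Placeholder labeling model: returns at least two prediction results,
    # each with a prediction confidence (confidences sum to 1).
    return {"positive": 0.7, "negative": 0.3}
```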
Correspondingly, the embodiment of the present application further provides a storage medium, where a computer program is stored, and the computer program is suitable for being loaded by a processor to execute any one of the object verification methods provided in the embodiment of the present application.
Accordingly, embodiments of the present application further provide a computer device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements any one of the object verification methods provided in the embodiments of the present application when executing the computer program.
In the method, the target content that the object to be verified needs to label is first displayed, and the object labeling result produced by the object for that content is obtained. An initial labeling confidence corresponding to the object labeling result is then determined based on the predicted labeling result of the target content, where the predicted labeling result is obtained by labeling the target content with a labeling model. The initial labeling confidence is fused with a historical labeling confidence to obtain a target object confidence corresponding to the object to be verified, where the historical labeling confidence is the labeling confidence obtained from the object's labeling of historical content. Finally, when the target object confidence meets a preset condition, the object to be verified is determined to pass verification.
According to this scheme, the object labeling result is not used directly as data for judgment but is converted into a judgment confidence. The target object confidence is obtained by combining the confidence of the current operation (i.e., the initial labeling confidence) with the confidence of historical operations (i.e., the historical labeling confidence), and the object is verified through the target object confidence.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
Fig. 1 is a schematic view of a scene of an object verification apparatus provided in an embodiment of the present application;
fig. 2 is a schematic flowchart of an object verification method provided in an embodiment of the present application;
FIG. 3 is a general flowchart of the overall process of object verification and annotation processing provided by an embodiment of the present application;
fig. 4 is another schematic flowchart of an object verification method provided in an embodiment of the present application;
FIG. 5 is a diagram of an example of an interactive interface for object verification as provided by an embodiment of the present application;
FIG. 6 is a diagram illustrating a neural network model according to an embodiment of the present disclosure;
FIG. 7 is an exemplary graph of confidence levels for a generated target object provided by embodiments of the present application;
fig. 8 is a schematic structural diagram of an object verification apparatus according to an embodiment of the present application;
fig. 9 is another schematic structural diagram of an object verification apparatus according to an embodiment of the present application;
FIG. 10 is a schematic structural diagram of a computer device provided in an embodiment of the present application;
fig. 11 is an alternative structural diagram of the distributed system 110 applied to the blockchain system according to the embodiment of the present application;
fig. 12 is an alternative schematic diagram of a block structure provided in the embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly and completely with reference to the drawings in the embodiments of the present application, and it is obvious that the embodiments described in the present application are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Artificial Intelligence (AI) is a theory, method, technique, and application system that uses a digital computer, or a machine controlled by a digital computer, to simulate, extend, and expand human intelligence, sense the environment, acquire knowledge, and use that knowledge to obtain the best results. In other words, artificial intelligence is a comprehensive discipline of computer science that attempts to understand the essence of intelligence and produce new intelligent machines that can react in a manner similar to human intelligence. Artificial intelligence research covers the design principles and implementation methods of various intelligent machines, giving machines the capabilities of perception, reasoning, and decision-making.
Natural language processing (NLP) is an important direction in the fields of computer science and artificial intelligence; it studies the theories and methods that enable effective communication between humans and computers in natural language.
With the research and progress of artificial intelligence technology, the artificial intelligence technology is developed and applied in a plurality of fields, such as common smart homes, smart wearable devices, virtual assistants, smart speakers, smart marketing, unmanned driving, automatic driving, unmanned aerial vehicles, robots, smart medical care, smart customer service, and the like.
The process of obtaining a predicted annotation result through an annotation model provided by the embodiment of the application relates to artificial intelligence natural language processing and other technologies, and is specifically described through the embodiment.
The embodiment of the application provides an object verification method and device. Specifically, the method may be integrated in an object verification apparatus, and the apparatus may be integrated in an object verification computer device. The computer device may be an electronic device such as a terminal, and the terminal may be a smart phone, a tablet computer, a notebook computer, a personal computer, or a similar electronic device, as shown in fig. 1, which is a scene schematic diagram of the object verification apparatus provided in the embodiment of the present application.
The terminal can display the target content to be labeled by an object to be verified, obtain the object labeling result produced by the object for the target content, and determine an initial labeling confidence corresponding to the object labeling result based on a predicted labeling result of the target content, where the predicted labeling result is obtained by labeling the target content with a labeling model. The terminal then fuses the initial labeling confidence with a historical labeling confidence to obtain a target object confidence corresponding to the object to be verified, where the historical labeling confidence is the labeling confidence obtained from the object's labeling of historical content; when the target object confidence is greater than a preset threshold, the object to be verified is determined to pass verification.
The object verification computer device may also be an electronic device such as a server. The server may have data processing, data storage, and data transmission functions; it may be, for example, a cloud server, a mirror server, or an origin server, and may be a single server or a server cluster. As shown in fig. 1, the server is mainly configured to receive request messages from the terminal and send data such as the predicted labeling result and the historical labeling confidence to the terminal accordingly. For example, the server may receive a request message sent by the terminal requesting the predicted labeling result of target content; after receiving it, the server may obtain the predicted labeling result of the target content through the labeling model and send the result back to the terminal.
It should be noted that the scene schematic diagram of the object verification apparatus shown in fig. 1 is merely an example, and the object verification apparatus and the scene described in the embodiment of the present application are for more clearly illustrating the technical solution of the embodiment of the present application, and do not form a limitation on the technical solution provided in the embodiment of the present application.
The following are detailed below.
In the present embodiment, the description is given from the perspective of the object verification apparatus, which may be integrated in a terminal such as a smart phone, tablet computer, notebook computer, personal computer, or wearable smart device having a storage unit and a microprocessor.
As shown in fig. 2, fig. 2 is a schematic flowchart of an object verification method according to an embodiment of the present application. The object authentication method may include:
101. and displaying the target content of the object to be verified, which needs to be marked.
The object to be verified may be any object subjected to a labeling operation in order to verify whether it is a non-real user. A real user is a real living being, while a non-real user may be a robot controlled by a real living being, a pre-programmed program, or the like. On the internet, non-real users typically launch malicious attacks against the network accounts of real users and against internet products carried by application programs (such as mailboxes and search engines). Determining whether an object to be verified is a real user is the main task of object verification; through object verification, subsequent operations of non-real users can be avoided, protecting network accounts and internet products from infringement.
Labeling refers to the manual processing of data. The processing can take many forms: the data can be annotated, modified, audited, or classified, and the data itself can take forms such as text and pictures. Labeled data carries the information obtained through labeling. An existing labeling practice is for annotators to process data to be labeled, for compensation, through a labeling website or labeling program. For example, the data to be labeled may be a picture, and the manual processing may be the annotator determining whether the picture contains a shop signboard.
The target content may include the data to be labeled, and can have multiple display forms, such as a picture, a video, or text. For a non-real object, the amount of information in a picture or video is much higher than in text, and correctly labeling a picture or video is much harder than labeling text; therefore, to improve the security of object verification, the content to be labeled is more often displayed in the form of a picture or a video.
The form of the data to be labeled can include various forms such as text or pictures. When the form of the data to be labeled differs from that of the target content, the data needs a certain format conversion; for example, if the data to be labeled is text and the target content is a picture, the text needs to be rendered as a picture. When the forms are the same but the data does not meet the form required by the target content, some processing is still needed; for example, if the target content is a picture comprising a group of pictures to be labeled, the group needs to be spliced to obtain the target content. The processing of data to be labeled into target content can be performed flexibly in the actual scene, and the specific operations can be of many kinds, which are not repeated here.
The data to be labeled can be text, and labeling of text can include part-of-speech analysis, sentiment analysis, word sense judgment, and the like. Part-of-speech analysis analyzes the part of speech of words, where the input can be a sentence formed by several words or a single word; for example, analyzing the parts of speech of the words in "I love bananas" may give "I (noun) love (verb) bananas (noun)". Sentiment analysis judges the sentiment of a word or a sentence, where the sentiment may be positive, negative, neutral, and so on; for example, analyzing the sentence "I am happy" may give "I am happy (positive)", or analyzing each word of the sentence may give "I (neutral) am happy (positive)".
Word sense judgment may take several forms. For example, it may be determined whether the meanings of a group of words are similar or opposite, i.e., whether they are synonyms or antonyms; for instance, analyzing whether "large" and "small" are synonyms gives the result "no". As another example, it may be determined whether a group of words are similar words, where similar words are words sharing at least one type of common feature. The common feature may be similar meaning (in which case the similar words are synonyms); it may be the same semantic role, i.e., playing the same role in sentences, as "very", "quite", or "a little" can all be adverbs of degree; or it may be the same attribute, as "orange" and "mangosteen" are both fruit names, and so on.
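The text labeling tasks described above can be represented as simple task records; the field names and example answers below are purely illustrative assumptions.

```python
# Illustrative task records for the labeling tasks described above;
# the field names and answers are assumptions for illustration only.
tasks = [
    {"type": "part-of-speech", "text": "I love bananas",
     "answer": [("I", "noun"), ("love", "verb"), ("bananas", "noun")]},
    {"type": "sentiment", "text": "I am happy", "answer": "positive"},
    {"type": "word-sense", "words": ("large", "small"),
     "question": "synonyms?", "answer": False},
]
```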
Displaying the target content that the object to be verified needs to label is the starting step of object verification and lays the foundation for realizing it.
For example, the settings of application a are: before entering the payment page, an object to be paid (i.e., an object to be verified) needs to be verified, and a thumbnail of the application program a is to be paid on the application program a, then the application program displays a picture (i.e., target content) to be annotated on the annotation page before displaying the payment page.
In some embodiments, in order to display the target content more effectively and reasonably, where the predicted labeling result is obtained by labeling the target content with a labeling model, the step of "displaying the target content to be labeled of the object to be verified" may include:
setting a marking difficulty coefficient for target content to be marked of the object to be verified based on the prediction marking result; determining the display condition of the target content according to the marking difficulty coefficient; and when the display condition is triggered, displaying the target content of the object to be verified, which needs to be marked.
The predicted labeling result is a labeling result obtained by predictively labeling the target content, and the prediction may be performed using a labeling model.
The labeling difficulty coefficient quantitatively expresses how difficult the target content is to label correctly. The difficulty coefficient can be expressed in many forms, including numbers, symbols, pictures, or characters. Before setting a difficulty coefficient for the target content, the relationship between difficulty and the coefficient's form of expression can be configured as required. For example, the difficulty coefficient may be expressed as a number, with the relationship set such that the greater the number (i.e., the greater the difficulty coefficient), the lower the difficulty. As another example, the difficulty coefficient may be expressed as a symbol (or picture), with the relationship between symbols (or pictures) and difficulty set by some rule; if four difficulty steps are defined, the symbols might be: + (simplest), ! (simpler), & (difficult), # (hardest), and so on.
The display condition may include all the necessary conditions under which the target content may be displayed, such as the time, the object to be verified, and the display form of the target content. The display condition quantifies the display of target content into determined data, ensuring that the target content is displayed accurately and efficiently. For example, the display condition of a piece of target content might be: the object to be verified is the user Xiaobai, the display time is the third object verification after 6:00 on the 5th, and the display form of the target content is a video with a duration of 2 seconds.
The predicted labeling result can be obtained by predictively labeling the target content, and the labeling difficulty coefficient of the target content can be determined on that basis. Since the labeling difficulty changes with the data to be labeled, the labeling task, and the labeling object, a labeling result is not necessarily accurate; therefore, a labeling difficulty coefficient is set for the target content by means of the predicted labeling result, quantifying how difficult the target content is to label.
Then the display condition of the target content is determined according to the labeling difficulty coefficient. Object verification is mainly used to confirm that an object is a real user; on the premise that the verification task remains effective, lower-difficulty labeling tasks can be favored so that objects can complete verification quickly. Thus, the higher the labeling difficulty of the target content, the stricter its display condition, i.e., the lower the probability that it is displayed. When the display condition is triggered, the target content that the object to be verified needs to label is displayed.
For example, if the predicted labeling result shows that the target content is highly difficult to label, the target content is assigned a difficulty coefficient of 5, the highest difficulty level. According to this coefficient, the display condition of the target content is determined, for instance: display the target content when any object to be verified performs its 51st object verification within 3 hours. Then, when the object to be verified is the user Xiaobai and Xiaobai performs the 51st object verification at 8:20 on the 5th, counting from 6:00 on the 5th, the target content is displayed on the object verification page of Xiaobai's application.
In some embodiments, the step "setting an annotation difficulty coefficient for target content to be annotated of the object to be verified based on the predicted annotation result" may include:
determining the maximum confidence coefficient of the prediction labeling result from the prediction confidence coefficients of the prediction labeling result; and setting a labeling difficulty coefficient for the target content of the object to be verified to be labeled based on the maximum confidence.
The confidence is a measure of the correctness of a labeling result: the higher the confidence, the more likely the corresponding labeling result is the correct one. The reliability of a labeling result can thus be measured quantitatively through the confidence, which improves the precision of the data used for object verification and makes the verification result more accurate.
Specifically, the predicted labeling result includes at least two prediction results and a prediction confidence corresponding to each, where the prediction confidence is the probability that the corresponding prediction result is the correct labeling result; the prediction confidences of all the prediction results can be compared to determine the maximum among them. There may be at least two prediction results for the target content, and the sum of all the prediction confidences is 1. When the differences between the prediction confidences are small, that is, each prediction result is about equally likely to be the correct labeling result, the prediction of the correct labeling result is poor and the maximum prediction confidence is relatively low; conversely, when the differences between the prediction confidences are large, that is, the prediction result with the maximum prediction confidence is significantly more likely to be the correct labeling result than the others, the predicted labeling result predicts the correct labeling result well.
Then, based on the maximum confidence, a labeling difficulty coefficient can be set for the target content. Since the same labeling model is used for predictive labeling, target content that is predicted well is relatively simple, so the maximum confidence can measure the difficulty of the target content to a certain degree. Meanwhile, the difference between the maximum value and the adjacent value can also measure the difficulty of the target content, and factors such as the number of prediction results and the numerical range of the prediction confidences influence the weight this difference carries when measuring difficulty; therefore, whether to introduce the difference, and with what weight, can be analyzed and judged case by case in application, and is not limited here.
For example, if the prediction results of the target content are A and B, the confidence of A is 0.3 and the confidence of B is 0.7, the maximum confidence of the target content is determined to be 0.7, and the labeling difficulty coefficient 3 can then be set for the target content according to the maximum confidence (the labeling difficulty coefficients include 1, 2, 3 and 4, where a greater number indicates higher difficulty).
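As a minimal sketch (in Python, with illustrative thresholds that the embodiments do not fix), mapping the maximum prediction confidence to a difficulty coefficient on the 1 to 4 scale above might look like:

```python
def difficulty_coefficient(pred_confidences):
    """Set a labeling difficulty coefficient (1 = easiest, 4 = hardest)
    from the maximum prediction confidence. The thresholds below are
    illustrative assumptions, not values fixed by this application."""
    max_conf = max(pred_confidences)
    if max_conf >= 0.9:
        return 1  # the model is very sure, so the content is easy to label
    if max_conf >= 0.8:
        return 2
    if max_conf >= 0.6:
        return 3
    return 4      # predictions nearly tied, so the content is hard to label
```

With the example above, `difficulty_coefficient([0.3, 0.7])` yields 3.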
102. And acquiring an object labeling result of the object to be verified for the target content labeling.
The annotation result may include a result obtained by annotating the target content, and the content and form of the annotation result depend on the target content and the setting of the annotation mode for the target content. For the text annotation, the form of the annotation result is usually a character, and the content can be directly "forward", "antisense", or "noun", etc.; the result may be indirect, such as "yes", "no", or "error", or the result may be in a non-text form, for example, the result may be a marked picture or video, or the result may be an operation performed on the picture or video.
For example, in order to improve the security of object labeling, when performing word-emotion labeling the target content may be contained in a picture that includes a plurality of words; when labeling, the positive, negative and neutral words in the picture each need to be marked in the picture, so the labeling result takes the form of a picture containing emotion marks, and the emotion label of each word is obtained by combining the position information of the word used when the picture was made. As another, simpler and quicker example, the judgment of the labeling result can be made directly through controls: when judging synonyms, the target content can be displayed directly in the form of a question, such as whether two displayed words are synonyms, and the answers to the question can be displayed in the form of buttons, such as a "yes" button and a "no" button. In addition, the labeling result can be obtained by having the user input it directly, and so on.
The object labeling result can include the labeling result generated by the object to be verified labeling the target content. Acquiring the object labeling result of the object to be verified for the target content is an important step in completing object verification, and whether the object passes verification is determined by analyzing the object labeling result.
For example, if the user Xiao Bai marks the picture on the verification page of application A, application A may receive Xiao Bai's object labeling result.
103. And determining an initial annotation confidence corresponding to the object annotation result based on the predicted annotation result of the target content, wherein the predicted annotation result is obtained by adopting an annotation model to label the target content.
The predicted labeling result can be obtained by labeling the target content with a labeling model, where the labeling model is any model capable of labeling. The initial labeling confidence is an initial measure of the possibility that the object labeling result is the correct labeling result.
There are many ways to determine the initial labeling confidence corresponding to the object labeling result based on the predicted labeling result, and they can be adjusted flexibly according to the form and content of the predicted labeling result, for example by using a trained determination model, or by performing the necessary mathematical processing, and so on.
Determining the initial labeling confidence of the object labeling result through the predicted labeling result differs from the prior art, in which the labeling result is compared directly with the correct labeling result and the comparison result is used as the criterion for object verification. In this scheme, the object labeling result is instead converted into a labeling confidence, which is a measure of the possibility that the object labeling result is the correct labeling result.
For example, according to the predicted labeling result obtained by the labeling model, the initial labeling confidence of Xiao Bai's object labeling result can be determined to be 0.6.
In some embodiments, the step of determining an initial annotation confidence corresponding to the annotation result of the object based on the predicted annotation result of the target content may include:
determining, from the predicted labeling results, a target prediction result that is the same as the object labeling result; and acquiring an initial labeling confidence corresponding to the object labeling result based on the prediction confidence corresponding to the target prediction result.
The predicted labeling result comprises at least two prediction results and a prediction confidence corresponding to each. The target prediction result is the same as the object labeling result, and prediction results correspond to prediction confidences, so the initial labeling confidence corresponding to the object labeling result can be obtained based on the prediction confidence corresponding to the target prediction result. Specifically, the prediction confidence can be set directly as the initial labeling confidence; a model can also be introduced that takes the prediction confidence as input and outputs the initial labeling confidence, and so on.
The embodiment provides a method for determining the confidence of the initial annotation more simply and conveniently, and the method is favorable for realizing the object verification.
For example, the predicted labeling result includes prediction result A with prediction confidence 0.6, prediction result B with prediction confidence 0.2, and prediction result C with prediction confidence 0.2. The object labeling result is compared with A, B and C; if it is determined to be the same as C, an initial labeling confidence of 0.22 is obtained for the object labeling result based on C's prediction confidence of 0.2.
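The simplest option mentioned above, taking the matching prediction's confidence directly as the initial labeling confidence, can be sketched as follows (assuming, for illustration, that the predicted labeling result is represented as a Python dict):

```python
def initial_labeling_confidence(predicted, object_label):
    """predicted: mapping from each prediction result to its prediction
    confidence. Returns the confidence of the prediction result identical
    to the object labeling result, used directly as the initial labeling
    confidence (a further model could transform it, as the text notes)."""
    return predicted[object_label]
```

For the example above, `initial_labeling_confidence({"A": 0.6, "B": 0.2, "C": 0.2}, "C")` returns 0.2, which additional processing could then map to a value such as 0.22.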
104. And fusing the initial annotation confidence coefficient and the historical annotation confidence coefficient to obtain a target object confidence coefficient corresponding to the object to be verified, wherein the historical annotation confidence coefficient is the annotation confidence coefficient obtained by the object to be verified by labeling the historical content.
The historical labeling confidence can be a labeling confidence obtained by the object to be verified labeling historical content, where the historical content is past target content that was to be labeled. The historical labeling confidence can be a group of data, namely the labeling confidences of all historical labelings of the object; in that case all original data are stored, more information is contained, and more fusion modes can be chosen. The historical labeling confidence may also be derived from the labeling confidences of all historical labelings; in that case the original data are processed to some extent, which reduces the burden of data storage, with a processing manner chosen according to need so that the important features of the historical labeling confidences are retained. For example, the labeling confidences of all historical labelings may be averaged and the average used as the historical labeling confidence, and so on.
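The space-saving option just described, keeping only a running average rather than every historical confidence, might be sketched as follows (a hypothetical `History` helper, not part of this application):

```python
class History:
    """Running mean of past labeling confidences; stores two numbers
    instead of the whole list of historical labelings."""
    def __init__(self):
        self.mean = 0.0
        self.count = 0

    def add(self, labeling_confidence):
        # Incremental mean update: mean_n = mean_{n-1} + (x - mean_{n-1}) / n
        self.count += 1
        self.mean += (labeling_confidence - self.mean) / self.count
```

Each verification then only updates two stored values per object instead of appending to a growing list.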
The initial labeling confidence and the historical labeling confidence are fused to obtain the target object confidence of the object to be verified. The fusion can be done in various ways, and other balancing parameters can be set in the fusion process to make it more reasonable; the specific fusion mode and parameter settings can be chosen flexibly according to the actual verification scenario and are not limited here. By introducing the historical labeling confidence of the object to be verified, whether the object can pass verification is determined from both its historical operations and its current operation, which significantly improves the security of object verification.
For example, if Xiao Bai's historical labeling confidence on the verification page displayed before the payment page is 0.7 and the initial labeling confidence is 0.6, then 0.6 and 0.7 can be fused in a certain way to obtain Xiao Bai's target object confidence of 0.68.
In some embodiments, the step of fusing the initial annotation confidence and the historical annotation confidence to obtain a target object confidence corresponding to the object to be verified may include:
acquiring a first weight corresponding to the initial labeling confidence and a second weight corresponding to the historical labeling confidence; and, based on the first weight and the second weight, weighting the initial labeling confidence and the historical labeling confidence to obtain a target object confidence corresponding to the object to be verified.
Specifically, because the use scenarios of object verification differ, the current operation and the historical operations may matter differently to object verification, so a first weight and a second weight may be set for the initial labeling confidence and the historical labeling confidence respectively, and weighting performed according to these weights, effectively changing the influence of each confidence on the target object confidence of the object to be verified. For example, for frequent object verification with a higher possibility of being attacked, such as object verification before the account login page or before the payment page, the object labeling result of the current operation can be emphasized and the first weight of the initial labeling confidence set larger; for object verification requiring only general security, such as sending a message within an account, the second weight of the historical labeling confidence may be increased, and so on.
For example, setting the weight of the historical labeling confidence 0.7 to 0.2 and the weight of the initial labeling confidence 0.6 to 0.8, a simple weighted sum of the two gives a target object confidence of 0.62.
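The weighted fusion above can be sketched as follows (the default weights are the scenario-dependent choices discussed above, not fixed values):

```python
def fuse_confidences(initial_conf, history_conf, w_initial=0.8, w_history=0.2):
    """Weighted fusion of the current (initial) labeling confidence and the
    historical labeling confidence into the target object confidence."""
    return w_initial * initial_conf + w_history * history_conf
```

`fuse_confidences(0.6, 0.7)` reproduces the 0.62 of the example.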
105. And when the confidence coefficient of the target object meets the preset condition, determining that the object to be verified passes the verification.
The preset condition may include the condition for judging whether the object to be verified corresponding to the target object confidence can pass verification, and may include a preset threshold, a preset factor, and the like. For example, the range of the preset threshold should match the range of the target object confidence. Selecting a proper preset threshold is crucial to completing object verification: a threshold set too high or too low may yield a verification conclusion that does not match the actual situation, so the preset threshold needs to be determined by considering the actual application scenario, the distribution of target object confidences, and so on, to finally obtain a reasonable preset threshold.
When the confidence of the target object meets the preset condition, the object to be verified can be determined to pass the verification, and the process of object verification can be completed.
For example, if the preset threshold is 0.58, Xiao Bai's target object confidence of 0.62 is greater than 0.58, so Xiao Bai passes verification and Xiao Bai's application A may display the payment page.
In some embodiments, the object labeling method may further include the steps of:
and when the confidence coefficient of the target object does not meet the preset condition, determining that the object to be verified does not pass the verification.
The target content is data to be labeled. In order to make full use of the data, if the current object to be verified does not pass verification, the target content can be allocated to a new object to be verified and displayed on the new object's verification page, so that the target content can still be labeled and a reliable labeling result obtained.
For example, if the preset threshold is 0.65, Xiao Bai's target object confidence of 0.62 is less than 0.65, so Xiao Bai cannot pass verification, and the terminal displays the target content on the verification page of a new object to be verified, Xiao Lv.
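A minimal sketch of this pass/fail decision with reallocation of the content on failure (the queue and the use of `>=` are illustrative assumptions, not fixed by the embodiments):

```python
def handle_verification(target_conf, threshold, target_content, reassign_queue):
    """Pass the object if its target object confidence meets the preset
    threshold; otherwise queue the target content so it can be shown to a
    new object to be verified and still get labeled."""
    if target_conf >= threshold:
        return True
    reassign_queue.append(target_content)
    return False
```

With threshold 0.58 a confidence of 0.62 passes; with threshold 0.65 it fails and the content is queued for the next object to be verified.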
In some embodiments, the object labeling method may further include the steps of:
fusing the confidence coefficient of the target object and the initial labeling confidence coefficient to obtain a labeling confidence coefficient corresponding to the labeling result of the object; and determining the labeling result of the target content based on the labeling confidence corresponding to the object labeling result.
In this embodiment, the target object confidence needs to pass verification; that is, the object to be verified performing the labeling operation is determined to be a real object. In order to implement data labeling and complete the data-labeling process so as to obtain a reliable labeling result, the target object confidence and the initial labeling confidence can be fused to obtain the labeling confidence of the object labeling result. Fusing the influence of the operating subject (the object to be verified) and of the operation (labeling) on the labeling result measures the reliability of the object labeling result more comprehensively. Then, the labeling result of the target content is determined based on the object labeling result and its corresponding labeling confidence.
For example, after Xiao Bai passes verification, the target object confidence 0.62 and the initial labeling confidence 0.6 are fused to obtain a labeling confidence of 0.7 for Xiao Bai's object labeling result, and the labeling result of the target content is then determined according to the labeling confidence 0.7.
In some embodiments, the step "determining an annotation result of the target content based on the annotation confidence corresponding to the object annotation result" may include:
fusing the labeling confidence degrees corresponding to the same object labeling results in the candidate labeling results to obtain at least one candidate confidence degree, wherein the candidate labeling results comprise at least two object labeling results of the target content and a labeling confidence degree corresponding to each object labeling result; and determining the maximum value in the candidate confidence degrees to obtain a target confidence degree, and setting an object labeling result corresponding to the target confidence degree as a labeling result of the target content.
In actual operation, one piece of target content can be allocated to a plurality of objects to be verified, each of which labels it to obtain its own object labeling result. After object verification is completed, the objects to be verified, their corresponding target object confidences, and the initial labeling confidences corresponding to the object labeling results are stored, and these confidences are fused to obtain the labeling confidence corresponding to each object labeling result. In the end, one piece of target content can have several groups of object labeling results, each with a corresponding labeling confidence.
In order to obtain a more accurate labeling result for the target content, all the obtained object labeling results and their corresponding labeling confidences can be integrated and compared. First, the identical object labeling results are determined, and the labeling confidences corresponding to identical results are integrated to obtain candidate confidences; then the maximum of the candidate confidences is determined. The object labeling result corresponding to that maximum has the highest probability of being the correct labeling result, so it is taken as the labeling result of the target content, completing the data-labeling process.
For example, the target content has 4 groups of object labeling results, each corresponding to a labeling confidence: object labeling result 11 with confidence 0.7, object labeling result 12 with confidence 0.9, object labeling result 13 with confidence 0.65, and object labeling result 14 with confidence 0.7, where results 11, 13 and 14 are identical. After judgment and integration, two candidate confidences are obtained: the first candidate confidence 2.05 (the sum of the labeling confidences corresponding to object labeling results 11, 13 and 14) and the second candidate confidence 0.9 (the labeling confidence corresponding to object labeling result 12). The first candidate confidence 2.05 is the maximum, so its corresponding object labeling result is determined to be the labeling result of the target content.
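Summing the labeling confidences of identical object labeling results and taking the maximum can be sketched as:

```python
from collections import defaultdict

def final_label(labeled_results):
    """labeled_results: list of (object_labeling_result, labeling_confidence)
    pairs. Sums the confidences of identical results (the candidate
    confidences) and returns the result with the largest candidate
    confidence together with that confidence."""
    candidates = defaultdict(float)
    for result, confidence in labeled_results:
        candidates[result] += confidence
    return max(candidates.items(), key=lambda item: item[1])
```

For the example above (treating the identical results 11, 13 and 14 as one answer "R1" and result 12 as "R2"), `final_label([("R1", 0.7), ("R2", 0.9), ("R1", 0.65), ("R1", 0.7)])` returns "R1" with candidate confidence 2.05, up to floating-point rounding.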
In some embodiments, the object labeling method may further include the steps of:
preprocessing the target content to obtain words to be annotated; mapping the words to be labeled to a vector space to generate word vectors corresponding to the words to be labeled; and inputting the word vectors into a labeling model for labeling to obtain a prediction labeling result, wherein the prediction labeling result comprises at least two prediction results and a prediction confidence corresponding to each prediction result.
Before the predicted labeling result of the target content is generated by the labeling model, the target content can be preprocessed so that its format conforms to the standard input format of the labeling model. Specifically, the target content can be preprocessed into a plurality of words; for example, if the target content is a picture containing the sentence "I eat rice dumplings", it needs to be processed into "I", "eat" and "rice dumplings". The preprocessing into words can be done manually or by computer. The words then need to be mapped to a vector space to obtain vector expressions (i.e. word vectors); this converts the words into a vector form that a computer can recognize and process, facilitating the subsequent prediction. Converting words into word vectors can be done with the Continuous Bag-of-Words Model (CBOW) and the like. In some cases sentences may also be converted into vector form, which may be done with the Distributed Memory Model of Paragraph Vectors (PV-DM) and the like. For example, the word "I" may be converted into word vector a by the bag-of-words model.
Then, the word vector is input into the labeling model to obtain the predicted labeling result. The labeling model is selected flexibly according to actual requirements; for example, a neural network model can be configured and trained as needed. The training process may include inputting data to be labeled together with its labeling result (i.e. the true value), comparing the model's output (the predicted value) with the input labeling result, adjusting the parameters of the neural network model based on the comparison, and repeating this output-and-adjust process until parameters meeting the requirements are obtained, that is, the loss is low, the gradient no longer decreases, and the error between the true value and the predicted value is small. A trained neural network model (i.e. the labeling model) is thereby obtained, through which labeling can be completed to produce the predicted labeling result. For example, if word vector a is input into a labeling model that performs part-of-speech tagging, the part of speech of word vector a can be obtained through the model.
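The prediction confidences output by such a model are typically produced by a softmax function over the model's raw scores (the function named in the prediction probability layer described later for fig. 6); a minimal, dependency-free sketch:

```python
import math

def softmax(logits):
    """Turn raw model scores into prediction confidences that are
    positive and sum to 1."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]
```

`softmax([2.0, 0.5, 0.5])` gives three confidences summing to 1, with the largest assigned to the first score.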
The method includes the steps of firstly displaying target content of an object to be verified, which needs to be labeled, obtaining an object labeling result of the object to be verified, which is labeled according to the target content, then determining an initial labeling confidence coefficient corresponding to the object labeling result based on a predicted labeling result of the target content, wherein the predicted labeling result is obtained by labeling the target content by using a labeling model, fusing the initial labeling confidence coefficient and a historical labeling confidence coefficient to obtain a target object confidence coefficient corresponding to the object to be verified, wherein the historical labeling confidence coefficient is a labeling confidence coefficient obtained by labeling the object to be verified according to historical content, and finally determining that the object to be verified passes verification when the target object confidence coefficient meets a preset condition.
According to the scheme, the object labeling result is not used directly as the basis for judgment but is converted into a labeling confidence; the target object confidence is obtained by combining the confidence of the current operation (i.e. the initial labeling confidence) with the confidence of the historical operations (i.e. the historical labeling confidence), and the object is verified through the target object confidence.
The method described in the above embodiments is further illustrated in detail by way of example.
As shown in fig. 3, fig. 3 is a general flowchart of an optional process for completing object verification and labeling. Data to be labeled are stored in a database. When the terminal selects target data from the database, the data generation module converts the target data into target content that can be displayed to the object to be verified. The labeling platform can display the target content and collect the user labeling result (i.e. the object labeling result) of the object to be verified. The user determination module determines, according to the predicted labeling result, the object labeling result and the historical labeling confidence, whether the object to be verified is a real user (i.e. whether verification is passed), and obtains the probability that the object to be verified is a real user (i.e. the target object confidence). If the object to be verified is not a real user, the object labeling result is discarded; if it is a real user, the final confidence of the object labeling result is obtained based on the initial labeling confidence obtained by the automatic labeling module and the target object confidence, and the object labeling result and its corresponding final confidence are stored in the database.
As shown in fig. 4, fig. 4 is a schematic flowchart of an object verification method according to an embodiment of the present application.
The object authentication method may include:
201. and the terminal displays the target content to be marked of the object to be verified.
The target content is data to be labeled, the data to be labeled can be collected according to a group with data labeling requirements, and the collection can be carried out through a website, an application program and the like. After the data collection is completed, the data can be processed, a difficulty coefficient is set for the data to be labeled according to the prediction labeling result of the labeling model, and the data to be labeled is stored according to the difficulty coefficient.
When the target content is determined, random sampling can be adopted, with lower weight allocated during sampling to data to be labeled with a high difficulty coefficient, so that it is less likely to be sampled. After the target content is determined from the data to be labeled, if it is in text form it can be converted into picture form, and decorations can be added to the picture, or the text can be displayed on the picture in artistic or exaggerated fonts, which increases the difficulty of machine recognition and enhances the security of object verification.
And after the target content in the form of the picture is obtained, the picture can be displayed on the electronic equipment of the object to be marked.
For example, the terminal runs an application program whose user Xiao Zi is the object to be verified. The application program displays the target content to be labeled by Xiao Zi on the verification page. Referring to fig. 5, the target content is a picture displaying the text "whether the following words are similar words", and the page also includes "yes" and "no" buttons for inputting the labeling result.
202. And the terminal acquires an object labeling result of the object to be verified for labeling the target content.
For example, after Xiao Zi completes the labeling, the object labeling result acquired by the terminal is "yes".
203. The terminal labels the target content through the labeling model to obtain a prediction labeling result, wherein the prediction labeling result comprises at least two prediction results and a prediction confidence corresponding to each prediction result.
The terminal may use a trained neural network model (i.e. the labeling model) to label the target content and obtain its predicted labeling result. The structure of the trained neural network model may be as shown in fig. 6, comprising a word embedding layer (WordEmbedding), a Recurrent Neural Network layer (RNN), and a prediction probability layer that generates the result, with the prediction result output through the trained neural network model. The prediction probability layer can be implemented with a function such as the softmax function. In the training stage, the neural network model is trained by inputting data to be labeled together with its labeling result, and the parameters are continuously adjusted until the model can use appropriate parameters to output predicted labeling results meeting the requirements. The trained neural network model receives a plurality of words and outputs their predicted labeling results, which include prediction results and a prediction confidence corresponding to each prediction result.
For example, using the trained neural network model to label whether the two displayed words are similar words, a predicted labeling result can be obtained: yes (confidence 0.15) and no (confidence 0.84).
204. And the terminal determines an initial labeling confidence corresponding to the object labeling result based on the predicted labeling result of the target content.
For example, according to the predicted labeling result, yes (confidence 0.15) and no (confidence 0.84), the terminal determines the initial labeling confidence of Xiao Zi's object labeling result "yes" to be 0.15.
205. And the terminal fuses the initial labeling confidence coefficient and the historical labeling confidence coefficient to obtain a target object confidence coefficient corresponding to the object to be verified, wherein the historical labeling confidence coefficient is the labeling confidence coefficient obtained by labeling the object to be verified aiming at the historical content.
Specifically, as shown in fig. 7, the historical labeling confidence may be the average of the confidences of the samples the user previously labeled. The sample labeled by the user (i.e., the target content) is input into the automatic labeling module to obtain the prediction labeling result, from which the initial labeling confidence of the object labeling result is derived. The target object confidence is then obtained based on the historical labeling confidence and the initial labeling confidence; for example, it may be the average of the confidences of all samples the user has labeled.
For example, Xiao Zi's historical labeling confidence is 0.3. Fusing it with the initial labeling confidence of 0.15, for instance by averaging, gives Xiao Zi a target object confidence of 0.225 (about 0.23).
206. And when the confidence coefficient of the target object is greater than a preset threshold value, the terminal determines that the object to be verified passes the verification.
For example, if the application sets the preset threshold to 0.87, Xiao Zi's target object confidence of 0.23 is below the threshold, so the terminal determines that Xiao Zi fails verification and treats Xiao Zi as an unreal user.
207. And the terminal fuses the confidence of the target object corresponding to the verified object to be verified and the initial labeling confidence to obtain the labeling confidence corresponding to the object labeling result.
For example, suppose another user, Xiao Ming, is an object to be verified that has passed verification and has labeled the same target content (the verification interface is shown in fig. 5). If Xiao Ming's initial labeling confidence is 0.84 and historical labeling confidence is 0.67, the labeling confidence of Xiao Ming's object labeling result ("no") can be obtained from these two confidences, for example as their product: 0.84 × 0.67 ≈ 0.56.
208. And the terminal determines the labeling result of the target content based on the labeling confidence corresponding to the object labeling result.
For example, for the labeling task of fig. 5, two groups of object labeling results and their labeling confidences already exist: 0.66 for the result "no" and 0.50 for the result "yes". Adding the labeling confidence 0.56 of Xiao Ming's object labeling result ("no"), the total confidence of the "no" results is 1.22, higher than the total confidence of the "yes" results, so the final labeling result of "whether big and small are synonyms" can be determined as "no".
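The aggregation in steps 207 and 208, summing the labeling confidences of identical object labeling results and taking the result with the highest total, can be sketched as follows. Function and variable names are illustrative, not from the patent:

```python
from collections import defaultdict

def final_labeling_result(candidates):
    """candidates: (object labeling result, labeling confidence) pairs from
    several verified users. Confidences of identical results are summed and
    the result with the highest total becomes the labeling result."""
    totals = defaultdict(float)
    for result, conf in candidates:
        totals[result] += conf
    best = max(totals, key=totals.get)
    return best, dict(totals)

# The fig. 5 example: two prior results plus Xiao Ming's "no" at 0.56.
label, totals = final_labeling_result([("no", 0.66), ("yes", 0.50), ("no", 0.56)])
# "no" accumulates 1.22 versus 0.50 for "yes", so the final label is "no".
```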
In this embodiment, the terminal first displays the target content to be labeled of the object to be verified, and obtains the object labeling result that the object to be verified gives for the target content. The terminal then labels the target content through the labeling model to obtain a prediction labeling result comprising at least two prediction results and a prediction confidence corresponding to each prediction result, and determines an initial labeling confidence corresponding to the object labeling result based on that prediction labeling result. The terminal fuses the initial labeling confidence with a historical labeling confidence, i.e., the labeling confidence obtained from the object's labeling of historical content, to obtain a target object confidence corresponding to the object to be verified. When the target object confidence is greater than a preset threshold, the object to be verified is determined to pass verification. The terminal further fuses the target object confidence of the verified object with the initial labeling confidence to obtain the labeling confidence corresponding to the object labeling result, and finally determines the labeling result of the target content based on that labeling confidence.
According to this scheme, the object labeling result is not used directly as data for judgment but is converted into a judgment confidence: the confidence of the current operation (i.e., the initial labeling confidence) is combined with the confidence of historical operations (i.e., the historical labeling confidence) to obtain the target object confidence, through which the object is verified. In addition, the scheme combines data labeling with object verification, labeling the data to be labeled while verifying the object to be verified, which saves resources significantly and improves both verification and labeling efficiency.
In order to better implement the object verification method provided by the embodiments of the present application, an embodiment of the present application further provides a device based on the object verification method. The terms used have the same meanings as in the object verification method above, and specific implementation details can refer to the description in the method embodiments.
As shown in fig. 8, fig. 8 is a schematic structural diagram of an object verification apparatus according to an embodiment of the present application, where the object verification apparatus may include a first display module 301, an obtaining module 302, a determining module 303, a fusing module 304, and a verification module 305, where:
the first display module 301 is configured to display target content to be labeled of an object to be verified;
an obtaining module 302, configured to obtain an object tagging result that is tagged to a target content by an object to be verified;
the determining module 303 is configured to determine an initial annotation confidence corresponding to the object annotation result based on the predicted annotation result of the target content, where the predicted annotation result is obtained by annotating the target content by using an annotation model;
a fusion module 304, configured to fuse the initial annotation confidence and the historical annotation confidence to obtain a target object confidence corresponding to the object to be verified, where the historical annotation confidence is an annotation confidence obtained by annotating the object to be verified with respect to the historical content;
the verification module 305 is configured to determine that the object to be verified passes verification when the confidence of the target object meets a preset condition.
In some embodiments of the present application, the prediction labeling result includes at least two prediction results and a prediction confidence corresponding to each prediction result, as shown in fig. 9, the determining module 303 includes a determining submodule 3031 and an obtaining submodule 3032, wherein,
a determining submodule 3031, configured to determine, from the prediction labeling result, a target prediction result that is the same as the object labeling result;
the obtaining sub-module 3032 is configured to obtain an initial labeling confidence corresponding to the object labeling result based on the prediction confidence corresponding to the target prediction result.
In some embodiments of the present application, the fusion module 304 includes an acquisition sub-module and a weighting sub-module, wherein,
the obtaining submodule is used for obtaining a first weight corresponding to the initial labeling confidence coefficient and a second weight corresponding to the historical labeling confidence coefficient;
and the weighting submodule is used for weighting the initial labeling confidence and the historical labeling confidence based on the first weight and the second weight to obtain a target object confidence corresponding to the object to be verified.
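The weighted fusion performed by these two submodules can be sketched as a weighted sum. The weight values below are illustrative assumptions; the patent does not fix them:

```python
def weighted_fusion(initial_conf, historical_conf, w1, w2):
    """Weighted fusion of step 205's variant: w1 and w2 are the first and
    second weights of the acquisition submodule."""
    return w1 * initial_conf + w2 * historical_conf

# Equal weights of 0.5 reduce to the plain averaging of the running example.
averaged = weighted_fusion(0.15, 0.3, 0.5, 0.5)   # 0.225
# Weighting history more heavily changes the fused target object confidence.
history_heavy = weighted_fusion(0.15, 0.3, 0.3, 0.7)  # 0.255
```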
In some embodiments of the present application, the first display module 301 comprises a setup submodule, a determination submodule, and a display submodule, wherein,
the setting submodule is used for setting a marking difficulty coefficient for the target content of the object to be verified, which needs to be marked, based on the prediction marking result;
the determining submodule is used for determining the display condition of the target content according to the marking difficulty coefficient;
and the display submodule is used for displaying the target content of the object to be verified, which needs to be marked, when the display condition is triggered.
In some embodiments of the present application, the prediction labeling result includes at least two prediction results and a prediction confidence corresponding to each prediction result, and the setting sub-module is specifically configured to:
determining the maximum confidence coefficient of the prediction labeling result from the prediction confidence coefficients of the prediction labeling result; and setting a labeling difficulty coefficient for the target content of the object to be verified to be labeled based on the maximum confidence.
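One plausible mapping from the maximum prediction confidence to a labeling difficulty coefficient is sketched below; the patent only states that the difficulty is set based on the maximum confidence, so the specific formula (1 minus the maximum confidence) is an assumption for illustration:

```python
def labeling_difficulty(prediction_confidences):
    """Assumed mapping: the more certain the labeling model is (higher
    maximum prediction confidence), the easier the target content."""
    return 1.0 - max(prediction_confidences)

easy = labeling_difficulty([0.15, 0.84])  # model fairly sure, low difficulty
hard = labeling_difficulty([0.52, 0.48])  # model ambivalent, high difficulty
```

Under this mapping, content the model finds ambiguous gets a higher difficulty coefficient and can be gated behind stricter display conditions.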
In some embodiments of the present application, the object authentication apparatus further comprises:
the confidence coefficient module is used for fusing the confidence coefficient of the target object and the initial labeling confidence coefficient to obtain a labeling confidence coefficient corresponding to the object labeling result;
and the labeling result module is used for determining the labeling result of the target content based on the labeling confidence corresponding to the object labeling result.
In some embodiments of the present application, the annotation result module comprises a determination sub-module, a fusion sub-module, and a setting sub-module, wherein,
the determining submodule is used for determining the labeling result of the target content based on the labeling confidence corresponding to the object labeling result;
the fusion submodule is used for fusing the labeling confidence degrees corresponding to the same object labeling results in the candidate labeling results to obtain at least one candidate confidence degree, wherein the candidate labeling results comprise at least two object labeling results of the target content and a labeling confidence degree corresponding to each object labeling result;
and the setting submodule is used for determining the maximum value in the candidate confidence degrees to obtain a target confidence degree, and setting an object labeling result corresponding to the target confidence degree as a labeling result of the target content.
In some embodiments of the present application, the object authentication apparatus further comprises:
and the second display module is used for determining that the object to be verified does not pass the verification when the confidence coefficient of the target object does not meet the preset condition.
In some embodiments of the present application, the object authentication apparatus further comprises:
the preprocessing module is used for preprocessing the target content to obtain words to be annotated;
the generating module is used for mapping the words to be labeled to a vector space and generating word vectors corresponding to the words to be labeled;
and the prediction module is used for inputting the word vectors into the labeling model for labeling to obtain a prediction labeling result, and the prediction labeling result comprises at least two prediction results and a prediction confidence corresponding to each prediction result.
In this embodiment of the application, the first display module 301 first displays target content to be labeled of an object to be verified, the obtaining module 302 obtains an object labeling result of the object to be verified labeled with respect to the target content, the determining module 303 determines an initial labeling confidence corresponding to the object labeling result based on a predicted labeling result of the target content, the predicted labeling result is obtained by labeling the target content using a labeling model, the fusing module 304 fuses the initial labeling confidence and a historical labeling confidence to obtain a target object confidence corresponding to the object to be verified, the historical labeling confidence is a labeling confidence obtained by labeling the object to be verified with respect to the historical content, and finally, when the target object confidence satisfies a preset condition, the verifying module 305 determines that the object to be verified passes verification.
According to the scheme, the object labeling result is not directly used as data for data judgment, but converted into the judgment confidence coefficient, the confidence coefficient of the target object is obtained by combining the confidence coefficient of the current operation (namely the initial labeling confidence coefficient) and the confidence coefficient of the historical operation (namely the historical labeling confidence coefficient), and the object is verified through the confidence coefficient of the target object.
In addition, an embodiment of the present application further provides a computer device, where the computer device may be a terminal or a server, as shown in fig. 10, which shows a schematic structural diagram of the computer device according to the embodiment of the present application, and specifically:
the computer device may include components such as a processor 401 of one or more processing cores, memory 402 of one or more computer-readable storage media, a power supply 403, and an input unit 404. Those skilled in the art will appreciate that the computer device architecture illustrated in FIG. 10 is not intended to be limiting of computer devices and may include more or less components than those illustrated, or combinations of certain components, or different arrangements of components. Wherein:
the processor 401 is a control center of the computer device, connects various parts of the entire computer device using various interfaces and lines, and performs various functions of the computer device and processes data by running or executing software programs and/or modules stored in the memory 402 and calling data stored in the memory 402, thereby monitoring the computer device as a whole. Optionally, processor 401 may include one or more processing cores; preferably, the processor 401 may integrate an application processor and a modem processor, wherein the application processor mainly handles operating systems, user pages, application programs, and the like, and the modem processor mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 401.
The memory 402 may be used to store software programs and modules, and the processor 401 executes various functional applications and data processing by running the software programs and modules stored in the memory 402. The memory 402 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the data storage area may store data created according to use of the computer device, and the like. Further, the memory 402 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. Accordingly, the memory 402 may also include a memory controller to provide the processor 401 access to the memory 402.
The computer device further comprises a power supply 403 for supplying power to the various components, and preferably, the power supply 403 is logically connected to the processor 401 via a power management system, so that functions of managing charging, discharging, and power consumption are implemented via the power management system. The power supply 403 may also include any component of one or more dc or ac power sources, recharging systems, power failure detection circuitry, power converters or inverters, power status indicators, and the like.
The computer device may also include an input unit 404, the input unit 404 being operable to receive input numeric or character information and to generate keyboard, mouse, joystick, optical or trackball signal inputs related to user settings and function control.
Although not shown, the computer device may further include a display unit and the like, which are not described in detail herein. Specifically, in this embodiment, the processor 401 in the computer device loads the executable file corresponding to the process of one or more application programs into the memory 402 according to the following instructions, and the processor 401 runs the application programs stored in the memory 402, thereby implementing various functions as follows:
displaying target content to be marked of an object to be verified; acquiring an object labeling result of an object to be verified for labeling target content; determining an initial annotation confidence corresponding to the object annotation result based on the predicted annotation result of the target content, wherein the predicted annotation result is obtained by adopting an annotation model to label the target content; fusing the initial annotation confidence coefficient and the historical annotation confidence coefficient to obtain a target object confidence coefficient corresponding to the object to be verified, wherein the historical annotation confidence coefficient is an annotation confidence coefficient obtained by the object to be verified by labeling the historical content; and when the confidence coefficient of the target object meets the preset condition, determining that the object to be verified passes the verification.
According to the scheme, the object labeling result is not directly used as data for data judgment, but converted into the judgment confidence coefficient, the confidence coefficient of the target object is obtained by combining the confidence coefficient of the current operation (namely the initial labeling confidence coefficient) of the object and the confidence coefficient of the historical operation (namely the historical labeling confidence coefficient), and the object is verified through the confidence coefficient of the target object.
The above operations can be implemented in the foregoing embodiments, and are not described in detail herein.
The system related to the embodiment of the application can be a distributed system formed by connecting a client and a plurality of nodes (computer devices in any form in an access network, such as servers and terminals) in a network communication mode.
Taking a distributed system as a blockchain system as an example, referring to fig. 11, fig. 11 is an optional structural schematic diagram of the distributed system 110 applied to the blockchain system provided in this embodiment of the present application. It is formed by a plurality of nodes 1101 (computing devices in any form in an access network, such as servers and user terminals) and a client 1102; a Peer-to-Peer (P2P) network is formed between the nodes, and the P2P protocol is an application layer protocol operating on top of the Transmission Control Protocol (TCP). In a distributed system, any machine, such as a server or a terminal, can join to become a node, and a node comprises a hardware layer, a middle layer, an operating system layer, and an application layer.
Referring to the functions of each node in the blockchain system shown in fig. 11, the functions involved include:
1) routing, the basic function that a node has, is used to support communication between nodes.
Besides the routing function, the node may also have the following functions:
2) the application is used for being deployed in a block chain, realizing specific services according to actual service requirements, recording data related to the realization functions to form recording data, carrying a digital signature in the recording data to represent a source of task data, and sending the recording data to other nodes in the block chain system, so that the other nodes add the recording data to a temporary block when the source and integrity of the recording data are verified successfully.
For example, the services implemented by the application include:
2.1) A wallet, for providing functions of electronic money transactions, including initiating a transaction (i.e., sending the transaction record of the current transaction to other nodes in the blockchain system; after the other nodes verify it successfully, the record data of the transaction is stored in a temporary block of the blockchain as a response confirming that the transaction is valid). Of course, the wallet also supports querying the electronic money remaining at an electronic money address;
2.2) A shared ledger, for providing functions such as storage, query, and modification of account data. Record data of the operations on the account data is sent to other nodes in the blockchain system; after the other nodes verify its validity, the record data is stored in a temporary block as a response acknowledging that the account data is valid, and a confirmation may be sent to the node that initiated the operation.
2.3) Smart contracts: computerized agreements that can enforce the terms of a contract, implemented by code deployed on the shared ledger and executed when certain conditions are met, used for completing automated transactions according to actual business requirements, such as querying the logistics status of goods purchased by a buyer, or transferring the buyer's electronic money to the merchant's address after the buyer signs for the goods. Of course, smart contracts are not limited to contracts for executing trades; they may also execute contracts that process received information.
3) The blockchain, comprising a series of blocks (Blocks) connected to one another in the chronological order of their generation. New blocks cannot be removed once added to the blockchain, and the blocks record the data submitted by nodes in the blockchain system.
In this embodiment, the prediction labeling result, the object labeling result and its corresponding labeling confidence, and the labeling result of the target content may be stored in the shared ledger of the blockchain through a node, and a computer device (e.g., a terminal or a server) may obtain the prediction labeling result, the object labeling result and its corresponding labeling confidence, and the labeling result of the target content based on the data stored in the shared ledger.
Referring to fig. 12, fig. 12 is an optional schematic diagram of a Block Structure (Block Structure) provided in this embodiment, where each Block includes a hash value of a transaction record stored in the Block (hash value of the Block) and a hash value of a previous Block, and the blocks are connected by the hash value to form a Block chain. The block may include information such as a time stamp at the time of block generation. A block chain (Blockchain), which is essentially a decentralized database, is a string of data blocks associated by using cryptography, and each data block contains related information for verifying the validity (anti-counterfeiting) of the information and generating a next block.
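The block structure of fig. 12, where each block stores its own record hash plus the hash of the previous block, can be sketched as follows. This is a deterministic toy (the timestamp is fixed to 0 purely for reproducibility), not a real blockchain implementation:

```python
import hashlib
import json

def make_block(records, prev_hash):
    """Build a block holding record data, the previous block's hash, and
    this block's own SHA-256 hash over its serialized body."""
    body = {"records": records, "prev_hash": prev_hash, "timestamp": 0}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return {**body, "hash": digest}

# Hypothetical record payloads drawn from this embodiment's labeling data.
genesis = make_block(["object labeling result: no (0.56)"], prev_hash="0" * 64)
second = make_block(["labeling result of target content: no"], genesis["hash"])
# Any change to genesis would change its hash and break second's prev_hash link,
# which is what makes the recorded labeling data tamper-evident.
```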
It will be understood by those skilled in the art that all or part of the steps of the methods of the above embodiments may be performed by a computer program, which may be stored in a computer-readable storage medium and loaded and executed by a processor, or by related hardware controlled by the computer program.
To this end, embodiments of the present application further provide a storage medium, in which a computer program is stored, where the computer program can be loaded by a processor to execute the steps in any one of the object verification methods provided in the embodiments of the present application. For example, the computer program may perform the steps of:
displaying target content to be marked of an object to be verified; acquiring an object labeling result of an object to be verified for labeling target content; determining an initial annotation confidence corresponding to the object annotation result based on the predicted annotation result of the target content, wherein the predicted annotation result is obtained by adopting an annotation model to label the target content; fusing the initial annotation confidence coefficient and the historical annotation confidence coefficient to obtain a target object confidence coefficient corresponding to the object to be verified, wherein the historical annotation confidence coefficient is an annotation confidence coefficient obtained by the object to be verified by labeling the historical content; and when the confidence coefficient of the target object meets the preset condition, determining that the object to be verified passes the verification.
Wherein the storage medium may include: read Only Memory (ROM), Random Access Memory (RAM), magnetic or optical disks, and the like.
Since the computer program stored in the storage medium can execute the steps in any object verification method provided in the embodiments of the present application, beneficial effects that can be achieved by any object verification method provided in the embodiments of the present application can be achieved, which are detailed in the foregoing embodiments and will not be described herein again.
The object verification method and the object verification device provided by the embodiment of the present application are described in detail above, and a specific example is applied in the description to explain the principle and the implementation of the present application, and the description of the embodiment is only used to help understand the method and the core idea of the present application; meanwhile, for those skilled in the art, according to the idea of the present application, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present application.

Claims (10)

1. An object authentication method, comprising:
displaying target content to be marked of an object to be verified;
acquiring an object labeling result of the object to be verified for the target content labeling;
determining an initial annotation confidence corresponding to the object annotation result based on the predicted annotation result of the target content, wherein the predicted annotation result is obtained by adopting an annotation model to label the target content;
fusing the initial labeling confidence coefficient and the historical labeling confidence coefficient to obtain a target object confidence coefficient corresponding to the object to be verified, wherein the historical labeling confidence coefficient is a labeling confidence coefficient obtained by labeling the object to be verified aiming at historical contents;
and when the confidence coefficient of the target object meets a preset condition, determining that the object to be verified passes verification.
2. The method of claim 1, wherein the predicted annotation result comprises at least two predicted results and a prediction confidence corresponding to each predicted result, and wherein the determining the initial annotation confidence corresponding to the object annotation result based on the predicted annotation result of the target content comprises:
determining a target prediction result which is the same as the object labeling result from the prediction labeling results;
and acquiring an initial labeling confidence corresponding to the object labeling result based on the prediction confidence corresponding to the target prediction result.
3. The method according to claim 1, wherein the fusing the initial labeling confidence level and the historical labeling confidence level to obtain a target object confidence level corresponding to the object to be verified comprises:
acquiring a first weight corresponding to the initial labeling confidence coefficient and a second weight corresponding to the historical labeling confidence coefficient;
and based on the first weight and the second weight, carrying out weighting processing on the initial labeling confidence coefficient and the historical labeling confidence coefficient to obtain a target object confidence coefficient corresponding to the object to be verified.
4. The method according to claim 1, wherein the displaying of the target content to be labeled of the object to be verified comprises:
setting a marking difficulty coefficient for target content to be marked of the object to be verified based on the prediction marking result;
determining the display condition of the target content according to the marking difficulty coefficient;
and when the display condition is triggered, displaying the target content of the object to be verified, which needs to be marked.
5. The method according to claim 4, wherein the prediction labeling result includes at least two prediction results and a prediction confidence corresponding to each prediction result, and the setting of the labeling difficulty coefficient for the target content to be labeled of the object to be verified based on the prediction labeling result includes:
determining the maximum confidence coefficient of the prediction labeling result from the prediction confidence coefficients of the prediction labeling result;
and setting a labeling difficulty coefficient for the target content of the object to be verified to be labeled based on the maximum confidence.
6. The method of claim 1, further comprising:
fusing the confidence coefficient of the target object and the initial labeling confidence coefficient to obtain a labeling confidence coefficient corresponding to the object labeling result;
and determining the labeling result of the target content based on the labeling confidence corresponding to the object labeling result.
7. The method of claim 6, wherein the determining the labeling result of the target content based on the labeling confidence corresponding to the object labeling result comprises:
fusing the labeling confidences corresponding to identical object labeling results among candidate labeling results to obtain at least one candidate confidence, wherein the candidate labeling results comprise at least two object labeling results of the target content and a labeling confidence corresponding to each object labeling result;
and determining the maximum value among the candidate confidences as a target confidence, and setting the object labeling result corresponding to the target confidence as the labeling result of the target content.
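The fusion-then-maximum step of claim 7 can be sketched as follows. Summation is used here as the fusion operation purely for illustration — the claim does not fix a particular fusion function — and all names are hypothetical.

```python
from collections import defaultdict

def decide_label(candidate_results):
    """Fuse confidences of identical object labeling results, then pick the best.

    `candidate_results` is a list of (label, confidence) pairs produced by
    different objects labeling the same target content. Returns the object
    labeling result with the highest fused (candidate) confidence.
    """
    fused = defaultdict(float)
    for label, confidence in candidate_results:
        fused[label] += confidence  # illustrative fusion: simple sum
    target_label = max(fused, key=fused.get)
    return target_label, fused[target_label]

# "cat" is reported twice, so its fused candidate confidence outweighs "dog".
label, conf = decide_label([("cat", 0.6), ("dog", 0.7), ("cat", 0.5)])
```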
8. The method of claim 1, further comprising:
and when the target object confidence does not meet the preset condition, determining that the object to be verified fails the verification.
9. The method according to any one of claims 1 to 8, further comprising:
preprocessing the target content to obtain words to be labeled;
mapping the words to be labeled into a vector space to generate word vectors corresponding to the words to be labeled;
and inputting the word vectors into a labeling model for labeling to obtain a prediction labeling result, wherein the prediction labeling result comprises at least two prediction results and a prediction confidence corresponding to each prediction result.
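The three steps of claim 9 (preprocess, embed, label) can be sketched end to end as below. The tiny embedding table, the two candidate labels, and the softmax-scored model are toy stand-ins chosen for the sketch; the patent does not specify the tokenizer, the vector space, or the labeling model architecture.

```python
import math

# Illustrative vector space: each known word maps to a 2-d word vector.
EMBEDDINGS = {"red": [1.0, 0.0], "bus": [0.0, 1.0]}

def preprocess(target_content):
    # Tokenize the target content into words to be labeled
    # (a real system would also normalize, strip punctuation, etc.).
    return target_content.lower().split()

def embed(words):
    # Map each word into the vector space; unknown words get a zero vector.
    return [EMBEDDINGS.get(w, [0.0, 0.0]) for w in words]

def labeling_model(word_vectors):
    # Toy model: score two candidate labels from the summed vectors and
    # normalize with softmax, so each prediction result carries a
    # prediction confidence and the claim's "at least two results" holds.
    score_color = sum(v[0] for v in word_vectors)
    score_object = sum(v[1] for v in word_vectors)
    z = math.exp(score_color) + math.exp(score_object)
    return {"color": math.exp(score_color) / z,
            "object": math.exp(score_object) / z}

prediction = labeling_model(embed(preprocess("Red bus")))
```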
10. An object verification apparatus, comprising:
a first display module, configured to display target content to be labeled of an object to be verified;
an acquisition module, configured to acquire an object labeling result obtained by the object to be verified labeling the target content;
a determining module, configured to determine an initial labeling confidence corresponding to the object labeling result based on a prediction labeling result of the target content, wherein the prediction labeling result is obtained by labeling the target content with a labeling model;
a fusion module, configured to fuse the initial labeling confidence and a historical labeling confidence to obtain a target object confidence corresponding to the object to be verified, wherein the historical labeling confidence is a labeling confidence obtained by the object to be verified labeling historical content;
and a verification module, configured to determine that the object to be verified passes the verification when the target object confidence meets a preset condition.
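The fusion and verification modules of the apparatus can be sketched as one function. The weighted average used to fuse the initial labeling confidence with the historical labeling confidences, and the fixed threshold standing in for the preset condition, are illustrative assumptions — the claims leave both the fusion operation and the condition unspecified.

```python
def verify_object(initial_confidence, historical_confidences,
                  threshold=0.5, weight=0.5):
    """Fuse labeling confidences and check the preset condition (a sketch).

    `historical_confidences` are labeling confidences the object earned on
    historical content; with no history, the initial confidence stands alone.
    Returns the target object confidence and whether verification passes.
    """
    if historical_confidences:
        history = sum(historical_confidences) / len(historical_confidences)
    else:
        history = initial_confidence
    target_object_confidence = weight * initial_confidence + (1 - weight) * history
    return target_object_confidence, target_object_confidence >= threshold

# An object with a consistent labeling history passes verification.
conf, passed = verify_object(0.8, [0.7, 0.9])
```

Under this sketch, a bot that labels poorly accumulates low historical confidences, so even an occasional correct label keeps its fused target object confidence below the threshold.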
CN202010196376.XA 2020-03-19 2020-03-19 Object verification method and device Active CN111414609B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010196376.XA CN111414609B (en) 2020-03-19 2020-03-19 Object verification method and device


Publications (2)

Publication Number Publication Date
CN111414609A true CN111414609A (en) 2020-07-14
CN111414609B CN111414609B (en) 2024-01-26

Family

ID=71491239

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010196376.XA Active CN111414609B (en) 2020-03-19 2020-03-19 Object verification method and device

Country Status (1)

Country Link
CN (1) CN111414609B (en)

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104038346A (en) * 2014-06-24 2014-09-10 五八同城信息技术有限公司 Verification method and system
CN104200140A (en) * 2014-09-28 2014-12-10 北京奇虎科技有限公司 Method and device providing verification code
CN104794385A (en) * 2015-03-03 2015-07-22 新浪网技术(中国)有限公司 Information verification method and device
US20160155000A1 (en) * 2013-11-30 2016-06-02 Beijing Zhigu Rui Tuo Tech Co., Ltd. Anti-counterfeiting for determination of authenticity
CN106156595A (en) * 2015-04-02 2016-11-23 深圳市腾讯计算机系统有限公司 A kind of method, Apparatus and system being carried out by identifying code picture verifying
CN106295278A (en) * 2016-08-11 2017-01-04 深圳市金立通信设备有限公司 A kind of method sending checking information and terminal
CN107241320A (en) * 2017-05-26 2017-10-10 微梦创科网络科技(中国)有限公司 A kind of man-machine discrimination method and identification system based on image
CN108010097A (en) * 2017-11-30 2018-05-08 广州品唯软件有限公司 Generation, verification method and the device of identifying code image
CN108121906A (en) * 2016-11-28 2018-06-05 阿里巴巴集团控股有限公司 A kind of verification method, device and computing device
CN109933971A (en) * 2019-02-27 2019-06-25 珠海格力电器股份有限公司 A kind of verification method based on identifying code, device, electronic equipment and storage medium
CN110162955A (en) * 2019-05-16 2019-08-23 同盾控股有限公司 Man-machine recognition methods, device, medium and electronic equipment
CN110598392A (en) * 2019-09-12 2019-12-20 同盾控股有限公司 Man-machine verification method and device, storage medium and electronic equipment


Also Published As

Publication number Publication date
CN111414609B (en) 2024-01-26

Similar Documents

Publication Publication Date Title
CN109919316A (en) The method, apparatus and equipment and storage medium of acquisition network representation study vector
CN109033068A (en) It is used to read the method, apparatus understood and electronic equipment based on attention mechanism
CN111831826B (en) Training method, classification method and device of cross-domain text classification model
CN111046158B (en) Question-answer matching method, model training method, device, equipment and storage medium
CN110727761B (en) Object information acquisition method and device and electronic equipment
CN111079015A (en) Recommendation method and device, computer equipment and storage medium
CN110443236A (en) Text will put information extracting method and device after loan
CN113014566B (en) Malicious registration detection method and device, computer readable medium and electronic device
US20220237917A1 (en) Video comparison method and apparatus, computer device, and storage medium
CN111324773A (en) Background music construction method and device, electronic equipment and storage medium
CN113011646A (en) Data processing method and device and readable storage medium
CN112925911A (en) Complaint classification method based on multi-modal data and related equipment thereof
CN115687647A (en) Notarization document generation method and device, electronic equipment and storage medium
CN113255327B (en) Text processing method and device, electronic equipment and computer readable storage medium
CN111143454B (en) Text output method and device and readable storage medium
CN114330476A (en) Model training method for media content recognition and media content recognition method
CN111931503B (en) Information extraction method and device, equipment and computer readable storage medium
CN111651989B (en) Named entity recognition method and device, storage medium and electronic device
CN116756281A (en) Knowledge question-answering method, device, equipment and medium
CN113392190B (en) Text recognition method, related equipment and device
CN111414609B (en) Object verification method and device
CN112989024B (en) Method, device and equipment for extracting relation of text content and storage medium
CN111859985B (en) AI customer service model test method and device, electronic equipment and storage medium
CN114444040A (en) Authentication processing method, authentication processing device, storage medium and electronic equipment
CN114579860B (en) User behavior portrait generation method, device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant