CN112580674A - Picture identification method, computer equipment and storage medium

Picture identification method, computer equipment and storage medium

Info

Publication number
CN112580674A
Authority
CN
China
Prior art keywords
information
picture
data
content information
information element
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910926845.6A
Other languages
Chinese (zh)
Inventor
李建
吴攀
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alibaba Group Holding Ltd
Original Assignee
Alibaba Group Holding Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alibaba Group Holding Ltd
Priority to CN201910926845.6A
Publication of CN112580674A

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/22: Matching criteria, e.g. proximity measures
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/23: Clustering techniques

Abstract

The embodiment of the application discloses a picture identification method. The method comprises the following steps: obtaining content information, wherein the content information comprises a plurality of information elements and the plurality of information elements comprise at least one picture information element; extracting feature data of the information elements in the content information; determining difference data between the picture information elements and other information elements according to the feature data; and determining a feature type of the content information according to the feature data and the difference data. By introducing the dimension of differences between pictures, the feature type is analyzed over the content information as a whole, which avoids the low accuracy of analyzing features from a single picture and improves the accuracy of determining the feature type of the content information.

Description

Picture identification method, computer equipment and storage medium
Technical Field
The present application relates to the field of data processing technologies, and in particular, to a picture recognition method, a computer device, and a computer-readable storage medium.
Background
With the popularization of the mobile internet, the volume of online transactions keeps growing, and the supervision and handling of fake evaluations has become increasingly urgent in order to build a healthy online shopping environment. However, professional positive reviews, negative reviews and paid "order acceptance" have formed a professional gray industry chain.
In e-commerce transactions, it is normal to evaluate the quality, use experience and the like of a purchased product. However, some evaluation content does not objectively describe the target product; instead, it contains information added for purposes such as guiding users to purchase other products, including but not limited to communication account numbers, promotional information and web page links. Such advertisement evaluations are abnormal and belong to evaluation content carrying risks.
The applicant has found that the form of malicious evaluations changes very quickly, and identifying malicious pictures is more difficult than identifying malicious text. For example, a picture may contain various kinds of hidden advertisement information that is blurred, concealed or varied within the picture, so methods that identify advertisements based on a single picture, such as OCR (Optical Character Recognition), suffer from low accuracy.
Disclosure of Invention
In view of the above, the present application is proposed to provide a picture recognition method, a computer device and a computer-readable storage medium that overcome, or at least partially solve, the above problems.
According to an aspect of the present application, there is provided a picture recognition method, including:
acquiring content information, wherein the content information comprises a plurality of information elements, and the plurality of information elements comprise at least one picture information element;
extracting characteristic data of information elements in the content information;
determining difference data between the picture information element and other information elements according to the characteristic data;
and determining the characteristic type of the content information according to the characteristic data and the difference data.
Optionally, the determining the feature type of the content information according to the feature data and the difference data includes:
determining the characteristic value of the information element by taking the characteristic data of the information element as input according to a characteristic value recognition model;
and determining the characteristic type of the content information according to the characteristic value and the difference data.
Optionally, before the determining the feature value of the information element according to a feature value recognition model by using the feature data of the information element as an input, the method further includes:
and training the characteristic value recognition model by adopting the information element samples and the characteristic types of the corresponding marks.
Optionally, the content information includes a text information element, the feature data includes description information, and the determining, according to the feature data, difference data between the picture information element and other information elements includes:
and determining difference data between the picture information element and the text information element according to the description information of the picture information element and the description information of the text information element.
Optionally, the determining, according to the feature data, difference data between the picture information element and other information elements includes:
and comparing the characteristic data among the picture information elements to obtain difference data among the picture information elements.
Optionally, the determining, according to the feature data, difference data between the picture information element and other information elements includes:
clustering a plurality of picture information elements according to the characteristic data;
calculating difference data between the clusters of pictures as difference data between the picture information elements.
Optionally, before the extracting the feature data of the information element in the content information, the method further includes:
searching the associated information of the content information, wherein the associated information comprises at least one of the following: a picture information element, a text information element, a video information element;
and adding the associated information into the content information.
Optionally, the associated information includes a picture information element, and before the searching for the associated information of the content information, the method further includes:
and determining that the number of picture information elements in the content information does not meet the preset requirement.
Optionally, the content information includes at least one of comment content information and video content information.
Optionally, when the content information is video content information, before the extracting feature data of information elements in the content information, the method further includes:
extracting a plurality of video frames in the video content information as the picture information element.
According to another aspect of the present application, there is provided a picture recognition apparatus including:
the information acquisition module is used for acquiring content information, wherein the content information comprises a plurality of information elements, and the plurality of information elements comprise at least one picture information element;
the data extraction module is used for extracting the characteristic data of the information elements in the content information;
a difference determining module, configured to determine difference data between the picture information element and other information elements according to the feature data;
and the type determining module is used for determining the characteristic type of the content information according to the characteristic data and the difference data.
Optionally, the type determining module includes:
the characteristic value determining submodule is used for determining the characteristic value of the information element by taking the characteristic data of the information element as input according to a characteristic value recognition model;
and the type determining submodule is used for determining the characteristic type of the content information according to the characteristic value and the difference data.
Optionally, the apparatus further comprises:
and the training module is used for training the characteristic value recognition model by adopting information element samples and the correspondingly labeled characteristic types before determining the characteristic value of the information element according to the characteristic value recognition model by taking the characteristic data of the information element as input.
Optionally, the content information includes a text information element, the feature data includes description information, and the difference determining module includes:
and the difference determining submodule is used for determining difference data between the picture information element and the text information element according to the description information of the picture information element and the description information of the text information element.
Optionally, the difference determining module comprises:
and the comparison submodule is used for comparing the characteristic data among the picture information elements to obtain the difference data among the picture information elements.
Optionally, the difference determining module comprises:
the clustering submodule is used for clustering a plurality of picture information elements according to the characteristic data;
and the calculating sub-module is used for calculating difference data between the picture clusters as the difference data between the picture information elements.
Optionally, the apparatus further comprises:
a searching module, configured to search, before extracting feature data of an information element in the content information, associated information of the content information, where the associated information includes at least one of: a picture information element, a text information element, a video information element;
and the adding module is used for adding the associated information into the content information.
Optionally, the associated information includes a picture information element, and the apparatus further includes:
and the determining module is used for determining that the number of the picture information elements in the content information does not meet the preset requirement before the associated information of the content information is searched.
Optionally, the content information includes at least one of comment content information and video content information.
Optionally, when the content information is video content information, the apparatus further includes:
and the video frame extraction module is used for extracting a plurality of video frames in the video content information as the picture information elements before extracting the characteristic data of the information elements in the content information.
According to another aspect of the application, there is provided a computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the method according to one or more of the above when executing the computer program.
According to another aspect of the application, a computer-readable storage medium is provided, on which a computer program is stored which, when being executed by a processor, carries out the method according to one or more of the above.
According to the embodiment of the application, content information is acquired, the content information comprising a plurality of information elements that include at least one picture information element; feature data of the information elements in the content information is extracted; difference data between the picture information elements and other information elements is determined according to the feature data; and the feature type of the content information is determined according to the feature data and the difference data. By introducing the dimension of differences between pictures, the feature type can be analyzed over the content information as a whole, which avoids the low accuracy of analyzing features from a single picture and improves the accuracy of determining the feature type of the content information.
Furthermore, by searching for associated information of the content information and adding the associated information into the content information, the acquired content information is supplemented. This alleviates the problem of insufficient information elements in the content information, which could otherwise make the approach inapplicable; alternatively, increasing the number of information elements adds dimensions for the feature analysis of the content information and further improves its accuracy.
The foregoing description is only an overview of the technical solutions of the present application. In order to make the technical means of the present application more clearly understood so that they can be implemented according to this description, and to make the above and other objects, features and advantages of the present application more readily understandable, a detailed description of the present application is given below.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the application. Also, like reference numerals are used to refer to like parts throughout the drawings. In the drawings:
FIG. 1 shows a schematic diagram of a picture recognition process;
fig. 2 is a flowchart illustrating an embodiment of a picture recognition method according to a first embodiment of the present application;
FIG. 3 shows a schematic diagram of a risk identification process for merchandise reviews;
FIG. 4 is a flowchart illustrating an embodiment of a picture recognition method according to a second embodiment of the present application;
FIG. 5 is a block diagram illustrating an embodiment of a picture recognition apparatus according to a third embodiment of the present application;
fig. 6 illustrates an exemplary system that can be used to implement the various embodiments described in this disclosure.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
To enable those skilled in the art to better understand the present application, the following description is made of the concepts related to the present application:
the content information includes information in the form of pictures, texts, videos, audios, and the like, and in the present application, the content information is composed of a plurality of information elements and the plurality of information elements includes at least one picture information element. The information element may include information elements in the form of pictures, texts, videos, audios, and the like, or any other suitable form, which is not limited in this application. The content information may have a single form of information element or may have a plurality of forms of information elements. There may be one or more of the various information elements.
For example, in an e-commerce transaction, a commodity evaluation submitted for a transaction commodity belongs to a kind of content information and may include picture information elements, text information elements and the like; the content of the question-and-answer area on a commodity detail page also belongs to a kind of content information. Or, in a video network platform, videos such as movies and television series belong to a kind of content information; such a video is composed of a sequence of video frames, its video frames can be used as picture information elements, and its audio can be used as an audio information element.
The feature data includes various data used to represent the information elements. For example, for picture information elements, a vector space model may be used to represent each picture information element, i.e. each picture information element is vectorized; any other suitable feature data may also be used, which is not limited in this embodiment of the present application. The description information of an information element can also be regarded as a kind of feature data: for example, the subject words of a text information element are a kind of description information; the subject of a text can be obtained using a TextCNN (text convolutional neural network) model, and since the subject words describe a characteristic of the text, they can be regarded as a kind of feature data of the text information element. Any applicable description information can be included, which is not limited in the embodiment of the present application.
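As a concrete, non-authoritative sketch of the vectorization mentioned above: the application does not name a particular model, so the following assumes a pretrained torchvision ResNet-50 backbone as the feature extractor, and the function name is hypothetical.

```python
# Sketch: vectorize a picture information element with a pretrained CNN.
# ResNet-50 is an assumption; the application does not name a specific model.
import torch
from torchvision import models, transforms
from PIL import Image

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Drop the classification head so the backbone outputs a feature vector.
backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()
backbone.eval()

def vectorize_picture(path: str) -> torch.Tensor:
    """Return a 2048-dimensional feature vector for one picture information element."""
    image = Image.open(path).convert("RGB")
    batch = preprocess(image).unsqueeze(0)   # shape (1, 3, 224, 224)
    with torch.no_grad():
        return backbone(batch).squeeze(0)    # shape (2048,)
```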
The difference data between the information elements is used to represent the size of the difference between the information elements, and specifically, the difference data may be obtained by comparing the feature data of the information elements, or any other suitable manner, which is not limited in this application. The difference data may be difference data between two information elements, or difference data between multiple information elements, where the difference data between multiple information elements may include a sum or an average of the difference data between two information elements, and the like, which is not limited in this embodiment of the present application.
In the present application, in order to overcome the problem that it is difficult to identify a malicious picture by relying on a single picture information element, the difference data to be determined is between the picture information element and other picture information elements or information elements in a non-picture form. The greater the difference between the picture information element and the other information elements, the greater the likelihood that the picture information element is a malicious picture, and in turn, the greater the likelihood that the content information is malicious information. Conversely, the smaller the difference between the picture information element and the other information elements, the less likely the picture information element is a malicious picture, and consequently the less likely the content information is malicious information.
For example, for difference data between picture information elements, the picture information elements may be represented by vectors. One way is to obtain the difference data through a cosine similarity calculation: the cosine of the angle between the vectors of two picture information elements evaluates their similarity, from which the difference data between the two picture information elements is obtained. Another way is to use the Pearson correlation coefficient: the quotient of the covariance of the two vectors and the product of their standard deviations evaluates the correlation between the two picture information elements, from which the difference data is obtained.
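A minimal sketch of these two calculations, assuming the picture information elements have already been vectorized as NumPy arrays; turning a similarity or correlation into a difference value by subtracting it from 1 is an assumption for illustration, not a formula given by the application.

```python
import numpy as np

def cosine_difference(a: np.ndarray, b: np.ndarray) -> float:
    """Difference data from cosine similarity (0 means identical direction)."""
    similarity = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return 1.0 - similarity

def pearson_difference(a: np.ndarray, b: np.ndarray) -> float:
    """Difference data from the Pearson correlation coefficient, i.e.
    covariance divided by the product of the standard deviations."""
    correlation = np.corrcoef(a, b)[0, 1]
    return 1.0 - correlation

# Example: two picture feature vectors.
v1 = np.array([0.2, 0.8, 0.1, 0.5])
v2 = np.array([0.1, 0.9, 0.2, 0.4])
print(cosine_difference(v1, v2), pearson_difference(v1, v2))
```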
The content information can be classified into a certain feature type according to the feature data and the difference data of its information elements. The feature type is used to represent a characteristic of the content information in a certain dimension. For example, commodity evaluations may be classified into advertisement evaluations and normal evaluations, into malicious and non-malicious evaluations, or into multiple types such as serious violation, slight violation, suspected violation and non-violation. For a video, the feature types may be, for example, advertisement replaced or not replaced, face swapped or not swapped, image altered or not altered, or video altered or not altered. Any applicable feature type may be included, which is not limited in this embodiment of the present application.
In an alternative embodiment of the present application, in order to determine the feature type of the content information, the degree to which an information element exhibits such a feature dimension may be determined first and recorded as a feature value. The feature value of an information element characterizes the degree of the information element on the corresponding feature. Taking maliciousness as the feature as an example, on a numerical interval of 0-1, the larger the value, the more serious the maliciousness; a maliciousness of 0.6 for a certain information element indicates a relatively serious degree of maliciousness. It will be appreciated that text or symbols may also be employed to represent feature values, for example ten maliciousness levels, with levels 1-10 corresponding to the 0-1 numerical interval above.
For this purpose, a characteristic value recognition model can be used for the determination, into which characteristic data of the information element are input, and which is calculated to output a characteristic value of the information element. The characteristic value of the information element can be output because the characteristic value recognition model adopts a supervised learning mode and trains the model according to a large number of information element samples marked with characteristic types, so that the characteristic value recognition model can evaluate the characteristic value of the information element.
For example, a large number of picture information elements from advertisement evaluations and normal evaluations are collected and labeled with the two feature types "advertisement evaluation" and "normal evaluation", and a binary classification model is trained on them in a supervised manner. The binary classification model can assign a picture information element to one of the two feature types; however, instead of requiring the model to output a definite feature type, a parameter within the model that represents the degree of the picture information element on the feature type is output and used as the feature value of the picture information element.
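For illustration only, the following sketch trains a binary classifier over labeled picture feature vectors and uses its predicted probability, rather than the hard class, as the feature value; the choice of logistic regression and the random placeholder data are assumptions, not the application's prescribed model.

```python
# Sketch: binary classifier whose predicted probability serves as the
# feature value. Logistic regression and the random placeholder data are
# assumptions for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

X = np.random.rand(200, 2048)            # feature vectors of labeled pictures
y = np.random.randint(0, 2, size=200)    # 1 = advertisement, 0 = normal

model = LogisticRegression(max_iter=1000).fit(X, y)

def feature_value(picture_vector: np.ndarray) -> float:
    """Degree (0-1) to which a picture exhibits the 'advertisement' feature."""
    return float(model.predict_proba(picture_vector.reshape(1, -1))[0, 1])
```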
In an alternative embodiment of the present application, when the content information includes a text information element, in order to determine difference data between the picture information element and the text information element, description information of the information element is obtained first, and the description information also belongs to a kind of feature data.
For example, the description information of a text information element may be obtained by using a TextCNN model or a TF-IDF (term frequency-inverse document frequency) algorithm to extract the subject words of the text as the description information. The description information of a picture information element may be obtained by using a Mask R-CNN (Mask Region-based Convolutional Neural Network) model or a picture scene recognition model to identify the main target or scene contained in the picture as the description information. The description information of an information element may be obtained in any suitable manner, which is not limited in the embodiment of the present application.
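A small sketch of the TF-IDF option mentioned above for obtaining subject words of a text information element; the corpus and the top-3 cutoff are illustrative assumptions.

```python
# Sketch: obtain subject words of a text information element with TF-IDF.
from sklearn.feature_extraction.text import TfidfVectorizer

corpus = [
    "these cotton socks are soft and warm",          # the text information element
    "fast delivery and friendly customer service",   # reference texts for IDF
    "the phone case fits well and looks great",
]

vectorizer = TfidfVectorizer()
tfidf = vectorizer.fit_transform(corpus)

# Top-3 subject words of the first text by TF-IDF weight.
terms = vectorizer.get_feature_names_out()
weights = tfidf.toarray()[0]
subject_words = [terms[i] for i in weights.argsort()[::-1][:3]]
print(subject_words)
```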
In an optional embodiment of the present application, the content information may have related information in an application system or a data system, which is denoted as associated information. The associated information includes at least one of: a picture information element, a text information element, a video information element, or any other suitable form, which is not limited in this application.
For example, for content information such as a product review, the product targeted by the review has detail information in the e-commerce platform, and this detail information is the associated information of the product review. It includes text describing the commodity, display pictures of the commodity, and even demonstration videos of how the commodity is used. When the product review is stored, the commodity ID (identification) is stored correspondingly; a query by the commodity ID can locate the detail information of the commodity, from which the required information elements can be extracted. Any suitable searching manner can be adopted, which is not limited in the embodiment of the present application.
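The lookup-and-extract step could look roughly like the following sketch; every function and field name here (find_detail_by_item_id, item_id, pictures) is hypothetical and stands in for the actual e-commerce data system interface.

```python
# Sketch: supplement a product review with pictures from its detail page,
# looked up by the stored commodity ID. All names here are hypothetical.
def find_detail_by_item_id(item_id: str) -> dict:
    """Placeholder for a query against the e-commerce data system."""
    return {"description": "cotton socks", "pictures": ["p1.jpg", "p2.jpg"]}

def supplement_review(review: dict, required_pictures: int = 2) -> dict:
    """Add detail-page pictures when the review has too few picture elements."""
    if len(review["pictures"]) < required_pictures:
        detail = find_detail_by_item_id(review["item_id"])
        review["pictures"].extend(detail["pictures"])
    return review

review = {"item_id": "12345", "text": "nice and warm", "pictures": ["buyer.jpg"]}
print(supplement_review(review))
```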
In an optional embodiment of the present application, when the content information is video content information, that is, the obtained content information is data in a video format, the video is composed of a sequence of video frames, and the video frames may be regarded as picture information elements.
For example, the scheme of the application can be used in tamper-detection scenarios: in film and television works, an advertiser places an advertised commodity in a certain scene of the video, but an infringer processes the video so that the advertised commodity is replaced by another commodity for part of the time. The scheme can also be used in video advertisement detection, that is, detecting whether the video contains frames with embedded advertisements, or in personal privacy detection, that is, detecting whether the video contains frames with content involving personal privacy.
In one implementation manner for the application scenario: the method comprises the steps of obtaining video frames of different times in a scene in a video, taking the obtained video frames as picture information elements, extracting feature data of the picture information elements, determining difference data among the picture information elements, and then determining whether the video is a tampered video or a video implanted with an advertisement or a video related to personal privacy or not according to the feature data and the difference data.
In another implementation for these application scenarios, the content information comprises the video to be detected and sample video frames (such as tampered video frames, video frames with embedded advertisements, video frames involving personal privacy, and the like). The video frame sequence to be detected is input for tampering or advertisement detection; feature data of the video frames to be detected and of the sample video frames are extracted, difference data between the video frames to be detected and the sample video frames is determined, and it is then determined, according to the feature data and the difference data, whether the video is a tampered video, a video with embedded advertisements, or a video involving personal privacy. The sample video frames may be positive samples or negative samples. For example, if a positive sample is an untampered video frame, the negative sample is a tampered video frame; or positive samples are video frames with advertisements embedded and negative samples are video frames without embedded advertisements.
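A brief sketch of extracting video frames as picture information elements, assuming OpenCV and a one-frame-per-second sampling rate (the application does not fix a sampling strategy); the sampled frames would then be vectorized and compared to each other or to sample frames as described above.

```python
# Sketch: extract video frames as picture information elements with OpenCV.
# The one-frame-per-second sampling rate is an assumption.
import cv2

def extract_frames(video_path: str, every_n_seconds: float = 1.0):
    """Yield frames sampled from the video; each frame is a numpy image array."""
    capture = cv2.VideoCapture(video_path)
    fps = capture.get(cv2.CAP_PROP_FPS) or 25.0
    step = max(int(fps * every_n_seconds), 1)
    index = 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        if index % step == 0:
            yield frame          # vectorize and compare to sample frames downstream
        index += 1
    capture.release()
```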
According to an embodiment of the application, when evaluation content includes a malicious picture, for example a picture containing various kinds of hidden advertisement information, identifying the malicious pattern based on a single picture suffers from low accuracy. As shown in the schematic diagram of the picture recognition process in fig. 1, the present application provides a picture recognition mechanism: content information is obtained, the content information comprising a plurality of information elements that include at least one picture information element; feature data of the information elements in the content information is extracted; difference data between the picture information elements and other information elements is determined according to the feature data; and the feature type of the content information is determined according to the feature data and the difference data. By introducing the dimension of differences between pictures, the feature type can be analyzed over the content information as a whole, which avoids the low accuracy of analyzing features from a single picture and improves the accuracy of determining the feature type of the content information. The present application is applicable to, but not limited to, the above application scenarios.
Referring to fig. 2, a flowchart of an embodiment of a picture identification method according to a first embodiment of the present application is shown, where the method specifically includes the following steps:
step 101, content information is obtained, wherein the content information comprises a plurality of information elements, and the plurality of information elements comprise at least one picture information element.
In the embodiment of the application, in order to solve the problem that malicious pictures are difficult to identify, content information of a feature type to be determined is obtained first, wherein the content information is composed of a plurality of information elements and comprises at least one picture information element.
For example, in e-commerce transactions, when a user submits a commodity evaluation for a transaction commodity, the commodity evaluation is acquired from the e-commerce data system and analyzed before being displayed on the review page, so that problematic commodity evaluations are not published to the review page. Or, in a video network platform, after a user uploads a self-made video and before the video is provided to other users for watching, the video is first obtained from the data system of the video network platform for analysis; of course, for video content information, video frames need to be extracted from the video as picture information elements.
And 102, extracting characteristic data of information elements in the content information.
In the embodiment of the present application, the feature data of the information elements in the content information can be extracted in multiple ways. For a picture information element, the picture information element may be vectorized and the obtained vector used as its feature data; if difference data between the picture information element and a text information element needs to be determined subsequently, the picture information element may be subjected to target recognition or scene recognition to obtain description information, such as the main target or the scene in the picture, as feature data. For a text information element, description information such as the subject words of the text may be recognized as feature data. Any suitable extraction method can be adopted, which is not limited in the embodiment of the application.
For example, as shown in fig. 3, a schematic diagram of a risk identification process for a product review includes 1 text information element and 5 picture information elements. And vectorizing each picture information element to obtain a vector corresponding to each picture information element as the characteristic data of the picture information element. The subject word of the text information element is recognized as feature data of the text information element.
Step 103, determining difference data between the picture information element and other information elements according to the feature data.
In the embodiment of the present application, difference data between the picture information element and other information elements may be determined according to the feature data of the information elements. When the content information includes a plurality of picture information elements, the picture information elements and other information elements include the picture information elements and other picture information elements, and may also include the picture information elements and other information elements in a non-picture form. When the content information includes one picture information element, the picture information element and the other information elements include the picture information element and the other information elements in a non-picture form.
In the embodiment of the present application, the implementation manner of determining the difference data between the picture information elements and other information elements may include multiple manners, for example, comparing the feature data between the picture information elements to obtain the difference data between the picture information elements; or clustering a plurality of picture information elements according to the characteristic data, and calculating difference data among picture clusters to serve as the difference data among the picture information elements; or determining difference data between the picture information element and the text information element according to the description information of the picture information element and the description information of the text information element, or any other suitable implementation manner, which is not limited in the embodiment of the present application.
For example, as shown in fig. 3, the 5 picture information elements are compared pairwise; by comparing the feature data corresponding to the picture information elements, difference degree values, that is, difference data, are obtained between pairs of the 5 picture information elements. The obtained difference data may be used directly, or the one or more difference data indicating the largest difference may be selected for use, or an average value of the difference data may be calculated to obtain an overall difference data for use, or any other suitable manner may be adopted, which is not limited in this embodiment of the present application.
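The pairwise comparison and the aggregation options just listed might be sketched as follows, assuming cosine-based difference data over picture feature vectors (the application does not prescribe the comparison function).

```python
# Sketch: pairwise differences among picture feature vectors and the
# aggregation options described above (largest value or average).
from itertools import combinations
import numpy as np

def cosine_difference(a, b):
    return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def pairwise_differences(vectors):
    """Difference data for every pair of picture feature vectors."""
    return [cosine_difference(a, b) for a, b in combinations(vectors, 2)]

def aggregate(differences, mode="max"):
    """Overall difference data: the largest pairwise value or their mean."""
    return max(differences) if mode == "max" else float(np.mean(differences))

vectors = [np.random.rand(2048) for _ in range(5)]   # e.g. 5 picture elements
overall = aggregate(pairwise_differences(vectors), mode="mean")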
And 104, determining the characteristic type of the content information according to the characteristic data and the difference data.
In the embodiment of the present application, the content information may be classified into a certain feature type according to the feature data of the information element and the difference data obtained in the previous step. Specifically, the determination may be performed according to feature data of all information elements in the content information and difference data obtained in the previous step, or may be performed according to feature data of some information elements in the content information and difference data obtained in the previous step, which is not limited in this embodiment of the present application.
In this embodiment of the present application, the implementation manner for determining the feature type of the content information may include multiple implementations, for example, the feature data of the information element is used as an input, the feature value of the information element is determined according to the feature value recognition model, the feature type of the content information is determined according to the feature value and the difference data, or any other suitable implementation manner, which is not limited in this embodiment of the present application.
For example, as shown in fig. 3, the feature data of the 1 text information element in a product review, that is, its subject words, is input into a risk value recognition model for text to obtain the risk value of the text information element. A weighted average is calculated from this risk value and the difference degree value obtained in the previous step to obtain a comprehensive risk value, and if the comprehensive risk value exceeds a predetermined threshold, the product review is determined to belong to a risky feature type.
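A minimal sketch of this weighted-average decision; the weights and the threshold below are illustrative assumptions rather than values given by the application.

```python
# Sketch of the weighted-average decision above. The weights and the
# threshold are illustrative assumptions, not values from the application.
def comprehensive_risk(text_risk: float, picture_difference: float,
                       w_text: float = 0.5, w_diff: float = 0.5) -> float:
    return w_text * text_risk + w_diff * picture_difference

RISK_THRESHOLD = 0.6

risk = comprehensive_risk(text_risk=0.4, picture_difference=0.9)
is_risky = risk > RISK_THRESHOLD   # True -> the review is of the risky feature type
```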
According to the embodiment of the application, content information is acquired, the content information comprising a plurality of information elements that include at least one picture information element; feature data of the information elements in the content information is extracted; difference data between the picture information elements and other information elements is determined according to the feature data; and the feature type of the content information is determined according to the feature data and the difference data. By introducing the dimension of differences between pictures, the feature type can be analyzed over the content information as a whole, which avoids the low accuracy of analyzing features from a single picture and improves the accuracy of determining the feature type of the content information.
Referring to fig. 4, a flowchart of an embodiment of a picture identification method according to the second embodiment of the present application is shown, where the method specifically includes the following steps:
step 201, content information is obtained, where the content information includes a plurality of information elements, and the plurality of information elements includes at least one picture information element.
In the embodiment of the present application, a specific implementation manner of this step may refer to the description in the foregoing embodiment, and is not described herein again.
In the embodiment of the present application, optionally, the content information includes at least one of comment content information and video content information. For example, in an e-commerce transaction, a product evaluation submitted by a user for a product belongs to a kind of comment content information. In a video network platform, a self-made video uploaded by a user belongs to video content information. Specifically, any applicable comment content information or video content information may be used, which is not limited in the embodiment of the present application.
Step 202, searching for associated information of the content information, where the associated information includes at least one of: picture information elements, text information elements, video information elements.
In the embodiment of the present application, before extracting the feature data of the information element in the content information, the associated information of the content information may also be searched. For example, for a product review, detail information of a product targeted by the product review is searched, a picture in the detail information is determined as associated information, and the search can be specifically performed according to actual needs, which is not limited in the embodiment of the present application.
In this embodiment of the application, optionally, in some implementation scenarios, difference data between the picture information elements is needed, and the associated information to be searched includes picture information elements. Before searching for the associated information of the content information, the method may further include: determining that the number of picture information elements in the content information does not meet a preset requirement. The preset required number can be set according to actual needs, which is not limited in the embodiment of the application.
For example, a commodity comment only includes 1 picture information element while the preset required number is 2; when it is determined that the number of picture information elements in the commodity comment does not meet the preset requirement, the picture information elements in the content information are insufficient. Pictures in the detail information of the commodity that the comment targets are then searched as associated information and supplemented into the content information.
Step 203, adding the related information to the content information.
In the embodiment of the application, the searched associated information is added into the content information, so that the acquired content information is supplemented. This alleviates the problem of insufficient information elements in the content information, which could otherwise make the approach inapplicable; alternatively, increasing the number of information elements adds dimensions for the feature analysis of the content information and further improves its accuracy.
And 204, extracting the characteristic data of the information elements in the content information.
In the embodiment of the present application, a specific implementation manner of this step may refer to the description in the foregoing embodiment, and is not described herein again.
Step 205, clustering a plurality of picture information elements according to the feature data.
In the embodiment of the present application, when calculating difference data between a plurality of picture information elements, the plurality of picture information elements may first be divided into several classes, each composed of similar picture information elements; this process is referred to as clustering.
In the embodiment of the present application, clustering the feature data of the plurality of picture information elements is equivalent to clustering the picture information elements themselves. A cluster generated by clustering is a set of picture information elements: the picture information elements within the same cluster are similar to each other and different from the picture information elements in other clusters. The set of picture information elements in one cluster is denoted as a picture cluster. For example, 5 picture information elements in a commodity comment are clustered, 3 of them belonging to one picture cluster and the other 2 belonging to another picture cluster.
In step 206, difference data between the picture clusters is calculated as difference data between the picture information elements.
In the embodiment of the present application, after clustering is performed, difference data between picture clusters may be calculated as difference data between picture information elements. The implementation manner of calculating the difference data between the picture clusters may include multiple manners, for example, calculating the difference data between each picture information element in one picture cluster and each picture information element in another picture cluster to obtain multiple difference data, and then calculating an average value of the multiple difference data to obtain the difference data between two picture clusters. Any suitable calculation method may be specifically included, and this is not limited in this embodiment of the present application.
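A short sketch of steps 205-206, assuming KMeans with two clusters (the application does not name a clustering algorithm) and the average cross-cluster pairwise difference described above.

```python
# Sketch of steps 205-206: cluster picture vectors (KMeans is an assumption;
# the application names no algorithm) and use the average cross-cluster
# pairwise difference as difference data between picture information elements.
import numpy as np
from sklearn.cluster import KMeans

def cluster_difference(vectors: np.ndarray, n_clusters: int = 2) -> float:
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(vectors)
    cluster_a = vectors[labels == 0]
    cluster_b = vectors[labels == 1]
    diffs = [1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
             for a in cluster_a for b in cluster_b]
    return float(np.mean(diffs))

vectors = np.random.rand(5, 2048)        # e.g. 5 picture information elements
print(cluster_difference(vectors))
```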
In this embodiment of the application, optionally, the content information includes a text information element, the feature data includes description information, and one implementation manner of determining difference data between the picture information element and other information elements according to the feature data may include: and determining difference data between the picture information element and the text information element according to the description information of the picture information element and the description information of the text information element.
The description information of the picture information element and the description information of the text information element are compared, and difference data between the picture information element and the text information element can be obtained. The difference between the description information may characterize the difference between the picture information element and the text information element. For example, text vectorization is performed on two pieces of description information, difference data can be obtained by calculating the distance between vectors corresponding to the two pieces of description information, the description information of the picture information element is "sock", and the difference data obtained when the description information of the text information element is "cotton" is smaller than the difference data obtained when the description information of the text information element is "plastic".
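To make the sock/cotton/plastic example concrete, the following sketch compares description information after text vectorization; the tiny embedding table is a hand-made stand-in for a real embedding model, chosen only so that related descriptions yield smaller difference data.

```python
# Sketch: difference data between description information after text
# vectorization. The tiny embedding table is a hand-made stand-in for a
# real embedding model, chosen only to illustrate the example above.
import numpy as np

EMBEDDINGS = {                      # hypothetical 2-d word embeddings
    "sock":    np.array([1.0, 0.2]),
    "cotton":  np.array([0.9, 0.3]),
    "plastic": np.array([0.1, 1.0]),
}

def description_difference(word_a: str, word_b: str) -> float:
    a, b = EMBEDDINGS[word_a], EMBEDDINGS[word_b]
    return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# "sock" vs "cotton" yields smaller difference data than "sock" vs "plastic".
assert description_difference("sock", "cotton") < description_difference("sock", "plastic")
```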
And step 207, determining the characteristic value of the information element according to a characteristic value recognition model by taking the characteristic data of the information element as input.
In the embodiment of the application, when the characteristic value recognition model is used for determining the characteristic value of the information element, the input data is the characteristic data of the information element, and the characteristic value recognition model can output the characteristic value of the information element through calculation.
In this embodiment of the application, optionally, before determining the feature value of the information element according to a feature value recognition model by using the feature data of the information element as an input, the method may further include: training the characteristic value recognition model by adopting information element samples and the correspondingly labeled characteristic types.
The characteristic value recognition model needs to be trained to accurately recognize the characteristic value of the information element. Collecting a large number of information element samples, marking the characteristic types corresponding to all the information elements, inputting the information element samples into a characteristic value recognition model, and continuously learning and updating parameters in the characteristic value recognition model until the required performance is achieved.
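The training procedure could be sketched as follows, assuming a scikit-learn classifier, random placeholder samples, and an illustrative accuracy target as the "required performance".

```python
# Sketch of the training step: fit the feature value recognition model on
# labeled information element samples until the required performance is met.
# The model choice, data split and accuracy target are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X = np.random.rand(500, 2048)             # feature data of element samples
y = np.random.randint(0, 2, size=500)     # labeled feature types (0 / 1)

X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

required_accuracy = 0.9                   # illustrative performance target
if accuracy_score(y_val, model.predict(X_val)) < required_accuracy:
    pass  # collect more samples or adjust the model, then retrain
```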
Step 208, determining the characteristic type of the content information according to the characteristic value and the difference data.
In the embodiment of the application, after the characteristic value is obtained by adopting the characteristic value recognition model, the characteristic value and the difference data are integrated to determine the characteristic type of the content information. For example, after performing weighted average calculation on the feature value and the difference data to obtain a comprehensive value, if the comprehensive value exceeds a preset threshold, it is determined that the content information is of one feature type, and if the comprehensive value does not exceed the preset threshold, it is determined that the content information is of another feature type.
According to an embodiment of the application, content information is obtained, the content information comprising a plurality of information elements that include at least one picture information element; feature data of the information elements in the content information is extracted; the plurality of picture information elements is clustered according to the feature data, and difference data between the picture clusters is calculated as the difference data between the picture information elements; the feature value of each information element is determined by taking its feature data as input to the feature value recognition model; and the feature type of the content information is determined according to the feature values and the difference data. By introducing the dimension of differences between pictures, the feature type can be analyzed over the content information as a whole, which avoids the low accuracy of analyzing features from a single picture and improves the accuracy of determining the feature type of the content information.
Furthermore, by searching for associated information of the content information and adding the associated information into the content information, the acquired content information is supplemented. This alleviates the problem of insufficient information elements in the content information, which could otherwise make the approach inapplicable; alternatively, increasing the number of information elements adds dimensions for the feature analysis of the content information and further improves its accuracy.
Referring to fig. 5, a block diagram of a structure of an embodiment of an image recognition apparatus according to a third embodiment of the present application is shown, which may specifically include:
an information obtaining module 301, configured to obtain content information, where the content information includes a plurality of information elements, and the plurality of information elements includes at least one picture information element;
a data extraction module 302, configured to extract feature data of information elements in the content information;
a difference determining module 303, configured to determine, according to the feature data, difference data between the picture information element and other information elements;
a type determining module 304, configured to determine a feature type of the content information according to the feature data and the difference data.
In this embodiment of the application, optionally, the type determining module includes:
the characteristic value determining submodule is used for determining the characteristic value of the information element by taking the characteristic data of the information element as input according to a characteristic value recognition model;
and the type determining submodule is used for determining the characteristic type of the content information according to the characteristic value and the difference data.
In this embodiment of the present application, optionally, the apparatus further includes:
and the training module is used for training the characteristic value recognition model by adopting information element samples and the correspondingly labeled characteristic types before determining the characteristic value of the information element according to the characteristic value recognition model by taking the characteristic data of the information element as input.
In this embodiment of the application, optionally, the content information includes a text information element, the feature data includes description information, and the difference determining module includes:
and the difference determining submodule is used for determining difference data between the picture information element and the text information element according to the description information of the picture information element and the description information of the text information element.
In this embodiment of the application, optionally, the difference determining module includes:
and the comparison submodule is used for comparing the characteristic data among the picture information elements to obtain the difference data among the picture information elements.
In this embodiment of the application, optionally, the difference determining module includes:
the clustering submodule is used for clustering a plurality of picture information elements according to the characteristic data;
and the calculating sub-module is used for calculating difference data between the picture clusters as the difference data between the picture information elements.
In this embodiment of the present application, optionally, the apparatus further includes:
a searching module, configured to search, before extracting feature data of an information element in the content information, associated information of the content information, where the associated information includes at least one of: a picture information element, a text information element, a video information element;
and the adding module is used for adding the associated information into the content information.
In this embodiment of the application, optionally, the associated information includes a picture information element, and the apparatus further includes:
and the determining module is used for determining that the number of the picture information elements in the content information does not meet the preset requirement before the associated information of the content information is searched.
In the embodiment of the present application, optionally, the content information includes at least one of comment content information and video content information.
In this embodiment of the application, optionally, when the content information is video content information, the apparatus further includes:
and the video frame extraction module is used for extracting a plurality of video frames in the video content information as the picture information elements before extracting the characteristic data of the information elements in the content information.
According to the embodiment of the application, content information is acquired, the content information comprising a plurality of information elements that include at least one picture information element; feature data of the information elements in the content information is extracted; difference data between the picture information elements and other information elements is determined according to the feature data; and the feature type of the content information is determined according to the feature data and the difference data. By introducing the dimension of differences between pictures, the feature type can be analyzed over the content information as a whole, which avoids the low accuracy of analyzing features from a single picture and improves the accuracy of determining the feature type of the content information.
Since the device embodiment is basically similar to the method embodiment, its description is relatively brief; for relevant details, refer to the description of the method embodiment.
Embodiments of the disclosure may be implemented as a system using any suitable hardware, firmware, software, or any combination thereof, in a desired configuration. Fig. 6 schematically illustrates an exemplary system (or apparatus) 700 that can be used to implement various embodiments described in this disclosure.
For one embodiment, Fig. 6 illustrates an exemplary system 700 having one or more processors 702, a system control module (chipset) 704 coupled to at least one of the processor(s) 702, a system memory 706 coupled to the system control module 704, a non-volatile memory (NVM)/storage 708 coupled to the system control module 704, one or more input/output devices 710 coupled to the system control module 704, and a network interface 712 coupled to the system control module 704.
The processor 702 may include one or more single-core or multi-core processors, and the processor 702 may include any combination of general-purpose or special-purpose processors (e.g., graphics processors, application processors, baseband processors, etc.). In some embodiments, the system 700 can function as a browser as described in embodiments herein.
In some embodiments, system 700 may include one or more computer-readable media (e.g., system memory 706 or NVM/storage 708) having instructions, and one or more processors 702 that, in combination with the one or more computer-readable media, are configured to execute the instructions to implement modules that perform the actions described in this disclosure.
For one embodiment, system control module 704 may include any suitable interface controllers to provide any suitable interface to at least one of processor(s) 702 and/or any suitable device or component in communication with system control module 704.
The system control module 704 may include a memory controller module to provide an interface to the system memory 706. The memory controller module may be a hardware module, a software module, and/or a firmware module.
System memory 706 may be used to load and store data and/or instructions for system 700, for example. For one embodiment, system memory 706 may include any suitable volatile memory, such as suitable DRAM. In some embodiments, the system memory 706 may include a double data rate type four synchronous dynamic random access memory (DDR4 SDRAM).
For one embodiment, system control module 704 may include one or more input/output controllers to provide an interface to NVM/storage 708 and input/output device(s) 710.
For example, NVM/storage 708 may be used to store data and/or instructions. NVM/storage 708 may include any suitable non-volatile memory (e.g., flash memory) and/or may include any suitable non-volatile storage device(s) (e.g., one or more hard disk drive(s) (HDD(s)), one or more Compact Disc (CD) drive(s), and/or one or more Digital Versatile Disc (DVD) drive(s)).
NVM/storage 708 may include storage resources that are physically part of the device on which system 700 is installed, or storage resources that are accessible by the device but are not necessarily part of the device. For example, NVM/storage 708 may be accessible over a network via input/output device(s) 710.
Input/output device(s) 710 may provide an interface for system 700 to communicate with any other suitable device; input/output device(s) 710 may include communication components, audio components, sensor components, and the like. Network interface 712 may provide an interface for system 700 to communicate over one or more networks. System 700 may communicate wirelessly with one or more components of a wireless network according to any of one or more wireless network standards and/or protocols, for example by accessing a wireless network based on a communication standard such as WiFi, 2G, 3G, 4G, or 5G, or a combination thereof.
For one embodiment, at least one of the processor(s) 702 may be packaged together with logic for one or more controller(s) (e.g., memory controller module) of system control module 704. For one embodiment, at least one of the processor(s) 702 may be packaged together with logic for one or more controller(s) of system control module 704 to form a System In Package (SiP). For one embodiment, at least one of the processor(s) 702 may be integrated on the same die with logic for one or more controller(s) of system control module 704. For one embodiment, at least one of the processor(s) 702 may be integrated on the same die with logic for one or more controller(s) of system control module 704 to form a system on a chip (SoC).
In various embodiments, system 700 may be, but is not limited to being: a browser, a workstation, a desktop computing device, or a mobile computing device (e.g., a laptop computing device, a handheld computing device, a tablet, a netbook, etc.). In various embodiments, system 700 may have more or fewer components and/or different architectures. For example, in some embodiments, system 700 includes one or more cameras, a keyboard, a Liquid Crystal Display (LCD) screen (including a touch screen display), a non-volatile memory port, multiple antennas, a graphics chip, an Application Specific Integrated Circuit (ASIC), and speakers.
If the display includes a touch panel, the display screen may be implemented as a touch screen display to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touches, slides, and gestures on the touch panel. The touch sensors may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation.
The present application further provides a non-volatile readable storage medium, where one or more modules (programs) are stored. When the one or more modules are applied to a terminal device, they may cause the terminal device to execute the instructions for the method steps described in the present application.
In one example, a computer device is provided, comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the method according to the embodiments of the present application when executing the computer program.
In one example, there is also provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements a method as described in one or more of the embodiments of the application.
The embodiments of the application disclose a picture identification method and a picture identification device. Example 1 includes a picture identification method, comprising:
acquiring content information, wherein the content information comprises a plurality of information elements, and the plurality of information elements comprise at least one picture information element;
extracting characteristic data of information elements in the content information;
determining difference data between the picture information element and other information elements according to the characteristic data;
and determining the characteristic type of the content information according to the characteristic data and the difference data.
Example 2 may include the method of example 1, wherein the determining the feature type of the content information from the feature data and the difference data comprises:
determining the characteristic value of the information element by taking the characteristic data of the information element as input according to a characteristic value recognition model;
and determining the characteristic type of the content information according to the characteristic value and the difference data.
Example 3 may include the method of example 1 and/or example 2, wherein, prior to the determining the feature value of the information element from a feature value recognition model with the feature data of the information element as input, the method further comprises:
and training the characteristic value recognition model by adopting the information element samples and the characteristic types of the corresponding marks.
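A possible reading of Examples 2 and 3, sketched under assumptions: the characteristic value recognition model is a plain logistic-regression classifier trained on labelled information element samples, the characteristic value is its probability output, and that value is merged with the difference data by a simple weighted rule. None of these choices are prescribed by the text.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Example 3: train the characteristic value recognition model on information
    # element samples and their corresponding marked feature types (toy data).
    X_train = np.array([[0.9, 0.1], [0.8, 0.2], [0.1, 0.9], [0.2, 0.8]])
    y_train = np.array([1, 1, 0, 0])  # 1 = suspicious, 0 = normal (assumed labels)
    model = LogisticRegression().fit(X_train, y_train)

    def content_feature_type(element_features, difference_data,
                             weight=0.5, threshold=0.6):
        # Example 2: the characteristic value of each information element is the
        # model's probability for the "suspicious" class, with the element's
        # feature data as input.
        values = model.predict_proba(np.asarray(element_features))[:, 1]
        # Assumed combination rule: average characteristic value, raised when the
        # inter-picture difference data is small (near-duplicate pictures).
        score = weight * values.mean() + (1 - weight) * (1.0 - difference_data)
        return "suspicious" if score > threshold else "normal"

    print(content_feature_type([[0.85, 0.15], [0.9, 0.1]], difference_data=0.05))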
Example 4 may include the method of one or more of examples 1-3, wherein the content information includes a textual information element, the characterization data includes descriptive information, and determining difference data between the pictorial information element and other information elements based on the characterization data includes:
and determining difference data between the picture information element and the text information element according to the description information of the picture information element and the description information of the text information element.
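To make Example 4 concrete, here is a small sketch; the idea that a picture's description information comes from, say, a captioning or tagging step, and the use of word-set Jaccard overlap as the comparison, are assumptions made only for illustration.

    def description_difference(picture_description, text_description):
        # Difference data between a picture information element and a text
        # information element: 1 minus the Jaccard overlap of their word sets.
        pic_words = set(picture_description.lower().split())
        txt_words = set(text_description.lower().split())
        if not pic_words or not txt_words:
            return 1.0
        return 1.0 - len(pic_words & txt_words) / len(pic_words | txt_words)

    # A picture described (e.g. by an assumed captioning model) as showing shoes,
    # attached to a review about a phone case, yields a large difference value.
    print(description_difference("red running shoes on a table",
                                 "the phone case fits well and looks great"))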
Example 5 may include the method of one or more of examples 1-4, wherein the determining difference data between the picture information element and other information elements from the feature data comprises:
and comparing the characteristic data among the picture information elements to obtain difference data among the picture information elements.
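One way Example 5's direct picture-to-picture comparison could look; the tiny average-hash feature below is an assumption, chosen only because near-duplicate pictures then differ in almost no hash bits.

    import numpy as np

    def average_hash(img, size=8):
        # A minimal perceptual hash: block-average the picture down to
        # size x size, then threshold each block against the overall mean.
        h, w = img.shape
        h -= h % size
        w -= w % size
        blocks = img[:h, :w].reshape(size, h // size, size, w // size).mean(axis=(1, 3))
        return (blocks > blocks.mean()).astype(np.uint8)

    def pair_difference(img_a, img_b):
        # Difference data between two picture information elements: the fraction
        # of hash bits that differ (0.0 means the pictures are near-identical).
        return float(np.mean(average_hash(img_a) != average_hash(img_b)))

    rng = np.random.default_rng(1)
    a = rng.integers(0, 256, (64, 64))
    b = rng.integers(0, 256, (64, 64))
    print(pair_difference(a, a.copy()), pair_difference(a, b))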
Example 6 may include the method of one or more of examples 1-5, wherein the determining difference data between the picture information element and other information elements from the feature data comprises:
clustering a plurality of picture information elements according to the characteristic data;
calculating difference data between the clusters of pictures as difference data between the picture information elements.
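A sketch of Example 6's clustering route, assuming scikit-learn's KMeans as the clustering step and the mean distance between cluster centres as the difference data between picture clusters; the number of clusters and the feature scale are illustrative.

    import numpy as np
    from sklearn.cluster import KMeans

    def cluster_difference(picture_features, n_clusters=2):
        # Cluster the picture information elements by their feature data, then
        # take the mean pairwise distance between cluster centres as the
        # difference data between the picture clusters.
        X = np.asarray(picture_features, dtype=float)
        km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(X)
        c = km.cluster_centers_
        dists = [np.linalg.norm(c[i] - c[j])
                 for i in range(n_clusters) for j in range(i + 1, n_clusters)]
        return float(np.mean(dists))

    # Two tight groups of near-duplicate pictures: small within-cluster spread,
    # and the centre-to-centre distance becomes the reported difference data.
    feats = [[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 4.9]]
    print(cluster_difference(feats))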
Example 7 may include the method of one or more of examples 1-6, wherein, prior to the extracting feature data of information elements in the content information, the method further comprises:
searching the associated information of the content information, wherein the associated information comprises at least one of the following: a picture information element, a text information element, a video information element;
and adding the associated information into the content information.
Example 8 may include the method of one or more of examples 1-7, wherein the associated information includes a picture information element, and prior to the locating the associated information of the content information, the method further includes:
and determining that the number of picture information elements in the content information does not meet the preset requirement.
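Examples 7 and 8 together could be sketched as below; the user_id key, the element dictionaries, the associated_store mapping, and the minimum picture count are all hypothetical names and values introduced only for the illustration.

    def needs_supplement(content, min_pictures=2):
        # Example 8's precondition: the number of picture information elements in
        # the content information does not meet a preset requirement.
        pictures = [e for e in content["elements"] if e["type"] == "picture"]
        return len(pictures) < min_pictures

    def supplement_content(content, associated_store):
        # Example 7: search associated information (picture, text, or video
        # elements) under the same key and add it to the content information.
        if needs_supplement(content):
            extra = associated_store.get(content["user_id"], [])
            content["elements"].extend(extra)
        return content

    review = {"user_id": "u42",
              "elements": [{"type": "text", "value": "perfect, five stars"}]}
    store = {"u42": [{"type": "picture", "value": "img_001.jpg"},
                     {"type": "picture", "value": "img_002.jpg"}]}
    print(supplement_content(review, store))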
Example 9 may include the method of one or more of examples 1-8, wherein the content information includes at least one of commentary content information, video content information.
Example 10 may include the method of one or more of examples 1-6, wherein when the content information is video content information, before the extracting feature data of information elements in the content information, the method further includes:
extracting a plurality of video frames in the video content information as the picture information element.
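As a sketch of Example 10, assuming OpenCV as one possible way to read the video (the sampling interval, frame cap, and file name are illustrative, not prescribed by the text):

    import cv2

    def sample_frames(video_path, every_n_seconds=1.0, max_frames=10):
        # Extract a handful of frames from the video content information so they
        # can be treated as ordinary picture information elements downstream.
        cap = cv2.VideoCapture(video_path)
        fps = cap.get(cv2.CAP_PROP_FPS) or 25.0  # fall back if FPS is unavailable
        step = max(int(fps * every_n_seconds), 1)
        frames, index = [], 0
        while len(frames) < max_frames:
            ok, frame = cap.read()
            if not ok:
                break
            if index % step == 0:
                frames.append(frame)  # each frame is a numpy image array
            index += 1
        cap.release()
        return frames

    # Usage (the file name is hypothetical):
    # picture_elements = sample_frames("review_clip.mp4")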
Example 11 includes a picture recognition apparatus, comprising:
the information acquisition module is used for acquiring content information, wherein the content information comprises a plurality of information elements, and the plurality of information elements comprise at least one picture information element;
the data extraction module is used for extracting the characteristic data of the information elements in the content information;
a difference determining module, configured to determine difference data between the picture information element and other information elements according to the feature data;
and the type determining module is used for determining the characteristic type of the content information according to the characteristic data and the difference data.
Example 12 may include the apparatus of example 11, wherein the type determination module comprises:
the characteristic value determining submodule is used for determining the characteristic value of the information element by taking the characteristic data of the information element as input according to a characteristic value recognition model;
and the type determining submodule is used for determining the characteristic type of the content information according to the characteristic value and the difference data.
Example 13 may include the apparatus of example 11 and/or example 12, wherein the apparatus further comprises:
and the training module is used for training the characteristic value recognition model by adopting an information element sample and the characteristic type of the corresponding mark before determining the characteristic value of the information element according to the characteristic value recognition model by taking the characteristic data of the information element as input.
Example 14 may include the apparatus of one or more of examples 11-13, wherein the content information includes a textual information element, the feature data includes descriptive information, and the discrepancy determining module includes:
a difference determining submodule, configured to determine difference data between the picture information element and the text information element according to the description information of the picture information element and the description information of the text information element.
Example 15 may include the apparatus of one or more of examples 11-14, wherein the discrepancy determining module comprises:
a comparison submodule, configured to compare the feature data of the picture information elements with one another to obtain the difference data between the picture information elements.
Example 16 may include the apparatus of one or more of examples 11-15, wherein the discrepancy determining module comprises:
a clustering submodule, configured to cluster a plurality of picture information elements according to the feature data; and
a calculating submodule, configured to calculate difference data between the picture clusters as the difference data between the picture information elements.
Example 17 may include the apparatus of one or more of examples 11-16, wherein the apparatus further comprises:
a searching module, configured to search, before extracting feature data of an information element in the content information, associated information of the content information, where the associated information includes at least one of: a picture information element, a text information element, a video information element;
an adding module, configured to add the associated information to the content information.
Example 18 may include the apparatus of one or more of examples 11-17, wherein the association information includes a picture information element, the apparatus further comprising:
a determining module, configured to determine, before the associated information of the content information is searched for, that the number of picture information elements in the content information does not meet a preset requirement.
Example 19 may include the apparatus of one or more of examples 11-18, wherein the content information includes at least one of commentary content information, video content information.
Example 20 may include the apparatus of one or more of examples 11-19, wherein, when the content information is video content information, the apparatus further comprises:
a video frame extraction module, configured to extract, before the feature data of the information elements in the content information is extracted, a plurality of video frames from the video content information as the picture information elements.
Example 21 includes a computer device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, the processor implementing a method as in one or more of examples 1-10 when executing the computer program.
Example 22 includes a computer-readable storage medium having stored thereon a computer program that, when executed by a processor, implements a method as in one or more of examples 1-10.
Although certain examples have been illustrated and described for purposes of description, a wide variety of alternate and/or equivalent implementations may be made to achieve the same objectives without departing from the scope of the present application. This application is intended to cover any adaptations or variations of the embodiments discussed herein. Therefore, it is manifestly intended that the embodiments described herein be limited only by the claims and the equivalents thereof.

Claims (12)

1. A picture recognition method is characterized by comprising the following steps:
acquiring content information, wherein the content information comprises a plurality of information elements, and the plurality of information elements comprise at least one picture information element;
extracting characteristic data of information elements in the content information;
determining difference data between the picture information element and other information elements according to the characteristic data;
and determining the characteristic type of the content information according to the characteristic data and the difference data.
2. The method of claim 1, wherein determining the feature type of the content information according to the feature data and the difference data comprises:
determining the characteristic value of the information element by taking the characteristic data of the information element as input according to a characteristic value recognition model;
and determining the characteristic type of the content information according to the characteristic value and the difference data.
3. The method of claim 2, wherein before the determining the feature value of the information element according to a feature value recognition model with the feature data of the information element as input, the method further comprises:
and training the characteristic value recognition model by adopting the information element samples and the characteristic types of the corresponding marks.
4. The method of claim 1, wherein the content information comprises a textual information element, wherein the feature data comprises descriptive information, and wherein determining difference data between the pictorial information element and other information elements based on the feature data comprises:
and determining difference data between the picture information element and the text information element according to the description information of the picture information element and the description information of the text information element.
5. The method of claim 1, wherein determining difference data between the picture information element and other information elements according to the feature data comprises:
and comparing the characteristic data among the picture information elements to obtain difference data among the picture information elements.
6. The method of claim 1, wherein determining difference data between the picture information element and other information elements according to the feature data comprises:
clustering a plurality of picture information elements according to the characteristic data;
calculating difference data between the clusters of pictures as difference data between the picture information elements.
7. The method of claim 1, wherein prior to said extracting feature data of information elements in said content information, said method further comprises:
searching the associated information of the content information, wherein the associated information comprises at least one of the following: a picture information element, a text information element, a video information element;
and adding the associated information into the content information.
8. The method of claim 7, wherein the associated information comprises a picture information element, and prior to the searching for the associated information of the content information, the method further comprises:
and determining that the number of picture information elements in the content information does not meet the preset requirement.
9. The method of claim 1, wherein the content information comprises at least one of comment content information and video content information.
10. The method according to claim 1, wherein when the content information is video content information, before the extracting feature data of information elements in the content information, the method further comprises:
extracting a plurality of video frames in the video content information as the picture information element.
11. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the method according to one or more of claims 1-10 when executing the computer program.
12. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the method according to one or more of claims 1-10.
CN201910926845.6A 2019-09-27 2019-09-27 Picture identification method, computer equipment and storage medium Pending CN112580674A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910926845.6A CN112580674A (en) 2019-09-27 2019-09-27 Picture identification method, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN112580674A true CN112580674A (en) 2021-03-30

Family

ID=75110533

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910926845.6A Pending CN112580674A (en) 2019-09-27 2019-09-27 Picture identification method, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112580674A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113743506A (en) * 2021-09-06 2021-12-03 联想(北京)有限公司 Data processing method and device and electronic equipment

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103810425A (en) * 2012-11-13 2014-05-21 腾讯科技(深圳)有限公司 Method and device for detecting malicious website
US20160180265A1 (en) * 2014-12-23 2016-06-23 The Travelers Indemnity Company Mobile assessment tool
US20160217343A1 (en) * 2015-01-23 2016-07-28 Highspot, Inc. Systems and methods for identifying semantically and visually related content
CN110019790A (en) * 2017-10-09 2019-07-16 阿里巴巴集团控股有限公司 Text identification, text monitoring, data object identification, data processing method
CN109492698A (en) * 2018-11-20 2019-03-19 腾讯科技(深圳)有限公司 A kind of method of model training, the method for object detection and relevant apparatus

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
刘楚舒; 王伟平; 刘鹏飞: "Android Malicious Application Detection Method Combining Resource Features" (结合资源特征的Android恶意应用检测方法), Computer Engineering and Applications (计算机工程与应用), no. 15, 6 July 2017 (2017-07-06) *

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination