CN115618271B - Object category identification method, device, equipment and storage medium

Info

Publication number
CN115618271B
CN115618271B
Authority
CN
China
Prior art keywords
classification model
layer
channel
pruning
scaling
Prior art date
Legal status
Active
Application number
CN202210478922.8A
Other languages
Chinese (zh)
Other versions
CN115618271A (en)
Inventor
刘文然 (Liu Wenran)
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd
Priority to CN202210478922.8A
Publication of CN115618271A
Application granted
Publication of CN115618271B

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/08: Learning methods
    • G06N 3/082: Learning methods modifying the architecture, e.g. adding, deleting or silencing nodes or connections
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The application discloses an object category identification method, device, equipment and storage medium, which can be applied to scenes such as cloud technology, artificial intelligence, intelligent traffic and the Internet of Vehicles. The method comprises the following steps: acquiring target data of a target object; and performing category identification processing on the target data based on an object classification model to obtain a target category label of the target object. The object classification model is obtained by performing object category identification training on a pruning classification model based on sample data of a sample object; the pruning classification model is obtained by pruning the channels to be pruned in an initial classification model, a channel to be pruned being a channel of the initial classification model whose scaling parameter has an absolute value smaller than a preset threshold. The initial classification model is obtained by performing object category identification training on a preset network based on the sample data, where the preset network comprises an attention network provided with a scaling layer. The application reduces the amount of computation of the object classification model, increases the model's computation speed, and improves the speed of object category recognition.

Description

Object category identification method, device, equipment and storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to a method, an apparatus, a device, and a storage medium for identifying object categories.
Background
In the related art, for a neural network containing CNN or MLP layers, unimportant parameters are usually cut from the network according to parameter importance in order to shrink the model. For the Attention layer, however, the parameters mainly come from fully connected (FC) layers, and FC layers are connected differently from CNN and MLP layers, so if channel pruning is applied directly, the pruned network can no longer compute correctly.
Disclosure of Invention
The application provides an object category identification method, device, equipment and storage medium, which can improve the speed of object category recognition.
In one aspect, the present application provides a method for identifying an object class, the method comprising:
acquiring target data of a target object;
performing category identification processing on the target data based on an object classification model to obtain a target category label of the target object; the object classification model is obtained by carrying out object category identification training on a pruning classification model based on sample data of a sample object, the pruning classification model is a model obtained by pruning a channel to be pruned in an initial classification model, and the channel to be pruned is a channel with an absolute value of a scaling parameter smaller than a preset threshold value in the initial classification model; the initial classification model is obtained by performing object class identification training on a preset network based on the sample data; the preset network comprises an updated attention network, wherein the updated attention network is an attention network provided with a scaling layer; the scaling parameters of each channel in the initial classification model are determined based on the scaling layer; the sample data is labeled with a sample class label of the sample object.
Another aspect provides an object class identification apparatus, the apparatus comprising:
the target data acquisition module is used for acquiring target data of a target object;
the target category determining module is used for carrying out category identification processing on the target data based on an object classification model to obtain a target category label of the target object; the object classification model is obtained by carrying out object category identification training on a pruning classification model based on sample data of a sample object, the pruning classification model is a model obtained by pruning a channel to be pruned in an initial classification model, and the channel to be pruned is a channel with an absolute value of a scaling parameter smaller than a preset threshold value in the initial classification model; the initial classification model is obtained by performing object class identification training on a preset network based on the sample data; the preset network comprises an updated attention network, wherein the updated attention network is an attention network provided with a scaling layer; the scaling parameters of each channel in the initial classification model are determined based on the scaling layer; the sample data is labeled with a sample class label of the sample object.
Another aspect provides an object class identification device comprising a processor and a memory having stored therein at least one instruction or at least one program loaded and executed by the processor to implement an object class identification method as described above.
Another aspect provides a computer storage medium storing at least one instruction or at least one program loaded and executed by a processor to implement an object class identification method as described above.
Another aspect provides a computer program product or computer program comprising computer instructions stored in a computer-readable storage medium. A processor of a computer device reads the computer instructions from the computer-readable storage medium and executes them, so that the computer device performs the object class identification method described above.
The object category identification method, the device, the equipment and the storage medium provided by the application have the following technical effects:
the method comprises the steps of obtaining target data of a target object; performing category identification processing on the target data based on an object classification model to obtain a target category label of the target object; the object classification model is obtained by carrying out object category identification training on a pruning classification model based on sample data of a sample object, the pruning classification model is a model obtained by pruning a channel to be pruned in an initial classification model, and the channel to be pruned is a channel with an absolute value of a scaling parameter smaller than a preset threshold value in the initial classification model; the initial classification model is obtained by performing object class identification training on a preset network based on the sample data; the preset network comprises an updated attention network, wherein the updated attention network is an attention network provided with a scaling layer; the scaling parameters of each channel in the initial classification model are determined based on the scaling layer; the sample data is marked with a sample category label of the sample object; according to the application, the scaling layer is arranged in the attention network of the preset network, and then the channel to be pruned is determined through the scaling layer parameters, so that pruning in the model containing the attention network is realized, and then the object classification model is further determined according to the pruning classification model, thereby reducing the operation amount of the object classification model, improving the calculation speed of the model and improving the recognition speed of the object class.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the application or of the prior art, the drawings needed in the description of the embodiments or of the prior art are briefly introduced below. The drawings in the following description are obviously only some embodiments of the application; a person skilled in the art can obtain other drawings from these drawings without inventive effort.
FIG. 1 is a schematic diagram of an object class identification system according to an embodiment of the present application;
FIG. 2 is a schematic flow chart of an object class identification method according to an embodiment of the present application;
FIG. 3 is a flowchart of a training method of an object classification model according to an embodiment of the present application;
FIG. 4 is a flowchart of a method for adding a scaling layer to the original attention network to obtain the updated attention network according to an embodiment of the present application;
FIG. 5 is a flow chart of a method for determining the object classification model based on the initial classification model according to an embodiment of the present application;
FIG. 6 is a schematic flow chart of a method for pruning the channel to be pruned in the initial classification model to obtain the pruning classification model according to the embodiment of the present application;
FIG. 7 is a flowchart of a method for performing object class identification training on the pruning classification model based on the sample data to obtain the object classification model according to the embodiment of the present application;
fig. 8 is a schematic structural diagram of a picture classification model according to an embodiment of the present application;
FIG. 9 is a structural comparison diagram of the Attention before and after adding a scaling layer according to an embodiment of the present application;
FIG. 10 is a comparison of an initial classification model before and after channel pruning according to an embodiment of the present application;
FIG. 11 is a structural comparison diagram of the Attention before and after adding an index pooling layer according to an embodiment of the present application;
FIG. 12 is a flowchart of a method for constructing an object classification model according to an embodiment of the present application;
fig. 13 is a schematic structural diagram of an object class identification device according to an embodiment of the present application;
fig. 14 is a schematic structural diagram of a server according to an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present application, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
First, some of the terms or terminology appearing in the description of the embodiments of the application are explained as follows:
artificial intelligence (Artificial Intelligence, AI) is the theory, method, technique and application system that uses a digital computer or a machine controlled by a digital computer to simulate, extend and extend human intelligence, sense the environment, acquire knowledge and use the knowledge to obtain optimal results. In other words, artificial intelligence is an integrated technology of computer science that attempts to understand the essence of intelligence and to produce a new intelligent machine that can react in a similar way to human intelligence. Artificial intelligence, i.e. research on design principles and implementation methods of various intelligent machines, enables the machines to have functions of sensing, reasoning and decision. Specifically, the scheme provided by the embodiment of the application relates to the field of machine learning of artificial intelligence. Machine Learning (ML) is a multi-domain interdisciplinary, involving multiple disciplines such as probability theory, statistics, approximation theory, convex analysis, algorithm complexity theory, etc. It is specially studied how a computer simulates or implements learning behavior of a human to acquire new knowledge or skills, and reorganizes existing knowledge structures to continuously improve own performance. Machine learning is the core of artificial intelligence, a fundamental approach to letting computers have intelligence, which is applied throughout various areas of artificial intelligence.
Intelligent transportation applies new-generation information technologies, such as the Internet of Things, spatial sensing, cloud computing and the mobile Internet, across the entire transportation field, and comprehensively uses theories and tools from traffic science, systems methods, artificial intelligence and knowledge mining. Taking comprehensive perception, deep fusion, active service and scientific decision-making as its goals, it builds a real-time dynamic information service system and deeply mines transportation-related data to form problem-analysis models, thereby improving the industry's capabilities in resource allocation and optimization, public decision-making, industry management and public service, making transportation operation and development safer, more efficient, more convenient, more economical, more environmentally friendly and more comfortable, and driving the transformation and upgrading of transportation-related industries.
Attention: the essence of the Attention mechanism is inspiration drawn from the human visual attention mechanism. Roughly speaking, when people perceive things, vision does not sweep an entire scene from beginning to end every time; instead, people observe and attend to a specific part as needed. And when we find that something we want to observe often appears in a certain part of a scene, we learn to focus on that part when a similar scene appears again in the future.
Transformer: a neural network module; a model that uses the attention mechanism to increase the speed of model training.
ImageNet dataset: a computer vision dataset created by the group of Professor Fei-Fei Li at Stanford University. The dataset includes 14,197,122 pictures and 21,841 Synset indexes. A Synset is a node in the WordNet hierarchy, which in turn is a set of synonyms. The ImageNet dataset has long been the benchmark for evaluating the performance of image classification algorithms.
It should be noted that the terms "first," "second," and the like in the description and the claims of the present application and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the application described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or server that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed or inherent to such process, method, article, or apparatus, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Referring to fig. 1, fig. 1 is a schematic diagram of an object class identification system according to an embodiment of the present application, and as shown in fig. 1, the object class identification system may at least include a server 01 and a client 02.
Specifically, in the embodiment of the present application, the server 01 may include an independently operating server, a distributed server, or a server cluster composed of multiple servers, and may also be a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDN (Content Delivery Network), big data and artificial intelligence platforms. The server 01 may include a network communication unit, a processor, a memory, and the like. In particular, the server 01 may be configured to train an object classification model and determine the class label of a target object based on the object classification model.
Specifically, in the embodiment of the present application, the client 02 may include smart phones, desktop computers, tablet computers, notebook computers, digital assistants, intelligent wearable devices, intelligent sound boxes, vehicle terminals, intelligent televisions, and other types of entity devices, or may include software running in the entity devices, for example, web pages provided by some service providers to users, or may also provide applications provided by the service providers to users. Specifically, the client 02 may be configured to query the category of the target type online.
An object class identification method according to an embodiment of the present application is introduced below; fig. 2 is a schematic flow chart of the method. The present specification provides the method operation steps described in the embodiments or flowcharts, but more or fewer operation steps may be included based on conventional or non-inventive labor. The order of steps recited in the embodiments is merely one of many possible execution orders and does not represent the only execution order. When implemented in a real system or server product, the methods illustrated in the embodiments or figures may be executed sequentially or in parallel (for example, in a parallel-processor or multithreaded environment). As shown in fig. 2, the method may be applied to the server 01 shown in fig. 1 and may include:
s201: target data of a target object is acquired.
In the embodiment of the application, the target object can be an object in different application scenes and different fields; it can include people, animals, commodities, daily necessities and the like, including but not limited to users, shops, addresses, animals, electronic equipment and the like. The target data is the data corresponding to the target object and can represent the attributes of the target object. The target data may include, but is not limited to, characters, text, images, and the like.
In an embodiment of the present application, the acquiring the target data of the target object may include:
and the receiving terminal responds to the object type identification instruction and sends target data.
In the embodiment of the application, the corresponding target data can be acquired through the terminal corresponding to the target object.
S203: performing category identification processing on the target data based on an object classification model to obtain a target category label of the target object; the object classification model is obtained by carrying out object category identification training on a pruning classification model based on sample data of a sample object, the pruning classification model is a model obtained by pruning a channel to be pruned in an initial classification model, and the channel to be pruned is a channel with an absolute value of a scaling parameter smaller than a preset threshold value in the initial classification model; the initial classification model is obtained by performing object class identification training on a preset network based on the sample data; the preset network comprises an updated attention network, wherein the updated attention network is an attention network provided with a scaling layer; the scaling parameters of each channel in the initial classification model are determined based on the scaling layer; the sample data is labeled with a sample class label of the sample object.
In the embodiment of the present application, the sample object and the target object belong to the same application scene and the same type of object, and the sample data and the target data are the same type of data; for example, if the target data are images, the sample data are also images. The preset network may be any of various networks containing an Attention network; for example, the preset network may be a Transformer, or another type of network. The updated attention network is an attention network provided with a scaling layer (Scale layer). The preset threshold corresponding to the scaling parameter may be set according to the actual situation and may be set to a value close to zero.
In the embodiment of the application, an initial classification model is obtained according to preset network training, then the channel to be pruned is determined according to the scaling parameters of each channel corresponding to the scaling layer when the model converges, so that a pruning classification model containing an attention network is determined, and then the model is further trained to obtain an object classification model. The object classification model in the embodiment can be applied to different scenes to classify different objects; the object classification model of the embodiment reduces the calculation amount of the Attention in network deployment, and can be applied to models using the Attention algorithm, such as text classification, picture classification, video classification and the like. For example, in an image classification scene, various images may be classified by an object classification model; in an App scene, the users can be classified according to the associated data of the users and App service indexes; in the advertisement scene, whether the user is interested in the specific advertisement or not can be judged according to the associated data of the user and the advertisement service index.
In a specific embodiment, the Transformer is used as a basic neural network module and plays an important role in natural language processing and computer vision tasks. For example, in a picture classification task, Transformer layers can be stacked to build a picture classification network, which is finally connected to a classifier to perform picture classification. As shown in fig. 8, fig. 8 is a schematic structural diagram of a picture classification model: the picture classification model is composed of multiple Transformer layers, each Transformer containing the improved Attention, which reduces the computation of the classification model and improves its classification speed.
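As a rough illustration of this structure, the following is a minimal sketch of such a stacked network in PyTorch. It is not the patented model: PyTorch's standard nn.TransformerEncoderLayer stands in for the Transformer with improved Attention described below, and all names, dimensions and hyperparameters are assumptions.

```python
import torch
import torch.nn as nn

class PictureClassifier(nn.Module):
    # Hypothetical sketch of fig. 8's layout: stacked Transformer layers
    # followed by a classifier head. The standard encoder layer is only a
    # stand-in for the Transformer with improved Attention described below.
    def __init__(self, dim=384, depth=12, heads=6, num_classes=1000):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                           batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, num_layers=depth)
        self.head = nn.Linear(dim, num_classes)  # final classifier

    def forward(self, tokens):                   # tokens: (batch, seq, dim)
        x = self.blocks(tokens)
        return self.head(x.mean(dim=1))          # pool tokens, then classify

logits = PictureClassifier()(torch.randn(2, 196, 384))  # e.g. 14x14 patches
```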
In some embodiments, as shown in fig. 3, the training method of the object classification model includes:
s301: acquiring an original attention network;
in the embodiment of the present application, the original Attention network may be Attention.
S303: adding a scaling layer in the original attention network to obtain the updated attention network;
in the embodiment of the application, a scaling layer can be added in the Attention so as to obtain an updated Attention network.
In some embodiments, as shown in fig. 4, the scaling layers include a first scaling layer and a second scaling layer, and adding the scaling layer to the original attention network to obtain the updated attention network includes:
S3031: determining a first linear layer, a second linear layer, a first matrix multiplication layer, and a second matrix multiplication layer in the original attention network; the first linear layer is connected with the first matrix multiplication layer, and the second linear layer is connected with the second matrix multiplication layer;
in the embodiment of the present application, as shown in fig. 9, fig. 9 (a) is a schematic structural diagram of the Attention, where the Attention includes linear layers and matrix multiplication layers (MatMul). The linear layers include a first linear layer (corresponding to the value linear transformation matrix, V) and a second linear layer (corresponding to the key linear transformation matrix, K); the matrix multiplication layers include a first matrix multiplication layer and a second matrix multiplication layer; the first linear layer is connected with the first matrix multiplication layer, and the second linear layer is connected with the second matrix multiplication layer. The original attention network further comprises a normalized exponential function layer (Softmax) and a third linear layer (corresponding to the query linear transformation matrix, Q); the third linear layer is connected to the second matrix multiplication layer, the second matrix multiplication layer is connected to the normalized exponential function layer, and the normalized exponential function layer is connected to the first matrix multiplication layer. The first linear layer is used for determining the content vector of the data, the second linear layer is used for determining the query identifier of the data's content vector, and the third linear layer is used for determining the query vector of the data: Q is the query vector, K is the vector matched against the query, and V is the content vector. In other words, Q is best suited to express what is being searched for, K is best suited to being searched against, and V carries the content; the three are not necessarily identical, so the network sets up three vectors in this way and then learns the most suitable Q, K and V, thereby enhancing the capability of the network.
S3033: adding the first scaling layer between the first linear layer and the first matrix multiplication layer, and adding the second scaling layer between the second linear layer and the second matrix multiplication layer, to obtain the updated attention network.
In the embodiment of the application, two scaling layers can be added in the Attention at the same time to obtain the updated Attention network (Scaled Attention); as shown in fig. 9, fig. 9 (b) is a schematic structural diagram of Attention (Scaled Attention) with a scaling layer added.
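To make the structure of fig. 9 (b) concrete, here is a minimal PyTorch sketch of an attention module with scaling layers inserted after the key and value linear layers. It is a sketch under assumptions, not the patent's reference implementation: multi-head splitting is omitted and all names are invented.

```python
import torch
import torch.nn as nn

class ScaledAttention(nn.Module):
    # Minimal sketch of fig. 9 (b): a learnable per-channel multiplier (the
    # Scale layer) is applied to K and V between their linear layers and the
    # matrix multiplications. Multi-head splitting is omitted for brevity.
    def __init__(self, dim):
        super().__init__()
        self.q = nn.Linear(dim, dim)           # third linear layer (query)
        self.k = nn.Linear(dim, dim)           # second linear layer (key)
        self.v = nn.Linear(dim, dim)           # first linear layer (value)
        # Initialised to 1 so the module starts out equivalent to plain Attention.
        self.scale_k = nn.Parameter(torch.ones(dim))  # second scaling layer
        self.scale_v = nn.Parameter(torch.ones(dim))  # first scaling layer
        self.dim = dim

    def forward(self, x):                      # x: (batch, seq_len, dim)
        q = self.q(x)
        k = self.k(x) * self.scale_k           # Scale layer on K
        v = self.v(x) * self.scale_v           # Scale layer on V
        attn = torch.softmax(q @ k.transpose(-2, -1) / self.dim ** 0.5,
                             dim=-1)           # second matmul, then softmax
        return attn @ v                        # first matmul
```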
S305: constructing the preset network based on the updated attention network;
in the embodiment of the application, a preset network can be constructed according to the updated attention network; for example, a Transformer may be constructed from Scaled Attention.
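A sketch of that construction, wrapping the ScaledAttention module above in a Transformer block; the pre-norm residual layout and layer sizes are assumptions rather than anything mandated by the application:

```python
class TransformerBlock(nn.Module):
    # Assumed pre-norm residual layout; only the use of ScaledAttention
    # (sketched above) reflects this application's modification.
    def __init__(self, dim, mlp_ratio=4):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.attn = ScaledAttention(dim)
        self.norm2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(
            nn.Linear(dim, dim * mlp_ratio), nn.GELU(),
            nn.Linear(dim * mlp_ratio, dim))

    def forward(self, x):
        x = x + self.attn(self.norm1(x))    # attention sub-layer
        return x + self.mlp(self.norm2(x))  # feed-forward sub-layer
```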
S307: according to the sample data of the sample object, performing object category identification training on the preset network to obtain an initial classification model; the initial classification model includes at least two channels;
in the embodiment of the application, the preset network comprises at least two channels, and the initial classification model obtained by training has the same number of channels as the preset network.
In some embodiments, the training for object class identification on the preset network according to the sample data of the sample object to obtain an initial classification model may include:
Inputting sample data of the sample object into the preset network to perform object category identification training, and continuously adjusting parameters of the preset network in the training process until an object category label output by the preset network is matched with a labeled object category label;
and taking the preset network corresponding to the parameters when the output object class label is matched with the marked object class label as the initial classification model.
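A plain supervised training loop of the kind described in these two steps might look as follows; the data loader, optimizer choice and hyperparameters are placeholder assumptions:

```python
import torch
import torch.nn as nn

def train_initial_model(model, loader, epochs=100, lr=1e-4):
    # Hypothetical loop for S307: keep adjusting the preset network's
    # parameters until its predicted class labels match the annotated ones.
    opt = torch.optim.AdamW(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for sample, label in loader:       # sample data + sample class labels
            opt.zero_grad()
            loss = loss_fn(model(sample), label)
            loss.backward()
            opt.step()
    return model                           # the initial classification model
```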
S309: the object classification model is determined based on the initial classification model.
In some embodiments, as shown in fig. 5, the determining the object classification model based on the initial classification model includes:
s3091: acquiring an objective function corresponding to the scaling layer in the initial classification model;
s3093: determining a scaling parameter corresponding to each channel in the initial classification model according to the objective function;
in the embodiment of the application, the scaling parameters corresponding to each channel in the initial classification model can be determined according to the coefficients in the objective function.
S3095: determining the channel to be pruned based on the scaling parameter corresponding to each channel in the initial classification model; the channel to be pruned is a channel whose scaling parameter has an absolute value smaller than a preset threshold;
In the embodiment of the present application, the preset threshold may be set according to actual requirements, for example, the preset threshold may be a value close to 0.
In some embodiments, the determining the channel to be pruned based on the scaling parameter corresponding to each channel in the initial classification model may include:
determining the total number of channels and the proportion of channels to be pruned in the initial classification model;
in the embodiment of the present application, the proportion of channels to be pruned is the share of the model's total number of channels taken up by the channels to be pruned, and it may be set according to the actual situation, for example to 10%, 20%, etc.
Determining the number of channels to be pruned according to the total number of channels and the proportion of channels to be pruned;
in the embodiment of the application, the product of the total number of channels and the proportion of channels to be pruned can be calculated to obtain the number of channels to be pruned.
Determining the preset threshold according to the scaling parameter corresponding to each channel in the initial classification model and the number of channels to be pruned;
and determining the identification information of the channels to be pruned according to the scaling parameter corresponding to each channel in the initial classification model and the preset threshold.
In the embodiment of the application, after all the Attention modules in the model are converted into Scaled Attention, the model is trained on the original task; the resulting model is consistent with the original model in task accuracy. A portion of the channels is then pruned according to the Scale-layer parameters: the closer a Scale parameter is to 0, the less important the features in that channel, so channels can be pruned according to the absolute values of the Scale parameters. For example, if it is preset that 20% of the channels are to be pruned, the 20% of channels whose Scale parameters have absolute values closest to 0 are selected (i.e., the preset threshold may be a value close to 0), the indices of these channels are recorded, and these channels are no longer considered in subsequent computation. In a specific embodiment, as shown in fig. 10, (a) in fig. 10 shows the scaling parameters of 5 channels in the initial classification model, and (b) shows the scaling parameters of the remaining channels after two channels have been pruned from the model.
In some embodiments, each channel in the model corresponds to a set of feature parameters, which include the scaling parameter. The sum of squares of the feature parameters corresponding to each channel can be calculated, and the channels to be pruned can then be determined from these per-channel sums of squares; for example, a channel whose sum of squares is smaller than a preset value may be determined as a channel to be pruned.
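Under either criterion, channel selection reduces to ranking a per-channel score; a small sketch with assumed tensor layouts:

```python
import torch

def channels_to_prune(scale, ratio=0.2):
    # Absolute-value criterion: |scale| closest to 0 = least important.
    # The implied preset threshold is the ratio-quantile of |scale|.
    k = int(scale.numel() * ratio)       # number of channels to prune
    return scale.abs().argsort()[:k]     # identification info (channel indices)

def channels_to_prune_by_squares(features, preset_value):
    # Sum-of-squares criterion; 'features' is assumed to be laid out as
    # (num_feature_params, num_channels), with the scale row included.
    score = (features ** 2).sum(dim=0)
    return (score < preset_value).nonzero(as_tuple=True)[0]
```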
S3097: pruning the channel to be pruned in the initial classification model to obtain the pruning classification model;
in some embodiments, as shown in fig. 6, the pruning classification model is obtained by pruning the channel to be pruned in the initial classification model, and includes:
s30971: determining identification information to be sheared off of the channel to be sheared off;
s30973: adding an index pooling layer after the scaling layer of the initial classification model;
in the embodiment of the application, the unimportant channels in K and V are found through the Scale layer and their serial numbers are recorded; an IndexPooling layer is then added behind the Scale layer in the Scaled Attention, i.e., the retained channels are selected through pooling, and the new Attention thus obtained is called IndexPooling Attention.
In some embodiments, the index pooling layer includes a first index pooling layer and a second index pooling layer, the adding an index pooling layer after the scaling layer of the initial classification model includes:
Adding a first index pooling layer between the first scaling layer and the first matrix multiplication layer;
a second index pooling layer is added between the second scaling layer and the second matrix multiplication layer.
In a specific embodiment, as shown in fig. 11, (a) in fig. 11 is a schematic structural diagram of Attention (Scaled Attention) with a scaling layer added, and (b) in fig. 11 is a schematic structural diagram of Attention (IndexPoolingAttention) with an index pooling layer added.
S30975: and pruning the channel to be pruned corresponding to the identification information to be pruned from the channels of the initial classification model based on the index pooling layer to obtain the pruning classification model.
In the embodiment of the application, the retained channels are selected through pooling, and the new Attention thus obtained is called IndexPoolingAttention.
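An index pooling layer of this kind can be realized as a gather along the channel dimension; a minimal sketch with assumed names and shapes. One assumption the excerpt leaves implicit: for the Q·K^T product to remain well-defined after K loses channels, the same channel indices must also be dropped from Q; since those channels' Scale parameters are close to 0, dropping them changes the product only negligibly.

```python
import torch
import torch.nn as nn

class IndexPooling(nn.Module):
    # Minimal sketch: keep only the recorded channel indices along the last
    # (channel) dimension, implementing the 'pooling by index' described above.
    def __init__(self, keep_idx):
        super().__init__()
        self.register_buffer("keep_idx", keep_idx)  # retained channel indices

    def forward(self, x):                           # x: (batch, seq, dim)
        return torch.index_select(x, dim=-1, index=self.keep_idx)

# Sketchy usage inside ScaledAttention.forward (an assumption, not the
# patent's exact wiring): K and V pass through Scale and then IndexPooling;
# Q is pooled with K's indices so q @ k.transpose(-2, -1) stays well-defined.
```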
S3099: and carrying out object category identification training on the pruning classification model based on the sample data to obtain the object classification model.
In the embodiment of the application, after the Scaled Attention in the model is converted into IndexPoolingAttention, the model is retrained.
In the embodiment of the application, the computation of the Scale layer and of IndexPooling is far smaller than that of the matrix multiplications in the Attention, and pruning some of the channel features of K and V reduces the computation in the matrix multiplication and softmax operations, thereby achieving the goals of pruning: reducing computation and increasing the model's recognition speed.
In some embodiments, as shown in fig. 7, the training for object class identification on the pruning classification model based on the sample data, to obtain the object classification model, includes:
s30991: acquiring initial model parameters of the initial classification model;
s30993: taking the initial model parameters as initial training parameters of the pruning classification model;
s30995: and carrying out object category identification training on the pruning classification model based on the sample data and the initial training parameters to obtain the object classification model.
In the embodiment of the application, because the initial classification model has already been trained to convergence, loading its trained parameters into the current pruning classification model before training can increase the convergence speed of the model and makes it easy to preserve the model's accuracy.
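A hedged sketch of this warm start (steps S30991 to S30995): copy every converged parameter whose shape survives pruning into the pruned model, then retrain. The shape-matching policy is an assumption; a real implementation would also slice pruned dimensions by the recorded channel indices.

```python
def warm_start(initial_model, pruned_model):
    # Hypothetical sketch: reuse the converged parameters of the initial
    # classification model as the pruning classification model's initial
    # training parameters wherever the shapes still match.
    src = initial_model.state_dict()
    dst = pruned_model.state_dict()
    for name, weight in src.items():
        if name in dst and dst[name].shape == weight.shape:
            dst[name] = weight.clone()
    pruned_model.load_state_dict(dst)
    return pruned_model        # then retrain it on the same sample data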
In a specific embodiment, as shown in fig. 12, fig. 12 is a flowchart of a method for constructing an object classification model, including:
the Attention module in the model is added to the Scale layer first. With original Attention model inputAfter passing through the Linear layer, the +.> The original Attention operation can be expressed by equation (1). When input features are received by a Linear layer, they are received in a form flattened into a one-dimensional tensor, which is then multiplied by a weight matrix. This matrix multiplication produces output features.
After passing through the Linear layer, pairAnd->And adding a Scale layer operation, namely multiplying each of N channels of the vector by a parameter, so that the importance of each channel can be obtained through the learning of the parameter in training, and the importance of each channel can be characterized by the absolute value of the parameter. The Attention operation obtained after adding the Scale layer can be represented by formula (2), where S (x) represents the Scale layer.
Because the parameters in the Scale layer are learned during training, when the Scale parameters are all 1 the operation is equivalent to the original Attention operation; after learning, the closer a channel's Scale parameter is to 0, the lower the importance of that channel, and the channel can be pruned.
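The equivalence claimed here (all Scale parameters equal to 1 recovers the original Attention) can be checked numerically against the ScaledAttention sketch above:

```python
import torch

x = torch.randn(2, 16, 64)
m = ScaledAttention(64)   # scale parameters are initialised to ones
plain = torch.softmax(
    m.q(x) @ m.k(x).transpose(-2, -1) / 64 ** 0.5, dim=-1) @ m.v(x)
assert torch.allclose(m(x), plain, atol=1e-6)  # matches plain Attention
```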
Specifically, in the embodiment of the application, compared with the unpruned model, the pruned model has a smaller computation budget and faster model inference. As shown in table 1, DeiT is a Transformer-based image classification model that comes in sizes such as DeiT-Small and DeiT-Base. After the original DeiT network is pruned with the method of this embodiment, the pruned model's computation drops and its inference speed rises while essentially keeping the original model's accuracy on the ImageNet dataset.
Table 1. Comparison of DeiT before and after pruning with the method of this embodiment

Model                        Top-1 accuracy   Computation (GFLOPS)   Inference speed (images/s)
DeiT-Small                   79.8             4.6                    930
DeiT-Base                    81.8             17.6                   290
DeiT-Small (after pruning)   79.5             3.7                    1120
DeiT-Base (after pruning)    81.3             14.0                   350
As can be seen from the technical solutions provided by the above embodiments of the present application, the embodiments of the present application acquire target data of a target object; performing category identification processing on the target data based on an object classification model to obtain a target category label of the target object; the object classification model is obtained by carrying out object category identification training on a pruning classification model based on sample data of a sample object, the pruning classification model is a model obtained by pruning a channel to be pruned in an initial classification model, and the channel to be pruned is a channel with an absolute value of a scaling parameter smaller than a preset threshold value in the initial classification model; the initial classification model is obtained by performing object class identification training on a preset network based on the sample data; the preset network comprises an updated attention network, wherein the updated attention network is an attention network provided with a scaling layer; the scaling parameters of each channel in the initial classification model are determined based on the scaling layer; the sample data is marked with a sample category label of the sample object; according to the application, the scaling layer is arranged in the attention network of the preset network, and then the channel to be pruned is determined through the scaling layer parameters, so that pruning in the model containing the attention network is realized, and then the object classification model is further determined according to the pruning classification model, thereby reducing the operation amount of the object classification model, improving the calculation speed of the model and improving the recognition speed of the object class.
The embodiment of the application also provides an object category identification device, as shown in fig. 13, which comprises:
a target data acquisition module 1310, configured to acquire target data of a target object;
a target class determining module 1320, configured to perform class identification processing on the target data based on an object classification model, so as to obtain a target class label of the target object; the object classification model is obtained by carrying out object category identification training on a pruning classification model based on sample data of a sample object, the pruning classification model is a model obtained by pruning a channel to be pruned in an initial classification model, and the channel to be pruned is a channel with an absolute value of a scaling parameter smaller than a preset threshold value in the initial classification model; the initial classification model is obtained by performing object class identification training on a preset network based on the sample data; the preset network comprises an updated attention network, wherein the updated attention network is an attention network provided with a scaling layer; the scaling parameters of each channel in the initial classification model are determined based on the scaling layer; the sample data is labeled with a sample class label of the sample object.
In some embodiments, the apparatus may further comprise:
The original attention network acquisition module is used for acquiring an original attention network;
an attention network updating module, configured to add a scaling layer to the original attention network to obtain the updated attention network;
a preset network construction module, configured to construct the preset network based on the updated attention network;
the initial classification model determining module is used for carrying out object category identification training on the preset network according to the sample data of the sample object to obtain an initial classification model; the initial classification model includes at least two channels;
and the object classification model determining module is used for determining the object classification model based on the initial classification model.
In some embodiments, the object classification model determination module comprises:
the objective function obtaining unit is used for obtaining an objective function corresponding to the scaling layer in the initial classification model;
the scaling parameter determining unit is used for determining scaling parameters corresponding to each channel in the initial classification model according to the objective function;
the channel to be sheared off determining unit is used for determining the channel to be sheared off based on the scaling parameters corresponding to each channel in the initial classification model; the channel to be sheared off is a channel with the absolute value of the scaling parameter smaller than a preset threshold value;
The pruning classification model determining unit is used for pruning the channel to be pruned in the initial classification model to obtain the pruning classification model;
and the object classification model determining unit is used for carrying out object classification recognition training on the pruning classification model based on the sample data to obtain the object classification model.
In some embodiments, the scaling layer comprises a first scaling layer and a second scaling layer, and the attention network update module may comprise:
a network layer determining unit, configured to determine a first linear layer, a second linear layer, a first matrix multiplication layer, and a second matrix multiplication layer in the original attention network; the first linear layer is connected with the first matrix multiplication layer, and the second linear layer is connected with the second matrix multiplication layer; the first linear layer is used for determining a content vector of data, and the second linear layer is used for determining a content vector query identifier of the data;
and a scaling layer adding unit, configured to add the first scaling layer between the first linear layer and the first matrix multiplication layer connection layer, and add the second scaling layer between the second linear layer and the second matrix multiplication layer connection layer, so as to obtain the updated attention network.
In some embodiments, the pruning classification model determination unit may include:
the identification information to be sheared off determining subunit is used for determining identification information to be sheared off of the channel to be sheared off;
an index pooling layer adding subunit, configured to add an index pooling layer after the scaling layer of the initial classification model;
and the channel pruning subunit is used for pruning the channel to be pruned corresponding to the identification information to be pruned from the channels of the initial classification model based on the index pooling layer to obtain the pruning classification model.
In some embodiments, the index pooling layer includes a first index pooling layer and a second index pooling layer, and the index pooling layer adding subunit may include:
a first adding subunit for adding a first index pooling layer between the first scaling layer and the first matrix multiplication layer;
a second adding subunit for adding a second index pooling layer between the second scaling layer and the second matrix multiplication layer.
In some embodiments, the object classification model determination unit may include:
an initial model parameter determining subunit, configured to obtain initial model parameters of the initial classification model;
An initial training parameter determining subunit, configured to use the initial model parameter as an initial training parameter of the pruning classification model;
and the object classification model determining subunit is used for carrying out object classification recognition training on the pruning classification model based on the sample data and the initial training parameters to obtain the object classification model.
The device embodiments and the method embodiments described above are based on the same inventive concept.
The embodiment of the application provides object class identification equipment, which comprises a processor and a memory, wherein at least one instruction or at least one section of program is stored in the memory, and the at least one instruction or the at least one section of program is loaded and executed by the processor to realize the object class identification method provided by the embodiment of the method.
Embodiments of the present application also provide a computer storage medium that may be provided in a terminal to store at least one instruction or at least one program related to implementing an object class identification method in a method embodiment, where the at least one instruction or at least one program is loaded and executed by the processor to implement the object class identification method provided in the method embodiment.
Embodiments of the present application also provide a computer program product or computer program comprising computer instructions stored in a computer-readable storage medium. A processor of a computer device reads the computer instructions from the computer-readable storage medium and executes them, so that the computer device performs the object class identification method provided by the above method embodiments.
Alternatively, in an embodiment of the present application, the storage medium may be located on at least one of a plurality of network servers in a computer network. Alternatively, in the present embodiment, the storage medium may include, but is not limited to: a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, an optical disk, or other media capable of storing program code.
The memory according to the embodiments of the present application may be used to store software programs and modules, and the processor executes the software programs and modules stored in the memory, thereby performing various functional applications and data processing. The memory may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, application programs required for functions, and the like; the storage data area may store data created according to the use of the device, etc. In addition, the memory may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid-state storage device. Accordingly, the memory may also include a memory controller to provide access to the memory by the processor.
The object class identification method provided by the embodiment of the application can be executed on a mobile terminal, a computer terminal, a server or a similar computing device. Taking execution on a server as an example, fig. 14 is a block diagram of the hardware structure of a server for an object class identification method according to an embodiment of the present application. As shown in fig. 14, the server 1400 may vary considerably in configuration or performance and may include one or more central processing units (CPU) 1410 (a central processing unit 1410 may include, but is not limited to, a microprocessor such as an MCU or a programmable logic device such as an FPGA), a memory 1430 for storing data, and one or more storage media 1420 (e.g., one or more mass storage devices) storing application programs 1423 or data 1422. The memory 1430 and the storage media 1420 may be transitory or persistent storage. The programs stored on the storage media 1420 may include one or more modules, each of which may include a series of instruction operations on the server. Further, the central processing unit 1410 may be configured to communicate with the storage media 1420 and execute on the server 1400 the series of instruction operations in the storage media 1420. The server 1400 may also include one or more power supplies 1460, one or more wired or wireless network interfaces 1450, one or more input/output interfaces 1440, and/or one or more operating systems 1421, such as Windows Server(TM), Mac OS X(TM), Unix(TM), Linux(TM), FreeBSD(TM), and the like.
Input-output interface 1440 may be used to receive or transmit data via a network. The specific example of the network described above may include a wireless network provided by a communication provider of the server 1400. In one example, input/output interface 1440 includes a network adapter (Network Interface Controller, NIC) that may connect to other network devices through a base station to communicate with the internet. In one example, the input-output interface 1440 may be a Radio Frequency (RF) module for communicating with the internet wirelessly.
It will be appreciated by those of ordinary skill in the art that the configuration shown in fig. 14 is merely illustrative and is not intended to limit the configuration of the electronic device described above. For example, server 1400 may also include more or fewer components than shown in fig. 14, or have a different configuration than shown in fig. 14.
As can be seen from the above embodiments of the object class identification method, apparatus, device or storage medium provided by the present application, the present application obtains target data of a target object; performing category identification processing on the target data based on an object classification model to obtain a target category label of the target object; the object classification model is obtained by carrying out object category identification training on a pruning classification model based on sample data of a sample object, the pruning classification model is a model obtained by pruning a channel to be pruned in an initial classification model, and the channel to be pruned is a channel with an absolute value of a scaling parameter smaller than a preset threshold value in the initial classification model; the initial classification model is obtained by performing object class identification training on a preset network based on the sample data; the preset network comprises an updated attention network, wherein the updated attention network is an attention network provided with a scaling layer; the scaling parameters of each channel in the initial classification model are determined based on the scaling layer; the sample data is marked with a sample category label of the sample object; according to the application, the scaling layer is arranged in the attention network of the preset network, and then the channel to be pruned is determined through the scaling layer parameters, so that pruning in the model containing the attention network is realized, and then the object classification model is further determined according to the pruning classification model, thereby reducing the operation amount of the object classification model, improving the calculation speed of the model and improving the recognition speed of the object class.
It should be noted that the ordering of the embodiments of the present application is for description only and does not imply that any embodiment is preferable to another. Specific embodiments of this specification have been described above; other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims can be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or a sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing are also possible and may be advantageous.
In this specification, the embodiments are described in a progressive manner: identical or similar parts can be referred to across embodiments, and each embodiment focuses on its differences from the others. In particular, the apparatus, device, and storage medium embodiments are described relatively briefly because they are substantially similar to the method embodiments; for relevant details, refer to the corresponding parts of the description of the method embodiments.
It will be understood by those skilled in the art that all or part of the steps of the above embodiments may be implemented by hardware, or by a program instructing the relevant hardware; the program may be stored in a computer storage medium, and the storage medium may be a read-only memory, a magnetic disk, an optical disk, or the like.
The foregoing describes only preferred embodiments of the application and is not intended to limit it; any modifications, equivalent substitutions, and improvements made within the spirit and principles of the application are intended to fall within its scope.

Claims (8)

1. An image category recognition method, wherein the method is applied to a server, and the method comprises:
acquiring target attribute information of a target image;
performing category identification processing on the target attribute information based on an image classification model to obtain a target category label of the target image; the image classification model is obtained by performing image category recognition training on a pruning classification model based on sample attribute information of a sample image, and the pruning classification model is obtained by pruning a channel to be pruned in an initial classification model;
the training method of the image classification model comprises the following steps:
acquiring an original attention network, and adding a scaling layer into the original attention network to obtain an updated attention network;
constructing a preset network based on the updated attention network;
according to the sample attribute information of the sample image, performing image category recognition training on the preset network to obtain an initial classification model; the initial classification model includes at least two channels;
acquiring an objective function corresponding to the scaling layer in the initial classification model;
determining a scaling parameter corresponding to each channel in the initial classification model according to the objective function; each channel of the initial classification model corresponds to a set of attribute feature parameters, wherein the attribute feature parameters comprise the scaling parameters, and the scaling parameters represent the importance degree of the features in the channel; the attribute characteristic parameters corresponding to each channel are obtained based on characteristic extraction processing of the sample attribute information;
determining a channel to be pruned based on the scaling parameter corresponding to each channel in the initial classification model; the channel to be pruned is a channel whose scaling parameter has an absolute value smaller than a preset threshold value;
pruning the channel to be pruned in the initial classification model to obtain the pruning classification model;
performing image category recognition training on the pruning classification model based on the sample attribute information to obtain the image classification model; the sample attribute information is marked with a sample category label of the sample image.
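Claim 1 leaves the objective function corresponding to the scaling layer unspecified. A common choice in channel pruning (for example, the L1 sparsity penalty used by Network Slimming) adds the summed absolute values of the scaling parameters, weighted by a coefficient, to the classification loss; the sketch below assumes that choice, and the coefficient lam and the function name are illustrative rather than taken from the patent.

import torch

def training_objective(task_loss: torch.Tensor,
                       scaling_params: list,
                       lam: float = 1e-4) -> torch.Tensor:
    # Classification loss plus an L1 penalty that drives the scaling
    # parameters of unimportant channels toward zero during training.
    l1 = sum(s.abs().sum() for s in scaling_params)
    return task_loss + lam * l1

# illustrative call with dummy values
loss = training_objective(torch.tensor(0.7), [torch.randn(8), torch.randn(8)])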
2. The method of claim 1, wherein the scaling layer comprises a first scaling layer and a second scaling layer, and wherein adding the scaling layer into the original attention network to obtain the updated attention network comprises:
determining a first linear layer, a second linear layer, a first matrix multiplication layer, and a second matrix multiplication layer in the original attention network; the first linear layer is connected with the first matrix multiplication layer, and the second linear layer is connected with the second matrix multiplication layer; the first linear layer is used for determining a content vector of the attribute information, and the second linear layer is used for determining a query identifier for the content vector of the attribute information;
and adding the first scaling layer between the first linear layer and the first matrix multiplication layer, and adding the second scaling layer between the second linear layer and the second matrix multiplication layer, to obtain the updated attention network.
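One way to read claim 2 in code: in a standard attention block, the two scaled projections correspond to the value and query paths, and each scaling layer multiplies its projection's output channels just before the matrix multiplication that consumes them. Mapping the "content vector" to the value projection and the "query identifier" to the query projection, as in the PyTorch module below, is an interpretive assumption, not the patent's wording.

import torch
import torch.nn as nn
import torch.nn.functional as F

class PrunableAttention(nn.Module):
    """Single-head attention with per-channel scaling layers inserted
    between two linear projections and the matmuls that consume them."""
    def __init__(self, dim: int):
        super().__init__()
        self.q_proj = nn.Linear(dim, dim)             # assumed "second linear layer"
        self.k_proj = nn.Linear(dim, dim)
        self.v_proj = nn.Linear(dim, dim)             # assumed "first linear layer"
        self.q_scale = nn.Parameter(torch.ones(dim))  # "second scaling layer"
        self.v_scale = nn.Parameter(torch.ones(dim))  # "first scaling layer"
        self.dim = dim

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        q = self.q_proj(x) * self.q_scale   # scaled before the Q·K^T matmul
        k = self.k_proj(x)
        v = self.v_proj(x) * self.v_scale   # scaled before the attn·V matmul
        attn = F.softmax(q @ k.transpose(-2, -1) / self.dim ** 0.5, dim=-1)
        return attn @ v

block = PrunableAttention(dim=16)
out = block(torch.randn(2, 10, 16))   # (batch, tokens, channels)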
3. The method of claim 1, wherein pruning the channel to be pruned in the initial classification model to obtain the pruning classification model comprises:
determining to-be-pruned identification information of the channel to be pruned;
adding an index pooling layer after the scaling layer of the initial classification model;
and pruning, based on the index pooling layer, the channel corresponding to the to-be-pruned identification information from the channels of the initial classification model to obtain the pruning classification model.
4. The method of claim 3, wherein the index pooling layer comprises a first index pooling layer and a second index pooling layer, and wherein adding the index pooling layer after the scaling layer of the initial classification model comprises:
adding the first index pooling layer between the first scaling layer and the first matrix multiplication layer; and
adding the second index pooling layer between the second scaling layer and the second matrix multiplication layer.
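A minimal reading of the index pooling layer is a gather over the surviving channel indices, applied right after each scaling layer so that the downstream matrix multiplications only ever see the kept channels. The sketch below uses PyTorch's index_select; the class name, threshold, and tensor shapes are illustrative assumptions.

import torch
import torch.nn as nn

class IndexPooling(nn.Module):
    """Selects the surviving channels (those not marked for pruning)
    along the channel axis."""
    def __init__(self, keep_idx: torch.Tensor):
        super().__init__()
        self.register_buffer("keep_idx", keep_idx)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x.index_select(dim=-1, index=self.keep_idx)

scale = torch.tensor([0.9, 0.01, 1.2, 0.03])
keep_idx = (scale.abs() >= 0.05).nonzero(as_tuple=True)[0]  # channels 0 and 2 survive
pool = IndexPooling(keep_idx)
print(pool(torch.randn(1, 5, 4)).shape)  # torch.Size([1, 5, 2])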
5. The method according to claim 1, wherein performing image category recognition training on the pruning classification model based on the sample attribute information to obtain the image classification model comprises:
acquiring initial model parameters of the initial classification model;
taking the initial model parameters as initial training parameters of the pruning classification model;
and performing image category recognition training on the pruning classification model based on the sample attribute information and the initial training parameters to obtain the image classification model.
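Claim 5 means that fine-tuning of the pruned network starts from the trained parameters of the initial model rather than from scratch. For the surviving channels this amounts to copying the corresponding weight rows across; the helper below sketches this for a single linear layer, with the sizes and names assumed for illustration.

import torch
import torch.nn as nn

def transfer_linear(initial: nn.Linear, pruned: nn.Linear,
                    keep_idx: torch.Tensor) -> None:
    # Copy only the output channels that survived pruning, so the pruned
    # model begins fine-tuning from the initial model's trained weights.
    with torch.no_grad():
        pruned.weight.copy_(initial.weight.index_select(0, keep_idx))
        pruned.bias.copy_(initial.bias.index_select(0, keep_idx))

initial = nn.Linear(16, 8)                  # trained layer of the initial model (assumed sizes)
keep_idx = torch.tensor([0, 2, 3, 5, 7])    # channels kept after pruning
pruned = nn.Linear(16, keep_idx.numel())
transfer_linear(initial, pruned, keep_idx)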
6. An image category recognition device, the device comprising:
the target attribute information acquisition module is used for acquiring target attribute information of a target image;
the target category determining module is used for performing category identification processing on the target attribute information based on an image classification model to obtain a target category label of the target image; the image classification model is obtained by performing image category recognition training on a pruning classification model based on sample attribute information of a sample image, and the pruning classification model is obtained by pruning a channel to be pruned in an initial classification model;
The training method of the image classification model comprises the following steps:
acquiring an original attention network, and adding a scaling layer into the original attention network to obtain an updated attention network;
constructing a preset network based on the updated attention network;
according to the sample attribute information of the sample image, performing image category recognition training on the preset network to obtain an initial classification model; the initial classification model includes at least two channels;
acquiring an objective function corresponding to the scaling layer in the initial classification model;
determining a scaling parameter corresponding to each channel in the initial classification model according to the objective function; each channel of the initial classification model corresponds to a set of attribute feature parameters, wherein the attribute feature parameters comprise the scaling parameters, and the scaling parameters represent the importance degree of the features in the channel; the attribute characteristic parameters corresponding to each channel are obtained based on characteristic extraction processing of the sample attribute information;
determining a channel to be pruned based on the scaling parameter corresponding to each channel in the initial classification model; the channel to be pruned is a channel whose scaling parameter has an absolute value smaller than a preset threshold value;
pruning the channel to be pruned in the initial classification model to obtain the pruning classification model;
performing image category recognition training on the pruning classification model based on the sample attribute information to obtain the image classification model; the sample attribute information is marked with a sample category label of the sample image.
7. An image category recognition device, comprising a processor and a memory, wherein the memory stores at least one instruction or at least one program, and the at least one instruction or the at least one program is loaded and executed by the processor to implement the image category recognition method of any one of claims 1 to 5.
8. A computer storage medium having at least one instruction stored therein, wherein the at least one instruction is loaded and executed by a processor to implement the image category recognition method of any one of claims 1 to 5.
CN202210478922.8A 2022-05-05 2022-05-05 Object category identification method, device, equipment and storage medium Active CN115618271B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210478922.8A CN115618271B (en) 2022-05-05 2022-05-05 Object category identification method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN115618271A CN115618271A (en) 2023-01-17
CN115618271B true CN115618271B (en) 2023-11-17

Family

ID=84856723

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210478922.8A Active CN115618271B (en) 2022-05-05 2022-05-05 Object category identification method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115618271B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116662814B (en) * 2023-07-28 2023-10-31 腾讯科技(深圳)有限公司 Object intention prediction method, device, computer equipment and storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20220036194A1 (en) * 2021-10-18 2022-02-03 Intel Corporation Deep neural network optimization system for machine learning model scaling

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112308019A (en) * 2020-11-19 2021-02-02 中国人民解放军国防科技大学 SAR ship target detection method based on network pruning and knowledge distillation
CN112668630A (en) * 2020-12-24 2021-04-16 华中师范大学 Lightweight image classification method, system and equipment based on model pruning
CN113011308A (en) * 2021-03-15 2021-06-22 山东大学 Pedestrian detection method introducing attention mechanism
CN113065558A (en) * 2021-04-21 2021-07-02 浙江工业大学 Lightweight small target detection method combined with attention mechanism
CN114048774A (en) * 2021-11-10 2022-02-15 厦门大学 Se-block-based resnet communication radiation source identification method and system
CN114120205A (en) * 2021-12-02 2022-03-01 云南电网有限责任公司信息中心 Target detection and image recognition method for safety belt fastening of distribution network operators
CN114332620A (en) * 2021-12-30 2022-04-12 杭州电子科技大学 Airborne image vehicle target identification method based on feature fusion and attention mechanism

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
A novel channel pruning method for deep neural network compression; Yiming Hu et al.; arXiv; 1-10 *
Pay Less Attention with Lightweight and Dynamic Convolutions; Felix Wu et al.; arXiv; 1-14 *
Knowledge distillation based on pruned networks for remote-sensing satellite image classification; Yang Hongbing et al.; Application Research of Computers; Vol. 38, No. 8; 2469-2473 *
Research on neural network model compression algorithms based on structured pruning; Shen Zhuo; China Master's Theses Full-text Database, Information Science and Technology, No. 01; I138-2536 *

Also Published As

Publication number Publication date
CN115618271A (en) 2023-01-17

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant