CN110489585A - Distributed image searching method based on supervised learning - Google Patents

Distributed image searching method based on supervised learning Download PDF

Info

Publication number
CN110489585A
CN110489585A (application CN201910609588.3A)
Authority
CN
China
Prior art keywords
matrix
node
classification
hash codes
supervised learning
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910609588.3A
Other languages
Chinese (zh)
Other versions
CN110489585B (en)
Inventor
胡海峰
熊键
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Posts and Telecommunications
Original Assignee
Nanjing University of Posts and Telecommunications
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Posts and Telecommunications
Priority to CN201910609588.3A priority Critical patent/CN110489585B/en
Publication of CN110489585A publication Critical patent/CN110489585A/en
Application granted granted Critical
Publication of CN110489585B publication Critical patent/CN110489585B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/24Querying
    • G06F16/245Query processing
    • G06F16/2458Special types of queries, e.g. statistical queries, fuzzy queries or distributed queries
    • G06F16/2471Distributed queries
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/51Indexing; Data structures therefor; Storage structures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/58Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2413Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
    • G06F18/24147Distances to closest patterns, e.g. nearest neighbour classification

Abstract

The invention discloses a distributed image search method based on supervised learning. First, class labels are applied to the images, videos, and files in each node's database, and the classification matrix, encoder matrix, hash-code matrix, and corresponding Lagrange multipliers are initialized. An objective function is then built by introducing minimization of classification error and reconstruction error; this objective function is solved and the parameter matrices are updated. Each data node communicates with a central node, which judges whether the transition matrices of the nodes have converged to agreement and updates the Lagrange multipliers; finally, an approximate search is carried out. The invention addresses the problem that large-scale data requires too much storage and computation for centralized training of the algorithm model to remain practical. Because the data nodes exchange only parameter matrices with the central node and never raw data, the problem of excessive transmission is effectively solved while the data on each node remains independent.

Description

Distributed image searching method based on supervised learning
Technical field
The present invention relates to an image search method, and in particular to a distributed image search method, belonging to the field of machine learning.
Background technique
With the continuous development of social networks, e-commerce, and the mobile Internet, the scale of data that must be stored and processed keeps growing, and single-machine systems can no longer meet demand. Internet companies such as Google and Alibaba helped give rise to the two major fields of cloud computing and big data, both of which are applications built on distributed storage. The core of cloud storage is a large-scale distributed storage system at the back end. Big data not only requires storing massive amounts of data, but also analyzing that data with suitable frameworks and tools to extract the useful parts; without distributed storage, big-data analysis would be out of reach. Although distributed systems have been studied for many years, only with the rise of Internet-scale big data in recent years have they been applied widely in engineering practice. A distributed system uses multiple cooperating computers to solve computation and storage problems that a single computer cannot; its biggest difference from a single-machine system is the scale of the problem. Such a system consists of multiple nodes - a server, or a process on a server, is usually called a node - and these nodes are generally not isolated, but communicate with each other and exchange information over a network. In addition, with the rapid development of mobile terminals such as smartphones, which store large amounts of pictures, text, and video, a smartphone can also be regarded as an independent node; smartphones can cooperate in a distributed fashion, via base stations or directly with one another, to improve data-processing capacity.
Supervised learning is a class of machine-learning algorithms that learn or build a model from training data and use that model to make inferences on new examples. The training data consist of input objects (usually vectors) and expected outputs. The output of the learned function can be a continuous value (called regression analysis) or a predicted class label. Another major class of machine-learning algorithms, unsupervised learning, models unlabeled training data directly; note that the data here carry no labels, and the most basic difference from supervised learning is whether the data used for modeling are labeled. Compared with unsupervised learning, the advantage of supervised learning is precisely that it can make full use of the known label information, incorporating more information into the model being built and effectively increasing the model's reliability.
In addition, with the wide spread of the Internet and the development of multimedia technology, data in every industry has grown sharply, and existing information-technology infrastructure must handle enormous databases. In fact, compared with the cost of storage, retrieving relevant content from a large-scale database is the more challenging task, especially for multimedia data: retrieval of audio, image, and video content is more challenging still. When traditional nearest-neighbor algorithms handle large-scale image search, the feature dimension of the sample data can reach thousands; this "curse of dimensionality" leads to heavy memory consumption and slow retrieval. In recent years, hashing, as a representative nearest-neighbor retrieval technique, has been able to meet the special requirements of large-scale retrieval on storage space and retrieval time. The purpose of hashing is to represent an image as a fixed-length binary code, i.e., a hash code, whose bits are usually written as -1/1 or 0/1. Hashing resolves the unreasonable demands that conventional retrieval places on storage space and retrieval time for massive data, greatly reducing both while still achieving good retrieval quality; it has therefore become a sharp tool for big-data problems and has received wide attention in the computer-vision field. However, most current hashing algorithms are centralized and suffer from problems such as a heavy computational load on a single node, so how to apply hashing in a distributed setting is an interesting problem.
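The paragraph above describes hash codes as fixed-length binary codes with bits in {-1, +1}. As a hedged illustration (not part of the patent - the function name `encode` and the use of a plain linear projection followed by a sign are our assumptions), a minimal NumPy sketch of linear hash encoding:

```python
import numpy as np

def encode(X, C):
    """Map d-dimensional features X (n x d) to r-bit hash codes in {-1, +1}
    via a linear encoder matrix C (d x r), then take the sign of each entry."""
    B = np.sign(X @ C)
    B[B == 0] = 1  # resolve exact zeros to +1 so every bit is in {-1, +1}
    return B

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 8))   # 5 samples, 8-dimensional features
C = rng.standard_normal((8, 4))   # project down to 4-bit codes
print(encode(X, C))               # 5 x 4 matrix of +/-1 bits
```

With r-bit codes, comparing two samples reduces to counting differing bits, which is far cheaper than a full distance computation in thousands of dimensions.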
In conclusion not having still for how to search for problem using supervision hash algorithm realization distributed image in the prior art There is disclosed disclosure.
Summary of the invention
The purpose of the present invention is to provide a distributed image search method based on supervised learning, mainly addressing the problems that samples such as images, videos, and text are too numerous to find semantic neighbors accurately, and that gathering all data for centralized training incurs excessive transmission and computation. The main aim of the method is to obtain a globally optimized encoder matrix through distributed training at low computational cost, while protecting the independence of each node's data during that training, and to realize nearest-neighbor search for query samples.
The present invention provides a distributed image search method based on supervised learning, comprising the following steps:
Step 1: apply class labels to the images, videos, and files in each node's database;
Step 2: initialize the classification matrix, encoder matrix, hash-code matrix, and corresponding Lagrange multipliers;
Step 3: build an objective function by introducing minimization of classification error and reconstruction error;
Step 4: solve the above objective function and update the classification matrix, encoder matrix, and hash-code matrix;
Step 5: the data nodes communicate with a central node, which judges whether the transition matrices of the nodes have converged to agreement and updates the Lagrange multipliers;
Step 6: carry out the approximate search process.
As a further limitation of the invention, in step 1 it is assumed that there are N nodes in total, each node holding a database Xi, where Xi denotes the database of the i-th node. The databases on different nodes are mutually independent, and the nodes do not wish to share information; each database contains c class labels, and different samples are given different labels.
As a further limitation of the invention, in step 2 the classification matrix, encoder matrix, hash-code matrix, and corresponding Lagrange multipliers are initialized on every node: the encoder matrix is initialized as a d × r identity matrix, with its corresponding Lagrange multiplier initialized as a d × r all-zero matrix; the classification matrix is initialized as an r × c identity matrix, with its corresponding Lagrange multiplier initialized as an r × c all-zero matrix; and the hash-code matrix is initialized as an r × n matrix whose every element has absolute value 1. Here d denotes the dimension of the original feature space, r the number of code bits, c the number of classes, and n the number of samples.
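The initialization just described can be sketched in NumPy as follows; this is an illustration under our own naming (`init_node`, `Pi`, `Lam` are hypothetical), reading the d × r and r × c "identity matrices" as rectangular identities and filling the r × n hash-code matrix with random signs so every entry has absolute value 1:

```python
import numpy as np

def init_node(d, r, c, n, rng):
    """Per-node initialization: encoder C and classifier W start as
    (rectangular) identity matrices, their Lagrange multipliers as all-zero
    matrices, and the hash-code matrix B with entries in {-1, +1}."""
    C = np.eye(d, r)                          # d x r encoder matrix
    Pi = np.zeros((d, r))                     # multiplier paired with C
    W = np.eye(r, c)                          # r x c classification matrix
    Lam = np.zeros((r, c))                    # multiplier paired with W
    B = np.sign(rng.standard_normal((r, n)))  # r x n, every |entry| == 1
    B[B == 0] = 1
    return C, Pi, W, Lam, B
```

All nodes start from the same initial values, which matches the requirement below that the transition matrices and multipliers be initialized identically across nodes.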
As a further limitation of the invention, in step 3 minimization of classification error and reconstruction error is introduced into the objective function: the encoder matrix maps the original feature space to hash codes so that the classification accuracy based on the hash codes is as high as possible; an orthogonality constraint is added to guarantee the validity of the hash codes while reducing the correlation between them; and, to lower the quantization error, a discrete constraint is added, i.e., each hash bit is forced to equal 1 or -1.
The constructed objective functions are, in order, as follows:
In the above formula, Xi denotes the samples of the i-th node, i.e., database Xi; Ci and Bi denote the encoder matrix and hash-code matrix of the i-th node; Πi denotes the dual variable; ρ is the Lagrange multiplier; and Z is the global parameter introduced for consistency. The constraint in the formula has two parts: the first is the global-consistency constraint of the alternating direction method of multipliers (ADMM), and the second is the constraint that the hash codes be mutually independent:
In the above formula, Yi denotes the sample labels of the i-th node; Wi and Bi denote the classification matrix and hash-code matrix of the i-th node; λ is the Lagrange multiplier; U is the global parameter introduced by ADMM consistency; and the constraint is the global-consistency constraint.
In the above formula, Yi denotes the sample labels of the i-th node; Wi, Bi, and Ci denote the classification matrix, hash-code matrix, and encoder matrix of the i-th node; v is a trade-off parameter; and the added constraint ensures that every intermediate value of the hash codes is discrete.
As a further limitation of the invention, in step 4 the solution for the encoder matrix C involves minimizing the trace of a matrix under an orthogonality constraint, which must be solved using the singular value decomposition.
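Minimizing a matrix trace under an orthogonality constraint, as described for the C-subproblem, is the classic orthogonal Procrustes problem with a standard SVD solution. A hedged sketch (the function name and the exact form of the subproblem are our assumptions, since the patent gives only the general description): with M = U S Vᵀ, the matrix C = U Vᵀ maximizes tr(CᵀM), equivalently minimizes -tr(CᵀM), over matrices with orthonormal columns.

```python
import numpy as np

def solve_orthogonal(M):
    """Minimize -tr(C^T M) subject to C^T C = I via the SVD:
    with M = U S V^T, the minimizer is C = U V^T."""
    U, _, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ Vt
```

The returned C has exactly orthonormal columns, which is what the orthogonality constraint on the encoder matrix demands.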
As a further limitation of the invention, in step 5, when the encoder matrix C and classification matrix W are optimized in a distributed fashion, there is one central node in addition to the N data nodes. It performs the global updates of W and C, and parameter information is transferred between the central node and the data nodes to guarantee the consistency of the parameters.
As a further limitation of the invention, in step 6 a new query sample is broadcast to all nodes; after mapping by the encoder matrix, the Hamming distances between the new sample and the node samples are computed, and the samples corresponding to the k smallest distances are taken as the result of the approximate search.
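The distance computation in step 6 can be illustrated as follows (a sketch under our assumptions: with codes in {-1, +1}, the Hamming distance between two r-bit codes equals (r - their inner product) / 2, since matching bits contribute +1 and differing bits -1):

```python
import numpy as np

def hamming_topk(b_query, B_node, k):
    """Hamming distances between a query code b_query (length r) and node
    codes B_node (r x n), all entries in {-1, +1}; returns the indices of
    the k nearest samples."""
    r = b_query.shape[0]
    dist = (r - b_query @ B_node) / 2  # number of differing bits per column
    return np.argsort(dist)[:k]
```

Because the codes are short binary vectors, this ranking is cheap even for large n, which is the point of replacing raw-feature comparison with hash codes.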
Compared with the prior art, adopting the above technical scheme gives the invention the following technical effects:
1. It addresses the one-sidedness of many traditional nearest-neighbor search methods, which neither take label information into account when finding neighbors nor apply discretization in intermediate steps, and which therefore perform poorly in practical approximate-search applications;
2. It addresses the problem that the scale required to store and process large data exceeds the computing capacity of a single node, so that centralized training of the algorithm model is no longer suitable;
3. Communication between nodes is carried out with parameter matrices, and raw information is never exchanged; this effectively solves the problem of excessive transmission while still achieving good performance.
Detailed description of the invention
Fig. 1 is the system framework diagram of the method.
Fig. 2 is the distributed training flowchart of the method.
Fig. 3 is the neighbor-search flowchart of the method.
Specific embodiment
The technical solution of the present invention is described in further detail below with reference to the accompanying drawings:
The system framework of the method is shown in Fig. 1. The whole method can be divided into a distributed training process and an approximate search process, whose detailed flows are shown in Figs. 2 and 3 respectively: the first through fifth steps follow the flow of Fig. 2, and the sixth step follows the flow of Fig. 3.
In the first step, class labels are applied to the images, videos, files, etc., in each node's database.
Assume there are N nodes in total, each node holding a database Xi, where Xi denotes the database of the i-th node. The databases on different nodes are mutually independent, and the nodes do not wish to share information. Each database holds n samples and c class labels, with different samples given different labels.
In the second step, the classification matrix, encoder matrix, and Lagrange multiplier matrices are initialized, along with the hash-code matrix.
The classification matrix, encoder matrix, and corresponding Lagrange multipliers are initialized on each node, together with the hash-code matrix. The encoder matrix Ci of the i-th node is initialized as a d × r identity matrix, and the Lagrange multiplier used when node i optimizes C is initialized as a d × r all-zero matrix; the classification matrix Wi is initialized as an r × c identity matrix, and the Lagrange multiplier for optimizing W as an r × c all-zero matrix; the hash-code matrix B is initialized as an r × n matrix whose every element has absolute value 1. Here d denotes the dimension of the original feature space, r the number of code bits, c the number of classes, and n the number of samples. The transition matrices and Lagrange multipliers of all nodes are given the same initial values.
In the third step, the objective function is built by introducing minimization of classification error and reconstruction error, together with discretization and orthogonality constraints.
It should be noted that the focus of the invention is on obtaining the encoder matrix and its use, so the original objective function and the detailed optimization procedure are not listed at length; only the results after optimizing the objective function are given. The objective function built by the i-th node for optimizing Ci is as follows:
In the above formula, Xi denotes the samples of the i-th node; Ci and Bi denote the encoder matrix and hash-code matrix of the i-th node; ρ is the Lagrange multiplier; Πi denotes the dual variable; and Z is the global parameter introduced by ADMM consistency. The constraint in the formula has two parts: the first is the global-consistency constraint of the alternating direction method of multipliers (ADMM), and the second is the constraint that the hash codes be mutually independent.
The objective function built by the i-th node for optimizing Wi is as follows:
In the above formula, Yi denotes the sample labels of the i-th node; Wi and Bi denote the classification matrix and hash-code matrix of the i-th node; λ is the Lagrange multiplier; U is the global parameter introduced by ADMM consistency; and the constraint is the global-consistency constraint.
The objective function built by the i-th node for optimizing Bi is as follows:
In the above formula, Yi denotes the sample labels of the i-th node; Wi, Bi, and Ci denote the classification matrix, hash-code matrix, and encoder matrix of the i-th node; v is a trade-off parameter; and the added constraint ensures that every intermediate value of the hash codes is discrete.
In the fourth step, the objective functions are solved and the parameter matrices updated.
The three objective functions above are solved separately: Wi and Ci are both optimized with the alternating direction method of multipliers (ADMM), while Bi is solved directly using each node's own data, so the optimization across nodes is fully distributed.
In the fifth step, the data nodes communicate with the central node, which judges whether the transition matrices of the nodes have converged to agreement and updates the Lagrange multipliers.
Communication between a data node and the global node means that each node passes the parameter matrix it has computed to the central node; the central node performs a global optimization and then propagates the globally optimized parameter matrix back to each data node for the next iterative update. This exchange of parameters guarantees that the training process respects the consistency principle.
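The exchange described above matches the consensus step of ADMM: nodes send their local parameter matrices, the central node aggregates them into a global value, and each node's multiplier is updated from the local-global residual. A hedged NumPy sketch (plain averaging and the scaled-residual multiplier update are our assumptions; the patent does not spell out the exact central-node rule):

```python
import numpy as np

def consensus_round(local_mats, multipliers, rho=1.0):
    """One central-node round of consensus ADMM (sketch): average the nodes'
    local parameter matrices into a global Z, then update each node's
    Lagrange multiplier by the scaled residual (local - global)."""
    Z = np.mean(local_mats, axis=0)  # central-node aggregation
    new_mults = [P + rho * (Ci - Z) for P, Ci in zip(multipliers, local_mats)]
    return Z, new_mults
```

When all local matrices already agree, the residuals vanish and the multipliers stop changing, which is exactly the convergence test of the fifth step: repeat the third step until the transition matrices reach agreement.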
If the transition matrices of the nodes have not yet reached agreement, the Lagrange multipliers are updated iteratively and the third step is repeated.
The sixth step is the approximate search process.
A new query sample xc is input to all distributed nodes. Suppose it enters the i-th node; using the encoder matrix Ci obtained by the distributed training process, the query sample xc and the data node's own samples are encoded, and the Hamming distance between xc and each sample (i.e., the number of positions at which the hash codes differ) is computed. After obtaining the distances between xc and the other samples on each node, these distances are sorted and the k smallest are taken; the samples corresponding to those k smallest distances are the result of the approximate search.
The above are only specific embodiments of the present invention, but the protection scope of the present invention is not limited thereto. Any person familiar with the technology can readily conceive of transformations or replacements within the technical scope disclosed by the invention, and all such variations shall be covered within the scope of the invention. Therefore, the protection scope of the invention shall be subject to the protection scope defined by the claims.

Claims (7)

1. A distributed image search method based on supervised learning, characterized by comprising the following steps:
Step 1: apply class labels to the images, videos, and files in each node's database;
Step 2: initialize the classification matrix, encoder matrix, hash-code matrix, and corresponding Lagrange multipliers;
Step 3: build an objective function by introducing minimization of classification error and reconstruction error;
Step 4: solve the above objective function and update the classification matrix, encoder matrix, and hash-code matrix;
Step 5: the data nodes communicate with a central node, which judges whether the transition matrices of the nodes have converged to agreement and updates the Lagrange multipliers;
Step 6: carry out the approximate search process.
2. The distributed image search method based on supervised learning according to claim 1, characterized in that, in step 1, it is assumed that there are N nodes in total, each node holding a database Xi, where Xi denotes the database of the i-th node; the databases on different nodes are mutually independent, the nodes do not wish to share information, each database contains c class labels, and different samples are given different labels.
3. The distributed image search method based on supervised learning according to claim 2, characterized in that, in step 2, the classification matrix, encoder matrix, hash-code matrix, and corresponding Lagrange multipliers are initialized on every node: the encoder matrix is initialized as a d × r identity matrix with its corresponding Lagrange multiplier initialized as a d × r all-zero matrix; the classification matrix is initialized as an r × c identity matrix with its corresponding Lagrange multiplier initialized as an r × c all-zero matrix; and the hash-code matrix is initialized as an r × n matrix whose every element has absolute value 1, where d denotes the dimension of the original feature space, r the number of code bits, c the number of classes, and n the number of samples.
4. The distributed image search method based on supervised learning according to claim 3, characterized in that, in step 3, minimization of classification error and reconstruction error is introduced into the objective function, and the encoder matrix maps the original feature space to hash codes so that the classification accuracy based on the hash codes is as high as possible, guaranteeing the validity of the hash codes; the constructed objective functions are, in order, as follows:
In the above formula, Xi denotes the samples of the i-th node, i.e., the above-mentioned database Xi; Ci and Bi denote the encoder matrix and hash-code matrix of the i-th node; Πi denotes the dual variable; ρ is the Lagrange multiplier; and Z is the global parameter introduced for consistency. The constraint in the formula has two parts: the first is the global-consistency constraint of the alternating direction method of multipliers (ADMM), and the second is the constraint that the hash codes be mutually independent:
In the above formula, Yi denotes the sample labels of the i-th node; Wi and Bi denote the classification matrix and hash-code matrix of the i-th node; λ is the Lagrange multiplier; U is the global parameter introduced by ADMM consistency; and the constraint is the global-consistency constraint.
In the above formula, Yi denotes the sample labels of the i-th node; Wi, Bi, and Ci denote the classification matrix, hash-code matrix, and encoder matrix of the i-th node; v is a trade-off parameter; and the added constraint ensures that every intermediate value of the hash codes is discrete.
5. The distributed image search method based on supervised learning according to claim 4, characterized in that, in step 4, solving the encoder matrix C involves minimizing the trace of a matrix under an orthogonality constraint, which must be solved using the singular value decomposition.
6. The distributed image search method based on supervised learning according to claim 5, characterized in that, in step 5, when the encoder matrix C and classification matrix W are optimized in a distributed fashion, there is one central node in addition to the N data nodes; it performs the global updates of W and C, and parameter information is transferred between the central node and the data nodes to guarantee the consistency of the parameters.
7. The distributed image search method based on supervised learning according to claim 6, characterized in that, in step 6, a new query sample is broadcast to all nodes; after mapping by the encoder matrix, the Hamming distances between the new sample and the node samples are computed, and the samples corresponding to the k smallest distances are taken as the result of the approximate search.
CN201910609588.3A 2019-07-08 2019-07-08 Distributed image searching method based on supervised learning Active CN110489585B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910609588.3A CN110489585B (en) 2019-07-08 2019-07-08 Distributed image searching method based on supervised learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910609588.3A CN110489585B (en) 2019-07-08 2019-07-08 Distributed image searching method based on supervised learning

Publications (2)

Publication Number Publication Date
CN110489585A true CN110489585A (en) 2019-11-22
CN110489585B CN110489585B (en) 2022-12-02

Family

ID=68546684

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910609588.3A Active CN110489585B (en) 2019-07-08 2019-07-08 Distributed image searching method based on supervised learning

Country Status (1)

Country Link
CN (1) CN110489585B (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111159443A (en) * 2019-12-31 2020-05-15 深圳云天励飞技术有限公司 Image characteristic value searching method and device and electronic equipment
CN111553418A (en) * 2020-04-28 2020-08-18 腾讯科技(深圳)有限公司 Method and device for detecting neuron reconstruction errors and computer equipment
CN111832637A (en) * 2020-06-30 2020-10-27 南京邮电大学 Distributed deep learning classification method based on alternative direction multiplier method ADMM
CN111881928A (en) * 2020-05-19 2020-11-03 杭州中奥科技有限公司 Coding model training method and device, storage medium and electronic equipment
CN112199520A (en) * 2020-09-19 2021-01-08 复旦大学 Cross-modal Hash retrieval algorithm based on fine-grained similarity matrix
CN112965722A (en) * 2021-03-03 2021-06-15 深圳华大九天科技有限公司 Verilog-A model optimization method, electronic device and computer readable storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150234779A1 (en) * 2014-02-20 2015-08-20 Mitsubishi Electric Research Laboratories, Inc. Method for Solving Quadratic Programs for Convex Sets with Linear Equalities by an Alternating Direction Method of Multipliers with Optimized Step Sizes
CN107315765A (en) * 2017-05-12 2017-11-03 南京邮电大学 A kind of method of the concentrated-distributed proximity search of extensive picture

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150234779A1 (en) * 2014-02-20 2015-08-20 Mitsubishi Electric Research Laboratories, Inc. Method for Solving Quadratic Programs for Convex Sets with Linear Equalities by an Alternating Direction Method of Multipliers with Optimized Step Sizes
CN107315765A (en) * 2017-05-12 2017-11-03 南京邮电大学 A kind of method of the concentrated-distributed proximity search of extensive picture

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
YUE GAO et al.: "Weakly Supervised Visual Dictionary Learning by Harnessing Image Attributes", IEEE Transactions on Image Processing *
XIE Hui: "Content-based image reranking in search engines", Computer Applications *

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111159443A (en) * 2019-12-31 2020-05-15 深圳云天励飞技术有限公司 Image characteristic value searching method and device and electronic equipment
CN111553418A (en) * 2020-04-28 2020-08-18 腾讯科技(深圳)有限公司 Method and device for detecting neuron reconstruction errors and computer equipment
CN111881928A (en) * 2020-05-19 2020-11-03 杭州中奥科技有限公司 Coding model training method and device, storage medium and electronic equipment
CN111881928B (en) * 2020-05-19 2022-07-29 杭州中奥科技有限公司 Coding model training method and device, storage medium and electronic equipment
CN111832637A (en) * 2020-06-30 2020-10-27 南京邮电大学 Distributed deep learning classification method based on alternative direction multiplier method ADMM
CN111832637B (en) * 2020-06-30 2022-08-30 南京邮电大学 Distributed deep learning classification method based on alternating direction multiplier method ADMM
CN112199520A (en) * 2020-09-19 2021-01-08 复旦大学 Cross-modal Hash retrieval algorithm based on fine-grained similarity matrix
CN112199520B (en) * 2020-09-19 2022-07-22 复旦大学 Cross-modal Hash retrieval algorithm based on fine-grained similarity matrix
CN112965722A (en) * 2021-03-03 2021-06-15 深圳华大九天科技有限公司 Verilog-A model optimization method, electronic device and computer readable storage medium

Also Published As

Publication number Publication date
CN110489585B (en) 2022-12-02

Similar Documents

Publication Publication Date Title
CN110489585A (en) Distributed image searching method based on supervised learning
Wu et al. Unsupervised Deep Hashing via Binary Latent Factor Models for Large-scale Cross-modal Retrieval.
CN103678431B (en) A kind of recommendation method to be scored based on standard label and project
CN110677284B (en) Heterogeneous network link prediction method based on meta path
CN111325326A (en) Link prediction method based on heterogeneous network representation learning
CN104731962A (en) Method and system for friend recommendation based on similar associations in social network
CN112199532B (en) Zero sample image retrieval method and device based on Hash coding and graph attention machine mechanism
CN111310074B (en) Method and device for optimizing labels of interest points, electronic equipment and computer readable medium
Cheng et al. Bridging multimedia heterogeneity gap via graph representation learning for cross-modal retrieval
CN111259263A (en) Article recommendation method and device, computer equipment and storage medium
CN111832637B (en) Distributed deep learning classification method based on alternating direction multiplier method ADMM
CN112131261B (en) Community query method and device based on community network and computer equipment
CN109919172A (en) A kind of clustering method and device of multi-source heterogeneous data
CN112380299A (en) Relational network construction method, device and storage medium
WO2022188646A1 (en) Graph data processing method and apparatus, and device, storage medium and program product
CN108509651B (en) The distributed approximation searching method with secret protection based on semantic consistency
CN112288154A (en) Block chain service reliability prediction method based on improved neural collaborative filtering
CN115600017A (en) Feature coding model training method and device and media object recommendation method and device
CN115601745A (en) Multi-view three-dimensional object identification method facing application end
CN111459990B (en) Object processing method, system, computer readable storage medium and computer device
Zhang et al. End‐to‐end generation of structural topology for complex architectural layouts with graph neural networks
CN114925210A (en) Knowledge graph construction method, device, medium and equipment
CN114398980A (en) Cross-modal Hash model training method, encoding method, device and electronic equipment
CN115631008A (en) Commodity recommendation method, commodity recommendation device, commodity recommendation equipment and commodity recommendation medium
CN112561599A (en) Click rate prediction method based on attention network learning and fusing domain feature interaction

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant