WO2021012691A1 - Method and apparatus for retrieving images - Google Patents

Method and apparatus for retrieving images

Info

Publication number
WO2021012691A1
WO2021012691A1 · PCT/CN2020/080263
Authority
WO
WIPO (PCT)
Prior art keywords
matrix
image
matching
degree
target
Prior art date
Application number
PCT/CN2020/080263
Other languages
English (en)
French (fr)
Inventor
郭忠强
Original Assignee
北京京东振世信息技术有限公司
Priority date
Filing date
Publication date
Application filed by 北京京东振世信息技术有限公司
Priority to KR1020227003264A (published as KR20220018633A)
Priority to US17/628,391 (published as US20220292132A1)
Priority to JP2022504246A (published as JP2022541832A)
Publication of WO2021012691A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/58Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/583Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
    • G06F16/5846Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using extracted text
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/53Querying
    • G06F16/532Query formulation, e.g. graphical querying
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/53Querying
    • G06F16/535Filtering based on additional data, e.g. user or group profiles
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/56Information retrieval; Database structures therefor; File system structures therefor of still image data having vectorial format
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/58Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/583Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/58Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/5866Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using information manually generated, e.g. tags, keywords, comments, manually generated location and time information
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/73Querying
    • G06F16/732Query formulation
    • G06F16/7335Graphical querying, e.g. query-by-region, query-by-sketch, query-by-trajectory, GUIs for designating a person/face/object as a query predicate
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/46Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462Salient features, e.g. scale invariant feature transforms [SIFT]
    • G06V10/464Salient features, e.g. scale invariant feature transforms [SIFT] using a plurality of salient features, e.g. bag-of-words [BoW] representations
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/761Proximity, similarity or dissimilarity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/7715Feature extraction, e.g. by transforming the feature space, e.g. multi-dimensional scaling [MDS]; Mappings, e.g. subspace methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06V10/7753Incorporation of unlabelled data, e.g. multiple instance learning [MIL]
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Definitions

  • the embodiments of the present disclosure relate to the field of computer technology, in particular to methods and devices for retrieving images.
  • image retrieval usually includes text-based image retrieval and content-based image retrieval.
  • text-based image retrieval usually uses text description to describe the characteristics of the image, and matches the text description corresponding to each image in the image library to determine the retrieval result.
  • Content-based image retrieval is usually based on the color, texture, layout and other characteristics of the image, and the corresponding color, texture, layout and other characteristics of each image in the image library are matched to determine the retrieval result.
  • because the text description of an image is usually subjective, it can affect the accuracy of the retrieval results.
  • because the original image has relatively rich features such as color and texture, some existing content-based image retrieval approaches usually require the user to provide an original image of the object to be retrieved.
  • the features such as color and texture extracted from the image are generally objective description information of the image, and it is difficult to express the semantic information of the image.
  • the embodiments of the present disclosure propose methods and devices for retrieving images.
  • the embodiments of the present disclosure provide a method for retrieving images, the method including: obtaining a first matrix obtained by performing feature extraction on a sketch of a target item; obtaining a second matrix composed of word vectors of keywords in a keyword set corresponding to the target item, where the keywords in the keyword set are used to describe the target item; obtaining a third matrix set obtained by performing feature extraction on each image in an image set; for a third matrix in the third matrix set, determining, according to the degree of matching between the first matrix and the third matrix and the degree of matching between the second matrix and the third matrix, the degree of matching between the item presented by the image corresponding to the third matrix and the target item; and, based on the determined matching degrees, selecting a preset number of images from the image set and sending the selected images.
  • determining the degree of matching between the item presented by the image corresponding to the third matrix and the target item includes: obtaining a first preset weight of the first matrix and a second preset weight of the second matrix; and, based on the obtained first and second preset weights, determining the degree of matching between the item presented by the image corresponding to the third matrix and the target item according to the weighted sum of the degree of matching between the first matrix and the third matrix and the degree of matching between the second matrix and the third matrix.
  • the degree of matching between the first matrix and the third matrix and the degree of matching between the second matrix and the third matrix are determined by the following steps: taking the first matrix, the second matrix, and the third matrix respectively as a target matrix, encoding the target matrix to obtain a first encoding matrix, a second encoding matrix, and a third encoding matrix, where the encoding processing is used to map the target matrix to a binary encoding matrix; determining the degree of matching between the first encoding matrix and the third encoding matrix as the degree of matching between the first matrix and the third matrix; and determining the degree of matching between the second encoding matrix and the third encoding matrix as the degree of matching between the second matrix and the third matrix.
  • the encoding processing includes: for each row vector S of the target matrix, performing the following steps: splitting the elements included in S into C groups, where C represents the number of columns of the encoding matrix; for each of the C groups, determining a statistical feature of the values of the elements included in the group; in response to determining that the obtained statistical feature is greater than a target threshold T, determining that the coding value of the group is 1; in response to determining that the obtained statistical feature is less than T, determining that the coding value of the group is 0; and forming one row of the encoding matrix from the coding values corresponding to the C groups, to obtain the encoding matrix.
  • splitting the elements included in S into C groups includes: determining the quotient of the number of elements included in S and C, and determining the number of elements included in each of the C groups according to the determined quotient.
  • the encoding processing includes: performing the following update processing on each row vector of the target matrix to obtain an updated target matrix: normalizing the row vector, and determining an update value corresponding to each element contained in the row vector according to the normalization result of the row vector, where the update value corresponding to each element is positively correlated with the normalization result corresponding to that element; then, for each row vector S of the updated target matrix, performing the following steps: splitting the elements included in S into C groups, where C represents the number of columns of the encoding matrix; for each of the C groups, determining a statistical feature of the values of the elements included in the group; in response to determining that the obtained statistical feature is greater than a target threshold T, determining that the coding value of the group is 1; in response to determining that the obtained statistical feature is less than T, determining that the coding value of the group is 0; and forming one row of the encoding matrix from the coding values corresponding to the C groups, to obtain the encoding matrix.
  • determining the update value corresponding to each element contained in the row vector according to the normalization result of the row vector includes: determining the update value corresponding to each element according to the normalization result of the row vector and a preset adjustment parameter ω, where the update value of each element contained in the row vector is positively correlated with ω.
  • determining the update value corresponding to each element contained in the row vector includes: for each element contained in the row vector, determining the square root of the product of the normalization result corresponding to that element and ω as the update value corresponding to the element.
  • the first matrix is obtained by the following steps: splitting the sketch into at least two sub-images; using a pre-trained convolutional neural network to perform feature extraction on the at least two sub-images respectively, to obtain feature vectors respectively corresponding to the at least two sub-images; and determining a matrix composed of the feature vectors corresponding to the at least two sub-images as the first matrix.
  • the convolutional neural network is trained through the following steps: obtaining a sketch set, and obtaining a matching image set corresponding to each sketch in the sketch set, where a sketch and the matching images in its corresponding matching image set present the same item; selecting sketches from the sketch set, and performing the following training steps: using an initial model to perform feature extraction on the selected sketches and on each image in a target image set, to obtain output matrices corresponding to the sketches and to each image in the target image set; determining the degree of matching between the output matrix corresponding to a sketch and the output matrix corresponding to each image in the target image set, and selecting the images whose corresponding matching degree is greater than a preset threshold; according to the selected images and the matching image set corresponding to the input sketch, determining the recall rate and/or precision rate corresponding to the selected images, and determining whether training of the initial model is completed according to the determined recall rate and/or precision rate; in response to determining that training of the initial model is completed, determining the trained initial model as the convolutional neural network; and, in response to determining that training is not completed, adjusting the parameters of the initial model and continuing the training steps.
  • an embodiment of the present disclosure provides an apparatus for retrieving images, the apparatus including: an acquisition unit configured to acquire a first matrix obtained by performing feature extraction on a sketch of a target item; the acquisition unit being further configured to acquire a second matrix composed of word vectors of keywords in a keyword set corresponding to the target item, where the keywords in the keyword set are used to describe the target item; the acquisition unit being further configured to acquire a third matrix set obtained by performing feature extraction on each image in an image set; a determining unit configured to, for a third matrix in the third matrix set, determine, according to the degree of matching between the first matrix and the third matrix and the degree of matching between the second matrix and the third matrix, the degree of matching between the item presented by the image corresponding to the third matrix and the target item; and a sending unit configured to select a preset number of images from the image set based on the determined matching degrees, and to send the selected images.
  • the determining unit is further configured to: obtain a first preset weight of the first matrix and a second preset weight of the second matrix; and, based on the obtained first and second preset weights, determine the degree of matching between the item presented by the image corresponding to the third matrix and the target item according to the weighted sum of the degree of matching between the first matrix and the third matrix and the degree of matching between the second matrix and the third matrix.
  • the degree of matching between the first matrix and the third matrix and the degree of matching between the second matrix and the third matrix are determined by the following steps: taking the first matrix, the second matrix, and the third matrix respectively as a target matrix, encoding the target matrix to obtain a first encoding matrix, a second encoding matrix, and a third encoding matrix, where the encoding processing is used to map the target matrix to a binary encoding matrix; determining the degree of matching between the first encoding matrix and the third encoding matrix as the degree of matching between the first matrix and the third matrix; and determining the degree of matching between the second encoding matrix and the third encoding matrix as the degree of matching between the second matrix and the third matrix.
  • the encoding processing includes: for each row vector S of the target matrix, performing the following steps: splitting the elements included in S into C groups, where C represents the number of columns of the encoding matrix; for each of the C groups, determining a statistical feature of the values of the elements included in the group; in response to determining that the obtained statistical feature is greater than a target threshold T, determining that the coding value of the group is 1; in response to determining that the obtained statistical feature is less than T, determining that the coding value of the group is 0; and forming one row of the encoding matrix from the coding values corresponding to the C groups, to obtain the encoding matrix.
  • splitting the elements included in S into C groups includes: determining the quotient of the number of elements included in S and C, and determining the number of elements included in each of the C groups according to the determined quotient.
  • the encoding processing includes: performing the following update processing on each row vector of the target matrix to obtain an updated target matrix: normalizing the row vector, and determining an update value corresponding to each element contained in the row vector according to the normalization result of the row vector, where the update value corresponding to each element is positively correlated with the normalization result corresponding to that element; then, for each row vector S of the updated target matrix, performing the following steps: splitting the elements included in S into C groups, where C represents the number of columns of the encoding matrix; for each of the C groups, determining a statistical feature of the values of the elements included in the group; in response to determining that the obtained statistical feature is greater than a target threshold T, determining that the coding value of the group is 1; in response to determining that the obtained statistical feature is less than T, determining that the coding value of the group is 0; and forming one row of the encoding matrix from the coding values corresponding to the C groups, to obtain the encoding matrix.
  • determining the update value corresponding to each element contained in the row vector according to the normalization result of the row vector includes: determining the update value corresponding to each element according to the normalization result of the row vector and a preset adjustment parameter ω, where the update value of each element contained in the row vector is positively correlated with ω.
  • determining the update value corresponding to each element contained in the row vector includes: for each element contained in the row vector, determining the square root of the product of the normalization result corresponding to that element and ω as the update value corresponding to the element.
  • the first matrix is obtained by the following steps: splitting the sketch into at least two sub-images; using a pre-trained convolutional neural network to perform feature extraction on the at least two sub-images respectively, to obtain feature vectors respectively corresponding to the at least two sub-images; and determining a matrix composed of the feature vectors corresponding to the at least two sub-images as the first matrix.
  • the convolutional neural network is trained through the following steps: obtaining a sketch set, and obtaining a matching image set corresponding to each sketch in the sketch set, where a sketch and the matching images in its corresponding matching image set present the same item; selecting sketches from the sketch set, and performing the following training steps: using an initial model to perform feature extraction on the selected sketches and on each image in a target image set, to obtain output matrices corresponding to the sketches and to each image in the target image set; determining the degree of matching between the output matrix corresponding to a sketch and the output matrix corresponding to each image in the target image set, and selecting the images whose corresponding matching degree is greater than a preset threshold; according to the selected images and the matching image set corresponding to the input sketch, determining the recall rate and/or precision rate corresponding to the selected images, and determining whether training of the initial model is completed according to the determined recall rate and/or precision rate; in response to determining that training of the initial model is completed, determining the trained initial model as the convolutional neural network; and, in response to determining that training is not completed, adjusting the parameters of the initial model and continuing the training steps.
  • an embodiment of the present disclosure provides an electronic device, the electronic device including: one or more processors; and a storage device for storing one or more programs; where, when the one or more programs are executed by the one or more processors, the one or more processors are caused to implement the method described in any implementation of the first aspect.
  • an embodiment of the present disclosure provides a computer-readable medium on which a computer program is stored, and when the computer program is executed by a processor, the method as described in any implementation manner in the first aspect is implemented.
  • the method and apparatus for retrieving images provided by the embodiments of the present disclosure match each image in the image set against the sketch of an item and its corresponding keywords, and determine the retrieval result according to the matching results. When the user cannot provide an original image of the item, retrieval can thus still be performed from a sketch of the item; and because the item's keywords are used at the same time, semantic information of the image is integrated into the retrieval process, which helps reduce the false detection rate and the missed detection rate of images, thereby improving the accuracy of the retrieval results.
  • FIG. 1 is an exemplary system architecture diagram in which an embodiment of the present disclosure can be applied
  • Fig. 2 is a flowchart of an embodiment of a method for retrieving images according to the present disclosure
  • Fig. 3 is a schematic diagram of an application scenario of the method for retrieving images according to an embodiment of the present disclosure
  • Fig. 4 is a flowchart of still another embodiment of the method for retrieving images according to the present disclosure
  • Fig. 5 is a schematic structural diagram of an embodiment of an apparatus for retrieving images according to the present disclosure
  • Fig. 6 is a schematic structural diagram of an electronic device suitable for implementing embodiments of the present disclosure.
  • FIG. 1 shows an exemplary architecture 100 to which an embodiment of the method for retrieving images or the apparatus for retrieving images of the present disclosure can be applied.
  • the system architecture 100 may include terminal devices 101, 102, 103, a network 104, and a server 105.
  • the network 104 is used to provide a medium for communication links between the terminal devices 101, 102, 103 and the server 105.
  • the network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables.
  • the terminal devices 101, 102, 103 interact with the server 105 through the network 104 to receive or send messages and so on.
  • Various client applications, such as browser applications, search applications, and image processing applications, may be installed on the terminal devices 101, 102, 103.
  • the terminal devices 101, 102, and 103 may be hardware or software.
  • the terminal devices 101, 102, 103 can be various electronic devices, including but not limited to smart phones, tablet computers, e-book readers, laptop computers, desktop computers, and so on.
  • When the terminal devices 101, 102, and 103 are software, they can be installed in the electronic devices listed above, and can be implemented as multiple pieces of software or software modules (for example, for providing distributed services) or as a single piece of software or software module. No specific limitation is made here.
  • the server 105 may be a server that provides various services, for example, a back-end server that provides back-end support for client applications installed on the terminal devices 101, 102, and 103.
  • the server 105 can receive the sketch and the keyword set of a target item sent by the terminal devices 101, 102, 103, process the sketch and the keyword set respectively, select images matching the sketch and the keyword set from the image set according to the processing results, and send the selected images to the terminal devices 101, 102, 103.
  • sketches and keyword sets of the target items mentioned above can also be directly stored locally on the server 105, and the server 105 can directly extract and process the sketches and keyword sets of the target items stored locally.
  • the method for retrieving images provided by the embodiments of the present disclosure is generally executed by the server 105, and accordingly, the device for retrieving images is generally set in the server 105.
  • image processing applications can also be installed in the terminal devices 101, 102, and 103.
  • the terminal devices 101, 102, and 103 can also process images based on the installed image processing applications.
  • the method for retrieving images can also be executed by the terminal devices 101, 102, 103, and correspondingly, the apparatus for retrieving images can also be provided in the terminal devices 101, 102, 103.
  • the exemplary system architecture 100 may not include the server 105 and the network 104.
  • the server 105 may be hardware or software.
  • When the server is hardware, it can be implemented as a distributed server cluster composed of multiple servers, or as a single server.
  • When the server 105 is software, it can be implemented as multiple pieces of software or software modules (for example, for providing distributed services), or as a single piece of software or software module. No specific limitation is made here.
  • the numbers of terminal devices, networks, and servers in FIG. 1 are merely illustrative; there can be any number of terminal devices, networks, and servers according to implementation needs.
  • FIG. 2 shows a flow 200 of an embodiment of the method for retrieving images according to the present disclosure.
  • the method for retrieving images includes the following steps:
  • Step 201 Obtain a first matrix obtained by feature extraction on the sketch of the target item.
  • the target item may be the user's retrieval target, that is, the item that the user expects to be found in the retrieved image.
  • the sketch of the target item can be used to initially express the design or physical concept of the item.
  • the sketch of the target article may present the structure and size of the article, as well as the relative positional relationship of the various parts of the article.
  • the user may draw the sketch of the target item, or may select the sketch of the target item from an existing sketch library (such as the Sketchy image library).
  • extracting features of the sketch may refer to extracting some image information of the sketch.
  • the sketch can be analyzed and processed to determine whether each pixel of the sketch can express a certain feature of the sketch.
  • various existing image feature extraction methods can be used to extract features of the sketch, such as feature extraction methods based on SIFT (Scale-Invariant Feature Transform) or SURF (Speeded-Up Robust Features).
  • a feature extraction method based on deep learning can also be used to extract features of the sketch of the target item.
  • the feature extraction result of the sketch may be a feature vector.
  • the extracted feature vector can be regarded as the aforementioned first matrix.
  • the sketch may be split into at least two sub-images first. Then, a pre-trained convolutional neural network is used to perform feature extraction on at least two sub-images, and the feature vectors corresponding to the at least two sub-images can be obtained, and then the matrix composed of feature vectors corresponding to the at least two sub-images can be regarded as the above-mentioned first A matrix.
  • the split method of the sketch can be flexibly selected. For example, you can use the geometric center of the sketch as the center point, and split the sketch into four sub-images evenly from the horizontal and vertical directions.
  • the manner in which the feature vectors corresponding to the obtained at least two sub-images form a matrix can be preset by a technician. For example, the feature vectors corresponding to the sub-images can be arranged as rows in a specified order to obtain the first matrix.
  • in this way, the subsequent matching process can match image areas at corresponding positions in a targeted manner; that is, the matching process carries more accurate location information, which helps increase the accuracy of the matching results and, in turn, the accuracy of the retrieval results.
  • the convolutional neural network can be various types of pre-trained neural networks (such as deep learning models) for extracting image features.
  • a convolutional neural network can be composed of several convolutional layers, pooling layers, and fully connected layers. The convolutional layers perform convolution operations on the input image to extract features; the pooling layers compress the output of the convolutional layers to extract the main features; and the fully connected layers integrate the extracted local features, mapping the distributed feature representation learned by the preceding layers to the sample label space.
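  • as an illustration of the sub-image scheme above, the following minimal Python sketch assembles a first matrix from the four quadrants of a sketch image; the torchvision ResNet-18 backbone, the 224×224 input size, and the resulting 512-dimensional row vectors are assumptions made for this example only, since the disclosure does not fix a particular network architecture:

      # Illustrative sketch only: a torchvision ResNet-18 backbone stands in
      # for the pre-trained convolutional neural network described above.
      import torch
      import torchvision.models as models
      import torchvision.transforms as T
      from PIL import Image

      backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
      backbone.fc = torch.nn.Identity()   # keep the 512-d pooled feature vector
      backbone.eval()

      to_tensor = T.Compose([T.Resize((224, 224)), T.ToTensor()])

      def first_matrix(sketch: Image.Image) -> torch.Tensor:
          """Split the sketch into four quadrants around its geometric center,
          extract one feature vector per quadrant, and stack them as rows."""
          sketch = sketch.convert("RGB")
          w, h = sketch.size
          boxes = [(0, 0, w // 2, h // 2), (w // 2, 0, w, h // 2),
                   (0, h // 2, w // 2, h), (w // 2, h // 2, w, h)]
          rows = []
          with torch.no_grad():
              for box in boxes:   # fixed order, so each row keeps position info
                  sub = to_tensor(sketch.crop(box)).unsqueeze(0)
                  rows.append(backbone(sub).squeeze(0))
          return torch.stack(rows)   # shape: (4, 512)

  • the same routine could equally produce the third matrix of each image in the image set, as in the scenario of FIG. 3 described later.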
  • the convolutional neural network used to extract the features of the image can be trained through the following steps:
  • Step one is to obtain a sketch set, and obtain a matching image set corresponding to each sketch in the sketch set.
  • various image processing applications can be used to generate a large number of sketches to form a sketch set, or a sketch set can be obtained from a third-party data platform.
  • a sketch and the matching images in its corresponding matching image set present the same item.
  • the matching image in the matching image set corresponding to the sketch can be designated by a technician, or can be obtained from a third-party data platform.
  • Step two select sketches from the sketch set, and perform the following training steps 1 to 3:
  • the way of selecting sketches from the sketch set can be flexibly set according to different application scenarios. For example, you can randomly select a preset number of sketches from a sketch set. For another example, a preset number of unselected sketches can be selected from the sketch set.
  • the first training step is to use the initial model to perform feature extraction on the selected sketch and each image in the target image set to obtain the output matrix corresponding to each image in the sketch and the target image set.
  • the initial model can be various types of untrained or partially trained artificial neural networks, or a model obtained by combining a variety of untrained or partially trained artificial neural networks.
  • the technician can construct the initial model according to actual application requirements (such as the number of convolutional layers, the size of the convolution kernel, etc.).
  • the target image set can be preset by a technician.
  • the target image set may be the above-mentioned image set.
  • the second training step is to determine the matching degree between the output matrix corresponding to the obtained sketch and the output matrix corresponding to each image in the target image set, and to select the image with the corresponding matching degree greater than a preset threshold.
  • the calculation method of the matching degree of the two output matrices can adopt various existing matrix matching algorithms.
  • the two matrices can be flattened into vectors in a preset manner, and then the calculated similarity between the two vectors is used as the matching degree of the two output matrices.
  • the preset threshold can be preset by a technician according to actual application requirements.
  • Training step three: according to the selected images and the matching image set corresponding to the input sketch, determine the recall rate and/or precision rate corresponding to the selected images, and determine whether training of the initial model is completed according to the determined recall rate and/or precision rate.
  • the recall rate can be used to characterize the degree of detection of the desired image.
  • the recall rate can be represented by the ratio of the number of images included in the intersection of the selected image and the matching image set to the total number of images included in the target image set that present the same item as the input sketch.
  • the precision rate can be used to characterize the proportion of desired images among all retrieved images.
  • the precision rate can be represented by the ratio of the number of images included in the intersection of the selected images and the matching image set to the total number of selected images.
  • the value of the preset loss function may be determined, and whether the training of the initial model is completed is determined according to the determined value of the loss function.
  • the calculation method of the loss function can be preset by a technician.
  • the preset loss function may be used to characterize the degree of difference between the determined recall rate and/or precision rate and the preset recall rate and/or precision rate. At this time, it can be determined whether the training of the initial model is completed according to whether the value of the determined loss function is less than the preset loss threshold.
  • the trained initial model can be determined as the convolutional neural network for extracting image features.
  • in response to determining that training of the initial model is not completed, the parameters of the initial model can be adjusted according to the determined recall rate and/or precision rate, the adjusted initial model is used as the initial model, sketches are re-selected from the sketch set, and training steps one to three above are continued.
  • gradient descent and back propagation algorithms can be used to adjust the parameters of each layer of the initial model according to the value of the loss function, so that the recall and/or precision of the adjusted initial model is as high as possible.
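  • as a minimal sketch of training step three, assuming the selected images and a sketch's matching image set are available as Python sets of image identifiers (the stopping thresholds in the comment are placeholder assumptions):

      def recall_and_precision(selected: set, matching: set):
          """Recall and precision of the images selected for one sketch,
          given the sketch's known matching image set."""
          hits = len(selected & matching)   # relevant images actually retrieved
          recall = hits / len(matching) if matching else 0.0
          precision = hits / len(selected) if selected else 0.0
          return recall, precision

      # Training might be considered complete once both scores reach preset
      # targets, e.g. recall >= 0.9 and precision >= 0.8 (assumed thresholds).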
  • the first matrix may be obtained in advance by another electronic device performing feature extraction on the sketch of the target item, or it may be obtained in advance by the execution subject of the method (e.g., the server 105 shown in FIG. 1) performing feature extraction on the sketch. In the latter case, the execution subject may directly obtain the first matrix locally.
  • Step 202 Obtain a second matrix composed of word vectors of keywords in the keyword set corresponding to the target item.
  • the keywords in the keyword set can be used to describe the target item.
  • the keywords in the keyword set can be preset by the user.
  • the word vectors of the keywords in the keyword set can be determined using various existing methods for generating word vectors (such as Word2Vec, FastText, etc.).
  • the manner in which the word vectors of each keyword in the keyword set compose the second matrix can be preset by the technician.
  • the word vectors corresponding to each keyword can be arranged in rows in a preset order to obtain the above-mentioned second matrix.
  • the word vector of each keyword in the keyword set may be generated in advance by another electronic device, and then the second matrix is obtained.
  • the above-mentioned execution subject may obtain the second matrix from other electronic devices. It is understandable that the above-mentioned execution subject may also pre-generate the word vector of each keyword in the keyword set, and then obtain the second matrix. At this time, the above-mentioned execution subject may directly obtain the second matrix locally.
  • the correspondence between keywords and word vectors can be stored, so that the next time a keyword is used, its word vector can be used directly, which helps improve image retrieval speed.
  • when the word vectors are obtained through a neural network (such as Word2Vec), the neural network can be retrained with new keywords and their corresponding word vectors at certain time intervals, so as to keep the neural network updated.
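  • to make step 202 concrete, the following sketch stacks keyword word vectors into a second matrix; the toy 4-dimensional embedding table is a placeholder standing in for a real Word2Vec or FastText model:

      import numpy as np

      # Hypothetical pre-computed word vectors; a real system would query a
      # trained model and cache keyword -> vector as described above.
      embedding = {
          "water cup":      np.array([0.2, 0.7, 0.1, 0.0]),
          "small capacity": np.array([0.5, 0.1, 0.3, 0.1]),
          "without lid":    np.array([0.0, 0.4, 0.4, 0.2]),
          "with handle":    np.array([0.3, 0.3, 0.2, 0.2]),
      }

      def second_matrix(keywords):
          """Arrange the keywords' word vectors as rows, in a preset order."""
          return np.stack([embedding[k] for k in keywords])

      m = second_matrix(["water cup", "small capacity", "without lid", "with handle"])
      print(m.shape)   # (4, 4): one row per keyword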
  • Step 203 Obtain a third matrix set obtained by performing feature extraction on each image in the image set.
  • various existing image feature extraction methods, such as methods based on SIFT (Scale-Invariant Feature Transform) or SURF (Speeded-Up Robust Features), can be used to perform feature extraction on each image in the image set.
  • a feature extraction method based on deep learning can also be used to extract features of each image in the image set.
  • the same convolutional neural network may be used to perform feature extraction on the sketch of the target item and each image in the image set to obtain the first matrix corresponding to the sketch of the target item and the third matrix corresponding to each image in the image set.
  • the images included in the image set are generally massive, and the update frequency of the image set is generally low. Therefore, feature extraction can be performed on the images in the image set in advance to obtain the third matrix corresponding to each image, and the correspondence between each image and its third matrix can be stored. The stored third matrices can then be used directly, without processing each image again, which helps improve image retrieval speed.
  • when the image set is updated, the correspondence between the updated part and the corresponding third matrices may be further stored.
  • the third matrix is obtained by using a convolutional neural network, when the image set is updated, the update part can also be used to further train the convolutional neural network to update the convolutional neural network.
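  • a sketch of this precomputation idea: third matrices are extracted once, offline, and cached by image identifier, so retrieval never re-processes the image set; the extract_features callable is an assumed stand-in for whatever feature extractor is shared with the sketch:

      third_matrix_cache = {}   # image_id -> third matrix, built offline

      def third_matrix(image_id, image, extract_features):
          """Return the cached third matrix, computing it at most once per image."""
          if image_id not in third_matrix_cache:
              third_matrix_cache[image_id] = extract_features(image)
          return third_matrix_cache[image_id]

      def invalidate(image_ids):
          """When the image set is updated, only changed entries are recomputed."""
          for image_id in image_ids:
              third_matrix_cache.pop(image_id, None)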
  • Step 204: For each third matrix in the third matrix set, according to the degree of matching between the first matrix and the third matrix and the degree of matching between the second matrix and the third matrix, determine the degree of matching between the item presented by the image corresponding to the third matrix and the target item.
  • various existing matrix matching algorithms can be used to calculate the matching degree between the first matrix and the third matrix, and the matching degree between the second matrix and the third matrix. Furthermore, based on the obtained two matching degrees, the matching degree between the object presented by the image corresponding to the third matrix and the target object can be comprehensively determined. Specifically, the method for comprehensively determining the matching degree between the object presented by the image corresponding to the third matrix and the target object based on the obtained two matching degrees can be flexibly set.
  • the maximum value of the two may be determined or the average value of the two may be determined as the degree of matching between the object presented in the image corresponding to the third matrix and the target object.
  • the first preset weight of the first matrix can be obtained, and the second preset weight of the second matrix can be obtained. Then, based on the obtained first preset weight and second preset weight, the third matrix is determined according to the weighted sum of the matching degree between the first matrix and the third matrix and the matching degree between the second matrix and the third matrix The degree of match between the items presented by the corresponding image and the target item.
  • the first preset weight and the second preset weight may be preset by a technician, or the user may input the first preset weight and the second preset weight.
  • the value ranges of the first preset weight and the second preset weight may be [0, 1], and the sum of the first preset weight and the second preset weight is equal to 1.
  • the weighted sum of the matching degree between the first matrix and the third matrix and the matching degree between the second matrix and the third matrix may be determined as the matching degree between the item presented in the image corresponding to the third matrix and the target item.
  • the value of the preset function corresponding to the weighted sum may be used as the degree of matching between the item presented in the image corresponding to the third matrix and the target item.
  • the preset function can be preset by a technician.
  • when the user sets the first preset weight to 0 or the second preset weight to 0, retrieval based only on the keyword set of the target item or only on the sketch of the target item can be realized. That is, users can flexibly set different retrieval modes according to actual needs, to control the influence of the sketch of the target item and the keywords in its keyword set on the retrieval results, which helps improve the accuracy of the retrieval results.
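  • a hedged sketch of step 204, using the flatten-then-similarity option mentioned above as the concrete matrix matching algorithm (the disclosure leaves the algorithm open) and assuming the matrices have been brought to comparable dimensions, for example by the encoding described with FIG. 4:

      import numpy as np

      def matrix_match_degree(a, b):
          """Flatten two equally-sized matrices and return cosine similarity."""
          a, b = np.ravel(a), np.ravel(b)
          return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

      def item_match_degree(first, second, third, w1=0.5, w2=0.5):
          """Weighted sum of the sketch-side and keyword-side matching degrees;
          w1 + w2 == 1, and setting w1 or w2 to 0 disables that modality."""
          return (w1 * matrix_match_degree(first, third)
                  + w2 * matrix_match_degree(second, third))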
  • Step 205 based on the determined matching degree, select a preset number of images from the image set, and send the selected images.
  • the preset number can be preset by a technician. After obtaining the respective matching degrees of each image in the image set, the way of selecting images from the image set can be flexibly set.
  • a preset number of images can be selected from the image set in the descending order of the corresponding matching degree.
  • a subset of images may be selected from the image set according to a preset matching degree threshold, and then a preset number of images may be randomly selected from the image subset.
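  • both selection strategies just described, sketched over a dict mapping image identifiers to their determined matching degrees:

      import random

      def top_k(match_degrees, k):
          """Select the k images with the highest matching degree, descending."""
          return sorted(match_degrees, key=match_degrees.get, reverse=True)[:k]

      def random_k_above(match_degrees, threshold, k):
          """Select k images at random from the subset whose matching degree
          exceeds a preset threshold."""
          subset = [img for img, s in match_degrees.items() if s > threshold]
          return random.sample(subset, min(k, len(subset)))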
  • the image selected from the image set can be sent to other electronic devices.
  • it can be sent to a user terminal connected to the above-mentioned execution subject (the terminal devices 101, 102, 103 shown in FIG. 1).
  • the corresponding relationship between the sketch of the target item and the image selected from the image set can also be stored. Therefore, when the sketch of the target item is acquired again, the images in the image collection that match the sketch of the target item can be quickly acquired according to the stored correspondence relationship.
  • FIG. 3 is a schematic diagram 300 of an application scenario of the method for retrieving images according to this embodiment.
  • the above-mentioned execution subject can obtain in advance the sketch 301 input by the user through the terminal device 308 used by the user, and then, taking the geometric center of the sketch 301 as the center point, split the sketch 301 horizontally and vertically into sub-image 3011, sub-image 3012, sub-image 3013, and sub-image 3014.
  • the obtained four sub-images can be respectively input to a pre-trained convolutional neural network to obtain feature vectors corresponding to the four sub-images respectively, and the first matrix 302 is composed of the feature vectors respectively corresponding to the four sub-images.
  • the above-mentioned execution subject may obtain the keyword set 303 input by the user through the terminal device 308 in advance.
  • the keyword set 303 includes four keywords: "water cup”, “small capacity”, “without lid” and “with handle”.
  • the pre-trained Word2Vec model can be used to generate word vectors corresponding to the four keywords, and then the second matrix 304 composed of word vectors corresponding to the four keywords can be obtained.
  • the above-mentioned execution subject may pre-process each image in the image set 305 to obtain the third matrix corresponding to each image, and obtain the third matrix set 306.
  • the processing procedure for the images in the image set 305 is similar to the processing procedure for the sketch 301 described above. Take an image in the image set 305 as an example for description: taking the geometric center of the image as the center point, split the image into four sub-images from the horizontal and vertical directions. After that, the obtained four sub-images can be respectively input to a pre-trained convolutional neural network to obtain feature vectors corresponding to the four sub-images respectively, and the feature vectors corresponding to the four sub-images respectively form a third matrix corresponding to the image.
  • the comprehensive matching degree corresponding to each third matrix in the third matrix set 306 can be determined.
  • the degree of matching between the third matrix and the first matrix 302 can be determined as the first matching degree, and the degree of matching between the third matrix and the second matrix 304 can be determined as the second matching degree.
  • the weighted sum of the first matching degree and the second matching degree is then determined as the comprehensive matching degree corresponding to the third matrix.
  • a preset number of images can then be selected from the image set 305 as target images in descending order of their corresponding comprehensive matching degrees, to obtain the target image set 307, and the target image set 307 can be pushed to the terminal device 308 used by the user for display.
  • the method for retrieving images provided by the above-mentioned embodiments of the present disclosure realizes retrieval based on sketches and keywords of items, thereby avoiding situations in which retrieval cannot be performed, or the accuracy of the retrieval results is low, because the user cannot provide an original image of the item, and thus ensuring the accuracy of the retrieval results.
  • FIG. 4 shows a flow 400 of still another embodiment of a method for retrieving images.
  • the process 400 of the method for retrieving images includes the following steps:
  • Step 401 Obtain a first matrix obtained by feature extraction on the sketch of the target item.
  • Step 402 Obtain a second matrix composed of word vectors of keywords in the keyword set corresponding to the target item.
  • Step 403 Obtain a third matrix set obtained by performing feature extraction on each image in the image set.
  • Step 404: For each third matrix in the third matrix set, use the first matrix, the second matrix, and the third matrix respectively as a target matrix and perform encoding processing on the target matrix, to obtain a first encoding matrix, a second encoding matrix, and a third encoding matrix.
  • the encoding process can be used to map the target matrix to a binary encoding matrix.
  • the binary coding matrix may refer to a matrix whose elements are "0" and "1".
  • the encoding process may include: first transforming the target matrix into a matrix of preset dimensions, and then normalizing each element in the matrix so that the value range of each element included in the matrix is [0, 1]. After that, the encoding value of each element whose value is greater than a preset standard value may be set to "1", and the encoding value of each element whose value is not greater than the preset standard value may be set to "0".
  • the preset dimensions and preset standard values can be set in advance by the technicians.
  • some existing data processing applications can be used to convert the target matrix into a matrix of the preset dimensions; alternatively, a pooling window can be set according to the preset dimensions and the target matrix can be pooled (for example, average pooling) to convert it into a matrix of the preset dimensions.
  • in this way, the dimensions of the correspondingly generated first, second, and third encoding matrices can be controlled; and since the first, second, and third encoding matrices are binary encoding matrices, the difficulty of subsequent matrix matching can be reduced and the speed of matrix matching greatly improved.
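  • a sketch of this encoding variant; the output shape (4 rows by 8 columns) and the standard value 0.5 are arbitrary example settings:

      import numpy as np

      def encode_by_pooling(target, out_rows=4, out_cols=8, standard=0.5):
          """Average-pool the target matrix to a preset shape, min-max normalize
          into [0, 1], then binarize against the preset standard value."""
          row_blocks = np.array_split(target, out_rows, axis=0)
          pooled = np.array([[blk.mean()
                              for blk in np.array_split(rb, out_cols, axis=1)]
                             for rb in row_blocks])
          lo, hi = pooled.min(), pooled.max()
          normalized = (pooled - lo) / (hi - lo + 1e-12)
          return (normalized > standard).astype(np.uint8)   # binary code matrix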
  • alternatively, the encoding process may include the following steps, performed for each row vector S of the target matrix:
  • Step 1: Split the elements included in S into C groups.
  • C can represent the number of columns of the coding matrix.
  • C can be preset by a technician.
  • the number of elements contained in each group obtained by splitting can also be preset by a technician.
  • the quotient of the number of elements included in S and C may be determined first, and then the number of elements included in each of the C groups may be determined according to the determined quotient.
  • the number of elements included in as many groups as possible can be equal to the result of rounding up or rounding down the determined quotient.
  • Step 2: For each of the C groups, determine the statistical feature of the values of the elements included in the group.
  • the statistical characteristics include but are not limited to one of sum, expectation, variance, maximum value, and standard deviation.
  • the specific statistical feature can be selected by the technician according to different application scenarios.
  • Step 3: In response to determining that the obtained statistical feature is greater than the target threshold T, determine that the coding value of the group is 1; in response to determining that the obtained statistical feature is less than T, determine that the coding value of the group is 0.
  • the target threshold T can be set in advance by a technician, for example as the mean of the elements of S: T = (S_1 + S_2 + … + S_D) / D, where D represents the number of elements included in S, and S_i represents the value of the i-th element of S.
  • a row of the coding matrix is formed by the coding values corresponding to each group in the C group to obtain the coding matrix.
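  • the group-wise steps above, sketched with the mean as the statistical feature (sum, variance, maximum value, or standard deviation would slot in the same way):

      import numpy as np

      def encode_rows(target, C, T):
          """For each row vector S of the target matrix: split its elements into
          C groups (group sizes follow the quotient len(S) / C), compare each
          group's mean against the target threshold T, and emit one bit per
          group."""
          code = np.zeros((target.shape[0], C), dtype=np.uint8)
          for r, S in enumerate(target):
              for c, group in enumerate(np.array_split(S, C)):
                  code[r, c] = 1 if group.mean() > T else 0
          return code   # one row of C bits per row vector of the target matrix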
  • alternatively, the encoding process may first perform the following update processing on each row vector of the target matrix to obtain an updated target matrix:
  • the normalization processing may specifically include: first determining the sum of the values of the elements included in the row vector, and then determining the quotient of each element contained in the row vector and the determined sum as the normalization result corresponding to each element.
  • the normalized result corresponding to each element can be directly used as the update value corresponding to each element.
  • alternatively, the update value corresponding to each element contained in the row vector may be determined according to the normalization result of the row vector and a preset adjustment parameter ω, where the update value corresponding to each element contained in the row vector is positively correlated with ω.
  • the product of the normalization result corresponding to the element and ⁇ can be determined as the update value corresponding to the element.
  • For another example, the square root of the product of the element's normalization result and λ can be determined as the update value corresponding to that element.
  • Then, for the row vector S in each row vector of the updated target matrix, the same steps as above are performed: split the elements included in S into C groups, where C represents the number of columns of the coding matrix; for each of the C groups, determine the statistical feature of the values of the elements included in the group; in response to determining that the obtained statistical feature is greater than the target threshold T, determine the coding value of the group to be 1; in response to determining that the obtained statistical feature is less than T, determine the coding value of the group to be 0.
  • Finally, the coding values corresponding to the C groups form one row of the coding matrix, so as to obtain the coding matrix.
  • By first normalizing the row vectors of the first, second, and third matrices to update them, the noise in these matrices can be reduced, which improves the universality and stability of the first matrix, the second matrix, and the third matrix, thereby ensuring the accuracy of the subsequent matrix matching process. A sketch of this update step follows.
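  • A minimal sketch of the update step, assuming non-negative feature values and the square-root update rule named above (λ stands for the preset adjustment parameter); the updated matrix can then be encoded exactly as in the grouped-statistics sketch earlier.

    import numpy as np

    def update_rows(target, lam=1.0):
        # Normalize each row vector to sum to 1, then replace every element
        # by the square root of (normalization result x lambda).
        updated = np.zeros_like(target, dtype=np.float64)
        for i, row in enumerate(target):
            total = row.sum()
            if total != 0:
                updated[i] = np.sqrt((row / total) * lam)
        return updated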
  • Step 405: Determine the degree of matching between the first encoding matrix and the third encoding matrix as the degree of matching between the first matrix and the third matrix, and determine the degree of matching between the second encoding matrix and the third encoding matrix as the degree of matching between the second matrix and the third matrix.
  • Step 406: Based on the determined matching degrees, select a preset number of images from the image set, and send the selected images.
  • For the specific execution process of this step, reference may be made to the related description of step 205 in the embodiment corresponding to FIG. 2, which is not repeated here. A sketch of matching binary encoding matrices and selecting images follows.
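  • One plausible instantiation of steps 405 and 406 is sketched below; the use of the fraction of identical bits (a Hamming-style similarity) as the matching degree of two binary encoding matrices, and the weights w1 and w2 of the weighted sum, are assumptions for the example, since the disclosure leaves the concrete matching algorithm open.

    import numpy as np

    def hamming_match(code_a, code_b):
        # Fraction of identical bits between two binary encoding matrices
        # of equal shape.
        return float((code_a == code_b).mean())

    def retrieve(first_code, second_code, third_codes, images, w1=0.5, w2=0.5, k=10):
        scores = []
        for code in third_codes:
            m1 = hamming_match(first_code, code)   # sketch vs. image
            m2 = hamming_match(second_code, code)  # keywords vs. image
            scores.append(w1 * m1 + w2 * m2)       # weighted sum, cf. step 204
        order = np.argsort(scores)[::-1]           # descending matching degree
        return [images[i] for i in order[:k]]      # preset number k of images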
  • It should be noted that the specific composition of the target matrix (including the first matrix, the second matrix, and the third matrices in the third matrix set) in the present disclosure can be set flexibly.
  • For example, when the target matrix is a vector, it can be a row vector or a column vector.
  • When the target matrix is composed of several vectors, the vectors can be arranged into the target matrix by rows or by columns.
  • For any matrix, the rows of the matrix are the columns of its transpose. Therefore, "row" in the present disclosure can also be replaced with "column", and the corresponding "column" can also be replaced with "row".
  • Compared with the embodiment corresponding to FIG. 2, the process 400 of the method for retrieving images in this embodiment highlights that, in the matrix matching process, the matrices can be encoded to control the dimensions and computation cost of the matrices used for the matching calculation, which reduces the difficulty and the amount of computation of the matrix matching process, increases the matching speed, and thus improves the image retrieval speed.
  • With further reference to FIG. 5, as an implementation of the methods shown in the above figures, the present disclosure provides an embodiment of an apparatus for retrieving images.
  • The apparatus embodiment corresponds to the method embodiment shown in FIG. 2, and the apparatus can be applied to various electronic devices.
  • the apparatus 500 for retrieving images includes an acquiring unit 501, a determining unit 502, and a sending unit 503.
  • The acquiring unit 501 is configured to acquire a first matrix obtained by performing feature extraction on a sketch of the target item; the acquiring unit 501 is further configured to acquire a second matrix composed of word vectors of the keywords in the keyword set corresponding to the target item, where the keywords in the keyword set are used to describe the target item; the acquiring unit 501 is further configured to acquire a third matrix set obtained by performing feature extraction on each image in the image set; the determining unit 502 is configured to, for a third matrix in the third matrix set, determine the degree of matching between the item presented by the image corresponding to the third matrix and the target item according to the degree of matching between the first matrix and the third matrix and the degree of matching between the second matrix and the third matrix; and the sending unit 503 is configured to select a preset number of images from the image set based on the determined matching degrees, and send the selected images.
  • For the specific processing of the acquiring unit 501, the determining unit 502, and the sending unit 503 and the technical effects thereof, reference may be made to the related descriptions of step 201, step 202, step 203, step 204, and step 205 in the embodiment corresponding to FIG. 2, which are not repeated here.
  • In some optional implementations, the determining unit 502 is further configured to: acquire a first preset weight of the first matrix and a second preset weight of the second matrix; and determine, based on the acquired first preset weight and second preset weight, the degree of matching between the item presented by the image corresponding to the third matrix and the target item according to the weighted sum of the degree of matching between the first matrix and the third matrix and the degree of matching between the second matrix and the third matrix.
  • In some optional implementations, the degree of matching between the first matrix and the third matrix, and the degree of matching between the second matrix and the third matrix, are determined through the following steps:
  • The first matrix, the second matrix, and the third matrix are respectively used as the target matrix, and the target matrix is encoded to obtain a first encoding matrix, a second encoding matrix, and a third encoding matrix, where the encoding processing is used to map the target matrix into a binary encoding matrix;
  • The degree of matching between the first encoding matrix and the third encoding matrix is determined as the degree of matching between the first matrix and the third matrix, and the degree of matching between the second encoding matrix and the third encoding matrix is determined as the degree of matching between the second matrix and the third matrix.
  • In some optional implementations, the encoding processing includes: for the row vector S in each row vector of the target matrix, performing the following steps: splitting the elements included in S into C groups, where C represents the number of columns of the encoding matrix; for each of the C groups, determining the statistical feature of the values of the elements included in the group; in response to determining that the obtained statistical feature is greater than the target threshold T, determining the coding value of the group to be 1; in response to determining that the obtained statistical feature is less than T, determining the coding value of the group to be 0; and forming one row of the encoding matrix from the coding values corresponding to the C groups, so as to obtain the encoding matrix.
  • Splitting the elements included in S into C groups includes: determining the quotient of the number of elements included in S and C, and determining, according to the determined quotient, the number of elements included in each of the C groups.
  • In some optional implementations, the encoding processing includes: performing the following update processing on each row vector of the target matrix to obtain an updated target matrix: normalizing the row vector, and determining, according to the normalization result of the row vector, the update value corresponding to each element contained in the row vector, where the update value corresponding to each element is positively correlated with the normalization result corresponding to that element; for the row vector S in each row vector of the updated target matrix, performing the following steps: splitting the elements included in S into C groups, where C represents the number of columns of the encoding matrix; for each of the C groups, determining the statistical feature of the values of the elements included in the group; in response to determining that the obtained statistical feature is greater than the target threshold T, determining the coding value of the group to be 1; in response to determining that the obtained statistical feature is less than T, determining the coding value of the group to be 0; and forming one row of the encoding matrix from the coding values corresponding to the C groups, so as to obtain the encoding matrix.
  • In some optional implementations, determining, according to the normalization result of the row vector, the update value corresponding to each element contained in the row vector includes: determining, according to the normalization result of the row vector and a preset adjustment parameter λ, the update value corresponding to each element contained in the row vector, where the update value corresponding to each element is positively correlated with λ.
  • Determining, according to the normalization result of the row vector and the preset adjustment parameter λ, the update value corresponding to each element contained in the row vector includes: for each element contained in the row vector, determining the square root of the product of the element's normalization result and λ as the update value corresponding to that element.
  • In some optional implementations, the first matrix is obtained through the following steps: splitting the sketch into at least two sub-images; using a pre-trained convolutional neural network to perform feature extraction on the at least two sub-images respectively to obtain feature vectors corresponding to the at least two sub-images; and determining a matrix composed of the feature vectors corresponding to the at least two sub-images as the first matrix. A sketch of this composition follows.
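  • A minimal sketch of this composition, assuming a quadrant split around the geometric center of a grayscale image array and a callable feature_extractor standing in for the pre-trained convolutional neural network (assumed to map an image to a 1-D feature vector):

    import numpy as np

    def build_first_matrix(sketch, feature_extractor):
        # Split the sketch into four sub-images around its geometric center,
        # then stack the feature vector of each sub-image as one matrix row.
        h, w = sketch.shape[:2]
        cy, cx = h // 2, w // 2
        sub_images = [sketch[:cy, :cx], sketch[:cy, cx:],
                      sketch[cy:, :cx], sketch[cy:, cx:]]
        return np.stack([feature_extractor(img) for img in sub_images])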
  • In some optional implementations, the convolutional neural network is trained through the following steps: obtaining a sketch set, and obtaining a matching image set corresponding to each sketch in the sketch set, where a sketch and the matching images in its corresponding matching image set are used to present the same item; selecting a sketch from the sketch set, and performing the following training steps: using the initial model to perform feature extraction on the selected sketch and on each image in the target image set to obtain output matrices corresponding to the sketch and to each image in the target image set respectively; determining the degrees of matching between the output matrix corresponding to the sketch and the output matrices corresponding to the images in the target image set, and selecting the images whose corresponding matching degree is greater than a preset threshold; determining, according to the selected images and the matching image set corresponding to the input sketch, the recall and/or precision corresponding to the selected images, and determining, according to the determined recall and/or precision, whether the training of the initial model is completed; in response to determining that the training of the initial model is completed, determining the trained initial model as the convolutional neural network; and in response to determining that the training of the initial model is not completed, adjusting the parameters of the initial model according to the determined recall and/or precision, determining the adjusted initial model as the initial model, selecting a new sketch from the sketch set, and continuing to perform the above training steps.
  • The apparatus provided by the above embodiment of the present disclosure acquires, through the acquiring unit, a first matrix obtained by performing feature extraction on a sketch of the target item; acquires a second matrix composed of word vectors of the keywords in the keyword set corresponding to the target item, where the keywords in the keyword set are used to describe the target item; and acquires a third matrix set obtained by performing feature extraction on each image in the image set. The determining unit determines, for a third matrix in the third matrix set, the degree of matching between the item presented by the image corresponding to the third matrix and the target item according to the degree of matching between the first matrix and the third matrix and the degree of matching between the second matrix and the third matrix. The sending unit selects a preset number of images from the image set based on the determined matching degrees and sends the selected images. This can avoid situations where retrieval is impossible, or the retrieval results have low accuracy, because the user cannot provide the original image of the item. At the same time, features such as the size and structure of the item provided by the sketch are combined with the semantic features of the item provided by the keywords to ensure the accuracy of the retrieval results.
  • FIG. 6 shows a schematic structural diagram of an electronic device (for example, the server in FIG. 1) 600 suitable for implementing embodiments of the present disclosure.
  • The terminal devices in the embodiments of the present disclosure may include, but are not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), and vehicle-mounted terminals (for example, car navigation terminals), as well as fixed terminals such as digital TVs and desktop computers.
  • the server shown in FIG. 6 is only an example, and should not bring any limitation to the function and scope of use of the embodiments of the present disclosure.
  • The electronic device 600 may include a processing device (such as a central processing unit, a graphics processor, etc.) 601, which can execute various appropriate actions and processes according to a program stored in a read-only memory (ROM) 602 or a program loaded from a storage device 608 into a random access memory (RAM) 603.
  • the RAM 603 also stores various programs and data required for the operation of the electronic device 600.
  • the processing device 601, the ROM 602, and the RAM 603 are connected to each other through a bus 604.
  • An input/output (I/O) interface 605 is also connected to the bus 604.
  • Generally, the following devices can be connected to the I/O interface 605: an input device 606 including, for example, a touch screen, a touch pad, a keyboard, a mouse, a camera, a microphone, an accelerometer, a gyroscope, etc.; an output device 607 including, for example, a liquid crystal display (LCD), a speaker, a vibrator, etc.; a storage device 608 including, for example, a magnetic tape, a hard disk, etc.; and a communication device 609.
  • the communication device 609 may allow the electronic device 600 to perform wireless or wired communication with other devices to exchange data.
  • Although FIG. 6 shows an electronic device 600 having various devices, it should be understood that it is not required to implement or possess all of the illustrated devices; more or fewer devices may alternatively be implemented or provided. Each block shown in FIG. 6 can represent one device, or can represent multiple devices as needed.
  • the process described above with reference to the flowchart can be implemented as a computer software program.
  • the embodiments of the present disclosure include a computer program product, which includes a computer program carried on a computer-readable medium, and the computer program contains program code for executing the method shown in the flowchart.
  • the computer program may be downloaded and installed from the network through the communication device 609, or installed from the storage device 608, or installed from the ROM 602.
  • When the computer program is executed by the processing device 601, the above-mentioned functions defined in the methods of the embodiments of the present disclosure are executed.
  • the computer-readable medium described in the embodiments of the present disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two.
  • The computer-readable storage medium may be, for example, but not limited to, an electric, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples of computer-readable storage media may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
  • the computer-readable storage medium may be any tangible medium that contains or stores a program, and the program may be used by or in combination with an instruction execution system, apparatus, or device.
  • the computer-readable signal medium may include a data signal propagated in a baseband or as a part of a carrier wave, and a computer-readable program code is carried therein. This propagated data signal can take many forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the above.
  • the computer-readable signal medium may also be any computer-readable medium other than the computer-readable storage medium.
  • The computer-readable signal medium may send, propagate, or transmit the program for use by or in combination with the instruction execution system, apparatus, or device.
  • the program code contained on the computer-readable medium can be transmitted by any suitable medium, including but not limited to: wire, optical cable, RF (Radio Frequency), etc., or any suitable combination of the above.
  • the above-mentioned computer-readable medium may be included in the above-mentioned electronic device; or it may exist alone without being assembled into the electronic device.
  • The above-mentioned computer-readable medium carries one or more programs, and when the one or more programs are executed by the electronic device, the electronic device is caused to: acquire a first matrix obtained by performing feature extraction on a sketch of a target item; acquire a second matrix composed of word vectors of keywords in a keyword set corresponding to the target item, where the keywords in the keyword set are used to describe the target item; acquire a third matrix set obtained by performing feature extraction on each image in an image set; for a third matrix in the third matrix set, determine, according to the degree of matching between the first matrix and the third matrix and the degree of matching between the second matrix and the third matrix, the degree of matching between the item presented by the image corresponding to the third matrix and the target item; and select a preset number of images from the image set based on the determined matching degrees, and send the selected images.
  • The computer program code for performing the operations of the embodiments of the present disclosure can be written in one or more programming languages or a combination thereof, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages.
  • the program code can be executed entirely on the user's computer, partly on the user's computer, executed as an independent software package, partly on the user's computer and partly executed on a remote computer, or entirely executed on the remote computer or server.
  • the remote computer can be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or it can be connected to an external computer (for example, using an Internet service provider to Connect via the Internet).
  • each block in the flowchart or block diagram can represent a module, program segment, or part of code, and the module, program segment, or part of code contains one or more for realizing the specified logical function Executable instructions.
  • the functions marked in the block may also occur in a different order from the order marked in the drawings. For example, two blocks shown in succession can actually be executed substantially in parallel, or they can sometimes be executed in the reverse order, depending on the functions involved.
  • It should also be noted that each block in the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
  • the units involved in the embodiments described in the present disclosure can be implemented in software or hardware.
  • the described unit may also be provided in the processor.
  • For example, it can be described as: a processor includes an acquiring unit, a determining unit, and a sending unit.
  • the names of these units do not constitute a limitation on the unit itself under certain circumstances.
  • For example, the sending unit can also be described as "a unit that selects a preset number of images from the image set based on the determined matching degree and sends the selected images".

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Library & Information Science (AREA)
  • Medical Informatics (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Human Computer Interaction (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Image Analysis (AREA)

Abstract

The embodiments of the present disclosure disclose a method for retrieving images. A specific implementation of the method includes: acquiring a first matrix obtained by performing feature extraction on a sketch of a target item; acquiring a second matrix composed of word vectors of keywords in a keyword set corresponding to the target item, where the keywords in the keyword set are used to describe the target item; acquiring a third matrix set obtained by performing feature extraction on each image in an image set; for a third matrix in the third matrix set, determining, according to the degree of matching between the first matrix and the third matrix and the degree of matching between the second matrix and the third matrix, the degree of matching between the item presented by the image corresponding to the third matrix and the target item; and selecting a preset number of images from the image set based on the determined matching degrees, and sending the selected images. This implementation realizes retrieval based on a sketch and keywords of an item.

Description

Method and Apparatus for Retrieving Images
This patent application claims priority to Chinese Patent Application No. 201910665039.8, filed on July 23, 2019 by the applicant Beijing Jingdong Zhenshi Information Technology Co., Ltd. and entitled "Method and Apparatus for Retrieving Images", the entire content of which is incorporated herein by reference.
Technical Field
Embodiments of the present disclosure relate to the field of computer technology, and specifically to a method and apparatus for retrieving images.
Background
At present, image retrieval generally includes text-based image retrieval and content-based image retrieval. Text-based image retrieval typically describes the features of an image by means of a textual description and matches it against the textual descriptions corresponding to the images in an image library to determine the retrieval result. Content-based image retrieval typically matches features of an image such as color, texture, and layout against the corresponding features of the images in an image library to determine the retrieval result.
Since the textual description of an image usually carries a degree of subjectivity, it can affect the accuracy of the retrieval result. Moreover, because only the original image has rich color, texture, and similar features, some existing content-based image retrieval approaches usually require the user to provide the original image of the item to be retrieved. In addition, features such as color and texture extracted from an image are generally objective descriptive information of the image and can hardly express the semantic information of the image.
Summary
Embodiments of the present disclosure propose a method and apparatus for retrieving images.
In a first aspect, an embodiment of the present disclosure provides a method for retrieving images, the method including: acquiring a first matrix obtained by performing feature extraction on a sketch of a target item; acquiring a second matrix composed of word vectors of keywords in a keyword set corresponding to the target item, where the keywords in the keyword set are used to describe the target item; acquiring a third matrix set obtained by performing feature extraction on each image in an image set; for a third matrix in the third matrix set, determining, according to the degree of matching between the first matrix and the third matrix and the degree of matching between the second matrix and the third matrix, the degree of matching between the item presented by the image corresponding to the third matrix and the target item; and selecting a preset number of images from the image set based on the determined matching degrees, and sending the selected images.
In some embodiments, determining the degree of matching between the item presented by the image corresponding to the third matrix and the target item according to the degree of matching between the first matrix and the third matrix and the degree of matching between the second matrix and the third matrix includes: acquiring a first preset weight of the first matrix and a second preset weight of the second matrix; and determining, based on the acquired first preset weight and second preset weight, the degree of matching between the item presented by the image corresponding to the third matrix and the target item according to the weighted sum of the degree of matching between the first matrix and the third matrix and the degree of matching between the second matrix and the third matrix.
In some embodiments, the degree of matching between the first matrix and the third matrix and the degree of matching between the second matrix and the third matrix are determined through the following steps: using the first matrix, the second matrix, and the third matrix respectively as a target matrix, and encoding the target matrix to obtain a first encoding matrix, a second encoding matrix, and a third encoding matrix, where the encoding processing is used to map the target matrix into a binary encoding matrix; and determining the degree of matching between the first encoding matrix and the third encoding matrix as the degree of matching between the first matrix and the third matrix, and determining the degree of matching between the second encoding matrix and the third encoding matrix as the degree of matching between the second matrix and the third matrix.
In some embodiments, the encoding processing includes: for the row vector S in each row vector of the target matrix, performing the following steps: splitting the elements included in S into C groups, where C represents the number of columns of the encoding matrix; for each of the C groups, determining a statistical feature of the values of the elements included in the group; in response to determining that the obtained statistical feature is greater than a target threshold T, determining the coding value of the group to be 1; in response to determining that the obtained statistical feature is less than T, determining the coding value of the group to be 0; and forming one row of the encoding matrix from the coding values corresponding to the C groups, so as to obtain the encoding matrix.
In some embodiments, splitting the elements included in S into C groups includes: determining the quotient of the number of elements included in S and C, and determining, according to the determined quotient, the number of elements included in each of the C groups.
In some embodiments, the target threshold T satisfies T = (S_1 + S_2 + … + S_D)/D, where D represents the number of elements included in S, and S_i represents the value of the i-th element of S.
In some embodiments, the encoding processing includes: performing the following update processing on each row vector of the target matrix to obtain an updated target matrix: normalizing the row vector, and determining, according to the normalization result of the row vector, the update value corresponding to each element contained in the row vector, where the update value corresponding to each element is positively correlated with the normalization result corresponding to that element; for the row vector S in each row vector of the updated target matrix, performing the following steps: splitting the elements included in S into C groups, where C represents the number of columns of the encoding matrix; for each of the C groups, determining a statistical feature of the values of the elements included in the group; in response to determining that the obtained statistical feature is greater than the target threshold T, determining the coding value of the group to be 1; in response to determining that the obtained statistical feature is less than T, determining the coding value of the group to be 0; and forming one row of the encoding matrix from the coding values corresponding to the C groups, so as to obtain the encoding matrix.
In some embodiments, determining, according to the normalization result of the row vector, the update value corresponding to each element contained in the row vector includes: determining, according to the normalization result of the row vector and a preset adjustment parameter λ, the update value corresponding to each element contained in the row vector, where the update value corresponding to each element is positively correlated with λ.
In some embodiments, determining, according to the normalization result of the row vector and the preset adjustment parameter λ, the update value corresponding to each element contained in the row vector includes: for each element contained in the row vector, determining the square root of the product of the element's normalization result and λ as the update value corresponding to that element.
In some embodiments, the first matrix is obtained through the following steps: splitting the sketch into at least two sub-images; using a pre-trained convolutional neural network to perform feature extraction on the at least two sub-images respectively to obtain feature vectors corresponding to the at least two sub-images; and determining a matrix composed of the feature vectors corresponding to the at least two sub-images as the first matrix.
In some embodiments, the convolutional neural network is trained through the following steps: acquiring a sketch set, and acquiring a matching image set corresponding to each sketch in the sketch set, where a sketch and the matching images in its corresponding matching image set are used to present the same item; selecting a sketch from the sketch set, and performing the following training steps: using an initial model to perform feature extraction on the selected sketch and on each image in a target image set respectively to obtain output matrices corresponding to the sketch and to each image in the target image set; determining the degrees of matching between the output matrix corresponding to the obtained sketch and the output matrices corresponding to the images in the target image set, and selecting the images whose corresponding matching degree is greater than a preset threshold; determining, according to the selected images and the matching image set corresponding to the input sketch, the recall and/or precision corresponding to the selected images, and determining, according to the determined recall and/or precision, whether the training of the initial model is completed; in response to determining that the training of the initial model is completed, determining the trained initial model as the convolutional neural network; and in response to determining that the training of the initial model is not completed, adjusting the parameters of the initial model according to the determined recall and/or precision, determining the adjusted initial model as the initial model, re-selecting a sketch from the sketch set, and continuing to perform the above training steps.
In a second aspect, an embodiment of the present disclosure provides an apparatus for retrieving images, the apparatus including: an acquiring unit configured to acquire a first matrix obtained by performing feature extraction on a sketch of a target item; the acquiring unit being further configured to acquire a second matrix composed of word vectors of keywords in a keyword set corresponding to the target item, where the keywords in the keyword set are used to describe the target item; the acquiring unit being further configured to acquire a third matrix set obtained by performing feature extraction on each image in an image set; a determining unit configured to, for a third matrix in the third matrix set, determine, according to the degree of matching between the first matrix and the third matrix and the degree of matching between the second matrix and the third matrix, the degree of matching between the item presented by the image corresponding to the third matrix and the target item; and a sending unit configured to select a preset number of images from the image set based on the determined matching degrees, and send the selected images.
In some embodiments, the determining unit is further configured to: acquire a first preset weight of the first matrix and a second preset weight of the second matrix; and determine, based on the acquired first preset weight and second preset weight, the degree of matching between the item presented by the image corresponding to the third matrix and the target item according to the weighted sum of the degree of matching between the first matrix and the third matrix and the degree of matching between the second matrix and the third matrix.
In some embodiments, the degree of matching between the first matrix and the third matrix and the degree of matching between the second matrix and the third matrix are determined through the following steps: using the first matrix, the second matrix, and the third matrix respectively as a target matrix, and encoding the target matrix to obtain a first encoding matrix, a second encoding matrix, and a third encoding matrix, where the encoding processing is used to map the target matrix into a binary encoding matrix; and determining the degree of matching between the first encoding matrix and the third encoding matrix as the degree of matching between the first matrix and the third matrix, and determining the degree of matching between the second encoding matrix and the third encoding matrix as the degree of matching between the second matrix and the third matrix.
In some embodiments, the encoding processing includes: for the row vector S in each row vector of the target matrix, performing the following steps: splitting the elements included in S into C groups, where C represents the number of columns of the encoding matrix; for each of the C groups, determining a statistical feature of the values of the elements included in the group; in response to determining that the obtained statistical feature is greater than a target threshold T, determining the coding value of the group to be 1; in response to determining that the obtained statistical feature is less than T, determining the coding value of the group to be 0; and forming one row of the encoding matrix from the coding values corresponding to the C groups, so as to obtain the encoding matrix.
In some embodiments, splitting the elements included in S into C groups includes: determining the quotient of the number of elements included in S and C, and determining, according to the determined quotient, the number of elements included in each of the C groups.
In some embodiments, the target threshold T satisfies T = (S_1 + S_2 + … + S_D)/D, where D represents the number of elements included in S, and S_i represents the value of the i-th element of S.
In some embodiments, the encoding processing includes: performing the following update processing on each row vector of the target matrix to obtain an updated target matrix: normalizing the row vector, and determining, according to the normalization result of the row vector, the update value corresponding to each element contained in the row vector, where the update value corresponding to each element is positively correlated with the normalization result corresponding to that element; for the row vector S in each row vector of the updated target matrix, performing the following steps: splitting the elements included in S into C groups, where C represents the number of columns of the encoding matrix; for each of the C groups, determining a statistical feature of the values of the elements included in the group; in response to determining that the obtained statistical feature is greater than the target threshold T, determining the coding value of the group to be 1; in response to determining that the obtained statistical feature is less than T, determining the coding value of the group to be 0; and forming one row of the encoding matrix from the coding values corresponding to the C groups, so as to obtain the encoding matrix.
In some embodiments, determining, according to the normalization result of the row vector, the update value corresponding to each element contained in the row vector includes: determining, according to the normalization result of the row vector and a preset adjustment parameter λ, the update value corresponding to each element contained in the row vector, where the update value corresponding to each element is positively correlated with λ.
In some embodiments, determining, according to the normalization result of the row vector and the preset adjustment parameter λ, the update value corresponding to each element contained in the row vector includes: for each element contained in the row vector, determining the square root of the product of the element's normalization result and λ as the update value corresponding to that element.
In some embodiments, the first matrix is obtained through the following steps: splitting the sketch into at least two sub-images; using a pre-trained convolutional neural network to perform feature extraction on the at least two sub-images respectively to obtain feature vectors corresponding to the at least two sub-images; and determining a matrix composed of the feature vectors corresponding to the at least two sub-images as the first matrix.
In some embodiments, the convolutional neural network is trained through the following steps: acquiring a sketch set, and acquiring a matching image set corresponding to each sketch in the sketch set, where a sketch and the matching images in its corresponding matching image set are used to present the same item; selecting a sketch from the sketch set, and performing the following training steps: using an initial model to perform feature extraction on the selected sketch and on each image in a target image set respectively to obtain output matrices corresponding to the sketch and to each image in the target image set; determining the degrees of matching between the output matrix corresponding to the obtained sketch and the output matrices corresponding to the images in the target image set, and selecting the images whose corresponding matching degree is greater than a preset threshold; determining, according to the selected images and the matching image set corresponding to the input sketch, the recall and/or precision corresponding to the selected images, and determining, according to the determined recall and/or precision, whether the training of the initial model is completed; in response to determining that the training of the initial model is completed, determining the trained initial model as the convolutional neural network; and in response to determining that the training of the initial model is not completed, adjusting the parameters of the initial model according to the determined recall and/or precision, determining the adjusted initial model as the initial model, re-selecting a sketch from the sketch set, and continuing to perform the above training steps.
In a third aspect, an embodiment of the present disclosure provides an electronic device, including: one or more processors; and a storage device for storing one or more programs; where, when the one or more programs are executed by the one or more processors, the one or more processors are caused to implement the method described in any implementation of the first aspect.
In a fourth aspect, an embodiment of the present disclosure provides a computer-readable medium on which a computer program is stored, where the computer program, when executed by a processor, implements the method described in any implementation of the first aspect.
The method and apparatus for retrieving images provided by the embodiments of the present disclosure match a sketch of an item and the corresponding keywords against each image in an image set respectively, and determine the retrieval result according to the matching results. Thus, when the user cannot provide the original image of the item for retrieval, the retrieval can be carried out using a sketch of the item; and since the keywords of the item are used in the retrieval at the same time, the retrieval process incorporates the semantic information of the image, which helps reduce the false-detection rate and the miss-detection rate of images and thereby improves the accuracy of the retrieval results.
Brief Description of the Drawings
Other features, objects, and advantages of the present disclosure will become more apparent upon reading the detailed description of non-limiting embodiments made with reference to the following drawings:
FIG. 1 is an exemplary system architecture diagram to which an embodiment of the present disclosure can be applied;
FIG. 2 is a flowchart of an embodiment of a method for retrieving images according to the present disclosure;
FIG. 3 is a schematic diagram of an application scenario of the method for retrieving images according to an embodiment of the present disclosure;
FIG. 4 is a flowchart of yet another embodiment of the method for retrieving images according to the present disclosure;
FIG. 5 is a schematic structural diagram of an embodiment of an apparatus for retrieving images according to the present disclosure;
FIG. 6 is a schematic structural diagram of an electronic device suitable for implementing embodiments of the present disclosure.
Detailed Description
The present disclosure is further described in detail below with reference to the drawings and embodiments. It can be understood that the specific embodiments described here are only used to explain the relevant invention, rather than to limit the invention. It should also be noted that, for ease of description, only the parts related to the relevant invention are shown in the drawings.
It should be noted that, in the case of no conflict, the embodiments of the present disclosure and the features in the embodiments can be combined with each other. The present disclosure will be described in detail below with reference to the drawings and in conjunction with the embodiments.
FIG. 1 shows an exemplary architecture 100 to which embodiments of the method for retrieving images or the apparatus for retrieving images of the present disclosure can be applied.
As shown in FIG. 1, the system architecture 100 may include terminal devices 101, 102, 103, a network 104, and a server 105. The network 104 serves as a medium for providing communication links between the terminal devices 101, 102, 103 and the server 105. The network 104 may include various connection types, such as wired or wireless communication links, or optical fiber cables.
The terminal devices 101, 102, 103 interact with the server 105 through the network 104 to receive or send messages and the like. Various client applications may be installed on the terminal devices 101, 102, 103, for example browser applications, search applications, image processing applications, and so on.
The terminal devices 101, 102, 103 may be hardware or software. When the terminal devices 101, 102, 103 are hardware, they may be various electronic devices, including but not limited to smart phones, tablet computers, e-book readers, laptop portable computers, desktop computers, and so on. When the terminal devices 101, 102, 103 are software, they may be installed in the electronic devices listed above, and may be implemented as multiple pieces of software or software modules (for example, multiple pieces of software or software modules for providing distributed services) or as a single piece of software or software module, which is not specifically limited here.
The server 105 may be a server providing various services, for example a backend server providing backend support for the client applications installed on the terminal devices 101, 102, 103. The server 105 can receive the sketch and the keyword set of the target item sent by the terminal devices 101, 102, 103, process the sketch and the keyword set of the target item respectively, select from the image set, according to the processing results, images that match the sketch and the keyword set of the target item, and send the selected images to the terminal devices 101, 102, 103.
It should be noted that the sketch and the keyword set of the target item may also be stored directly locally on the server 105; the server 105 can directly extract and process the locally stored sketch and keyword set of the target item, in which case the terminal devices 101, 102, 103 and the network 104 may not exist.
It should be noted that the method for retrieving images provided by the embodiments of the present disclosure is generally executed by the server 105, and accordingly, the apparatus for retrieving images is generally provided in the server 105.
It should also be pointed out that image processing applications may also be installed in the terminal devices 101, 102, 103, and the terminal devices 101, 102, 103 may also process images based on the image processing applications. In this case, the method for retrieving images may also be executed by the terminal devices 101, 102, 103, and accordingly, the apparatus for retrieving images may also be provided in the terminal devices 101, 102, 103. In this case, the exemplary system architecture 100 may not include the server 105 and the network 104.
It should be noted that the server 105 may be hardware or software. When the server is hardware, it may be implemented as a distributed server cluster composed of multiple servers or as a single server. When the server 105 is software, it may be implemented as multiple pieces of software or software modules (for example, multiple pieces of software or software modules for providing distributed services) or as a single piece of software or software module, which is not specifically limited here.
It should be understood that the numbers of terminal devices, networks, and servers in FIG. 1 are merely illustrative. There may be any number of terminal devices, networks, and servers according to implementation needs.
With continued reference to FIG. 2, a flow 200 of an embodiment of the method for retrieving images according to the present disclosure is shown. The method for retrieving images includes the following steps:
Step 201: acquiring a first matrix obtained by performing feature extraction on a sketch of a target item.
In this embodiment, the target item may be the retrieval target of the user, that is, the item that the user expects the retrieved images to present. The sketch of the target item can be used to express the initial design or form concept of the item. For example, the sketch of the target item may present the structure and dimensions of the item, the relative positional relationships of the parts of the item, and so on.
In this embodiment, the sketch of the target item may be drawn by the user, or selected by the user from existing sketch libraries (such as the Sketchy image library).
In this embodiment, performing feature extraction on the sketch may refer to extracting some image information of the sketch. Generally, the sketch can be analyzed and processed to determine whether each pixel of the sketch can express a certain feature of the sketch. Specifically, various existing image feature extraction methods can be used to perform feature extraction on the sketch.
For example, a feature extraction method based on SURF (Speeded-Up Robust Features) can be used to extract the features of the sketch of the target item. For another example, a feature extraction method based on deep learning can be used to extract the features of the sketch of the target item.
Optionally, the feature extraction result of the sketch may be a feature vector. In this case, the extracted feature vector can be regarded as the above first matrix.
In some optional implementations of this embodiment, the sketch may first be split into at least two sub-images. Then a pre-trained convolutional neural network is used to perform feature extraction on the at least two sub-images respectively to obtain the feature vectors corresponding to the at least two sub-images, and the matrix composed of the feature vectors corresponding to the at least two sub-images can be regarded as the above first matrix.
The method of splitting the sketch can be chosen flexibly. For example, with the geometric center of the sketch as the center point, the sketch can be split evenly into four sub-images in the horizontal and vertical directions.
The way in which the feature vectors corresponding to the at least two obtained sub-images are combined into a matrix can be preset by a technician. For example, they can be arranged row by row in a specified order to obtain the first matrix.
By splitting the sketch, the subsequent matching process can match the image regions at corresponding positions in a targeted manner; that is, the matching process has more accurate position information, which helps increase the accuracy of the matching results and, in turn, the accuracy of the retrieval results.
When a convolutional neural network is used to perform feature extraction on the sketch of the target item, the convolutional neural network may be any of various types of pre-trained neural networks for extracting image features (such as deep learning models).
Generally, a convolutional neural network may be composed of several convolutional layers, pooling layers, and fully connected layers. The convolutional layers perform convolution operations on the input image to extract features; the pooling layers compress the output of the convolutional layers to extract the main features; and the fully connected layers integrate the extracted local features of the image so as to map the distributed feature representations learned by the preceding layers to the sample label space.
Optionally, the convolutional neural network for extracting image features can be trained through the following steps:
Step 1: acquiring a sketch set, and acquiring a matching image set corresponding to each sketch in the sketch set.
In this step, a large number of sketches can be generated using various image processing applications to form the sketch set, or the sketch set can be obtained from a third-party data platform. A matching image in the matching image set corresponding to a sketch and the sketch itself can be used to present the same item. For any sketch, the matching images in the corresponding matching image set can be designated by a technician or obtained from a third-party data platform.
Step 2: selecting a sketch from the sketch set, and performing the following training steps 1 to 3:
In this step, the way of selecting a sketch from the sketch set can be set flexibly according to different application scenarios. For example, a preset number of sketches can be randomly selected from the sketch set. For another example, a preset number of sketches that have not yet been selected can be selected from the sketch set.
Training step 1: using the initial model to perform feature extraction on the selected sketch and on each image in the target image set respectively, to obtain output matrices corresponding to the sketch and to each image in the target image set.
The initial model may be any of various types of untrained or not fully trained artificial neural networks, for example a deep learning model. The initial model may also be a model obtained by combining multiple untrained or not fully trained artificial neural networks. Specifically, a technician can build the initial model according to actual application requirements (such as the number of convolutional layers, the size of the convolution kernels, etc.).
The target image set can be preset by a technician. Optionally, the target image set may be the above image set.
Training step 2: determining the degrees of matching between the output matrix corresponding to the obtained sketch and the output matrices corresponding to the images in the target image set, and selecting the images whose corresponding matching degree is greater than a preset threshold.
The matching degree between two output matrices can be calculated using various existing matrix matching algorithms. For example, the two matrices can each be flattened into vectors in a preset manner, and the similarity between the two resulting vectors can be calculated as the matching degree of the two output matrices, as in the sketch below.
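As a non-limiting illustration, one such matching degree can be computed by flattening both output matrices in a fixed (here row-major) order and taking the cosine similarity of the resulting vectors; the choice of cosine similarity is an assumption for the example, not the only similarity permitted by the disclosure.

    import numpy as np

    def matrix_match(a, b):
        # Flatten both output matrices in row-major order and use the cosine
        # similarity of the two vectors as their matching degree.
        va, vb = a.ravel(), b.ravel()
        denom = np.linalg.norm(va) * np.linalg.norm(vb)
        return float(va @ vb / denom) if denom else 0.0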
The preset threshold can be preset by a technician according to actual application requirements.
Training step 3: determining, according to the selected images and the matching image set corresponding to the input sketch, the recall and/or precision corresponding to the selected images, and determining, according to the determined recall and/or precision, whether the training of the initial model is completed.
The recall can be used to characterize the degree to which the desired images are retrieved. Generally, the recall can be expressed as the ratio of the number of images included in the intersection of the selected images and the matching image set to the total number of images in the target image set that present the same item as the input sketch.
The precision can be used to characterize the percentage of the desired images among all the retrieved images. Generally, the precision can be expressed as the ratio of the number of images included in the intersection of the selected images and the matching image set to the total number of selected images, as in the sketch below.
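A minimal sketch of these two quantities, with selected, matching_set, and same_item_total as illustrative placeholders for the selected images, the ground-truth matching image set, and the number of same-item images in the target image set:

    def recall_and_precision(selected, matching_set, same_item_total):
        # hits: desired images that were actually retrieved.
        hits = len(set(selected) & set(matching_set))
        recall = hits / same_item_total if same_item_total else 0.0
        precision = hits / len(selected) if selected else 0.0
        return recall, precision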
Optionally, after the recall and/or precision are determined, the value of a preset loss function can be determined, and whether the training of the initial model is completed can be determined according to the determined value of the loss function. The calculation method of the loss function can be preset by a technician. For example, the preset loss function can be used to characterize the degree of difference between the determined recall and/or precision and a preset recall and/or precision. In this case, whether the training of the initial model is completed can be determined according to whether the determined value of the loss function is less than a preset loss threshold.
If it is determined, according to the determined recall and/or precision, that the training of the initial model is completed, the trained initial model can be determined as the above convolutional neural network for extracting image features.
If it is determined, according to the determined recall and/or precision, that the training of the initial model is not completed, the parameters of the initial model can be adjusted according to the determined recall and/or precision, the adjusted initial model can be determined as the initial model, a sketch can be re-selected from the sketch set, and the above training steps 1 to 3 can be continued.
Specifically, gradient descent and backpropagation algorithms can be used to adjust the parameters of each layer of the initial model according to the value of the loss function, so that the recall and/or precision corresponding to the adjusted initial model are as high as possible.
In this embodiment, the first matrix may be obtained in advance by another electronic device performing feature extraction on the sketch of the target item. In this case, the executing body of the method for retrieving images (such as the server 105 shown in FIG. 1) can acquire the first matrix from the other electronic device. It can be understood that the above executing body may also perform feature extraction on the sketch of the target item in advance to obtain the first matrix, in which case the executing body can acquire the first matrix directly from local storage.
Step 202: acquiring a second matrix composed of word vectors of keywords in the keyword set corresponding to the target item.
In this embodiment, the keywords in the keyword set can be used to describe the target item and can be preset by the user. The word vectors of the keywords in the keyword set can be determined using various existing methods for generating word vectors (such as Word2Vec, FastText, etc.).
The way in which the word vectors of the keywords in the keyword set are combined into the second matrix can be preset by a technician. For example, the word vectors corresponding to the keywords can be arranged row by row in a preset order to obtain the above second matrix.
In this embodiment, the word vectors of the keywords in the keyword set may be generated in advance by another electronic device to obtain the second matrix, in which case the above executing body can acquire the second matrix from the other electronic device. It can be understood that the above executing body may also generate the word vectors of the keywords in the keyword set in advance and then obtain the second matrix, in which case the above executing body can acquire the second matrix directly from local storage.
Optionally, after the word vectors of the keywords are generated in advance, the correspondence between the keywords and the word vectors can be stored, so that the word vectors corresponding to the keywords can be used directly the next time they are needed, which helps improve the image retrieval speed. In this case, if the word vectors are obtained through a neural network (such as Word2Vec), the neural network can be retrained after a certain time interval with new keywords and the corresponding word vectors, so that the neural network is also updated.
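A minimal sketch of building the second matrix, assuming gensim's Word2Vec as the word-vector model; the toy corpus and the keyword set are illustrative placeholders, not data from the disclosure.

    import numpy as np
    from gensim.models import Word2Vec

    corpus = [["cup", "small", "capacity", "handle"],
              ["bottle", "large", "lid"]]              # toy training corpus
    model = Word2Vec(corpus, vector_size=64, min_count=1)

    keywords = ["cup", "handle"]                       # keyword set of the target item
    second_matrix = np.stack([model.wv[k] for k in keywords])  # one word vector per row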
Step 203: acquiring a third matrix set obtained by performing feature extraction on each image in the image set.
In this embodiment, various existing image feature extraction methods can be used to perform feature extraction on each image in the image set. For example, a feature extraction method based on SURF (Speeded-Up Robust Features) can be used to extract the features of each image in the image set. For another example, a feature extraction method based on deep learning can be used to extract the features of each image in the image set.
Optionally, the same convolutional neural network can be used to perform feature extraction on the sketch of the target item and on each image in the image set, so as to obtain the first matrix corresponding to the sketch of the target item and the third matrices corresponding to the images in the image set.
Optionally, since the images included in the image set are generally massive in number and the update frequency of the image set is usually low, after feature extraction is performed on the images in the image set in advance to obtain the third matrix corresponding to each image, the correspondence between each image and the corresponding third matrix can be stored, so that the third matrix corresponding to each image can be used directly later without processing each image again, which helps improve the image retrieval speed. When the image set is updated, the correspondence between the updated part and the corresponding third matrices can further be stored. In this case, if the third matrices are obtained using a convolutional neural network, the updated part can also be used to further train the convolutional neural network when the image set is updated, so that the convolutional neural network is also updated.
Step 204: for a third matrix in the third matrix set, determining, according to the degree of matching between the first matrix and the third matrix and the degree of matching between the second matrix and the third matrix, the degree of matching between the item presented by the image corresponding to the third matrix and the target item.
In this embodiment, various existing matrix matching algorithms can be used to calculate the degree of matching between the first matrix and the third matrix and the degree of matching between the second matrix and the third matrix. The degree of matching between the item presented by the image corresponding to the third matrix and the target item can then be determined comprehensively based on the two obtained matching degrees. The specific method of comprehensively determining this matching degree based on the two obtained matching degrees can be set flexibly.
Optionally, the maximum of the two matching degrees, or their average, can be determined as the degree of matching between the item presented by the image corresponding to the third matrix and the target item.
Optionally, a first preset weight of the first matrix and a second preset weight of the second matrix can be acquired. Then, based on the acquired first preset weight and second preset weight, the degree of matching between the item presented by the image corresponding to the third matrix and the target item is determined according to the weighted sum of the degree of matching between the first matrix and the third matrix and the degree of matching between the second matrix and the third matrix.
The first preset weight and the second preset weight can be preset by a technician or input by the user. Optionally, the value ranges of the first preset weight and the second preset weight can be [0-1], and the sum of the first preset weight and the second preset weight equals 1.
Optionally, the weighted sum of the degree of matching between the first matrix and the third matrix and the degree of matching between the second matrix and the third matrix can be determined directly as the degree of matching between the item presented by the image corresponding to the third matrix and the target item; alternatively, after the weighted sum is determined, the value of a preset function applied to the weighted sum can be used as that matching degree. The preset function can be preset by a technician.
It can be understood that, in some cases, for example when the user sets the first preset weight to 0 or the second preset weight to 0, image retrieval can be performed based only on the sketch of the target item or only on the keywords in the keyword set of the target item. That is, the user can flexibly set different retrieval modes according to actual needs, so as to control the degree to which the sketch of the target item and the keywords in its keyword set influence the retrieval results, which helps improve the accuracy of the retrieval results.
Step 205: selecting a preset number of images from the image set based on the determined matching degrees, and sending the selected images.
In this embodiment, the preset number can be preset by a technician. After the matching degrees corresponding to the images in the image set are obtained, the way of selecting images from the image set can be set flexibly.
For example, the preset number of images can be selected from the image set in descending order of the corresponding matching degrees. For another example, a subset of images can first be selected from the image set according to a preset matching degree threshold, and the preset number of images can then be randomly selected from the subset.
In this embodiment, the images selected from the image set can be sent to other electronic devices, for example to a user terminal connected to the above executing body (such as the terminal devices 101, 102, 103 shown in FIG. 1).
Optionally, the correspondence between the sketch of the target item and the images selected from the image set can also be stored. Thus, when the sketch of the target item is acquired again, the images in the image set that match the sketch of the target item can be quickly acquired according to the stored correspondence.
With continued reference to FIG. 3, FIG. 3 is a schematic diagram 300 of an application scenario of the method for retrieving images according to this embodiment. In the application scenario of FIG. 3, the above executing body can acquire in advance a sketch 301 input by the user through the terminal device 308 used by the user, and then, with the geometric center of the sketch 301 as the center point, split the sketch 301 into a sub-image 3011, a sub-image 3012, a sub-image 3013, and a sub-image 3014 in the horizontal and vertical directions. After that, the four obtained sub-images can be input into a pre-trained convolutional neural network respectively to obtain the feature vectors corresponding to the four sub-images, and the first matrix 302 is composed of the feature vectors corresponding to the four sub-images.
The above executing body can acquire in advance a keyword set 303 input by the user through the terminal device 308. The keyword set 303 includes four keywords: "cup", "small capacity", "without lid", and "with handle". After that, a pre-trained Word2Vec model can be used to generate the word vectors corresponding to the four keywords, and the second matrix 304 composed of the word vectors corresponding to the four keywords is obtained.
The above executing body can process each image in the image set 305 in advance to obtain the third matrix corresponding to each image, yielding a third matrix set 306. The processing of the images in the image set 305 is similar to the processing of the above sketch 301. Taking one image in the image set 305 as an example: with the geometric center of the image as the center point, the image is split into four sub-images in the horizontal and vertical directions. After that, the four obtained sub-images can be input into the pre-trained convolutional neural network respectively to obtain the feature vectors corresponding to the four sub-images, and the third matrix corresponding to the image is composed of the feature vectors corresponding to the four sub-images.
After that, the comprehensive matching degree corresponding to each third matrix in the third matrix set 306 can be determined. Taking one third matrix in the third matrix set 306 as an example: the matching degree between the third matrix and the first matrix 302 can be determined as a first matching degree, and the matching degree between the third matrix and the second matrix 304 can be determined as a second matching degree. Then, according to the preset first weight and second weight, the weighted sum of the first matching degree and the second matching degree is determined as the comprehensive matching degree corresponding to the third matrix.
After that, a preset number of images can be selected from the image set 305 as target images in descending order of the corresponding matching degrees to obtain a target image set 307, and the target image set 307 is pushed to the terminal device 308 used by the user for display.
The method for retrieving images provided by the above embodiment of the present disclosure realizes retrieval based on a sketch and keywords of an item, which can avoid situations where retrieval is impossible, or the retrieval results have low accuracy, because the user cannot provide the original image of the item. At the same time, features such as the size and structure of the item provided by the sketch are combined with the semantic features of the item provided by the keywords to ensure the accuracy of the retrieval results.
With further reference to FIG. 4, a flow 400 of yet another embodiment of the method for retrieving images is shown. The flow 400 of the method for retrieving images includes the following steps:
Step 401: acquiring a first matrix obtained by performing feature extraction on a sketch of a target item.
Step 402: acquiring a second matrix composed of word vectors of keywords in the keyword set corresponding to the target item.
Step 403: acquiring a third matrix set obtained by performing feature extraction on each image in the image set.
For the specific execution processes of the above steps 401, 402, and 403, reference may be made to the related descriptions of steps 201, 202, and 203 in the embodiment corresponding to FIG. 2, which are not repeated here.
Step 404: for a third matrix in the third matrix set, using the first matrix, the second matrix, and the third matrix respectively as the target matrix, and encoding the target matrix to obtain a first encoding matrix, a second encoding matrix, and a third encoding matrix.
In this embodiment, the encoding processing can be used to map the target matrix into a binary encoding matrix. A binary encoding matrix may refer to a matrix whose elements take the values "0" and "1".
Optionally, the encoding processing may include: first converting the target matrix into a matrix of preset dimensions, and then normalizing each element in the matrix so that the value range of each element in the matrix is [0-1]; after that, the coding value of an element whose value is greater than a preset standard value can be set to "1", and the coding value of an element whose value is not greater than the preset standard value can be set to "0". The preset dimensions and the preset standard value can both be preset by a technician.
Some existing data processing applications can be used to convert the target matrix into a matrix of the preset dimensions; alternatively, a pooling window can be set according to the preset dimensions, and a pooling operation (such as average pooling) can be performed on the target matrix to convert it into a matrix of the preset dimensions.
By encoding the first matrix, the second matrix, and the third matrix, the dimensions of the correspondingly generated first, second, and third encoding matrices can be controlled, and the first, second, and third encoding matrices are binary encoding matrices, which can reduce the difficulty of subsequent matrix matching and greatly improve the matrix matching speed.
Optionally, the encoding processing may include:
(1) For the row vector S in each row vector of the target matrix, the following steps can be performed:
Step 1: splitting the elements included in S into C groups, where C can represent the number of columns of the encoding matrix.
In this step, C can be preset by a technician. The number of elements contained in each group obtained by the splitting can also be preset by a technician.
Optionally, the quotient of the number of elements included in S and C can be determined first, and then the number of elements included in each of the C groups can be determined according to the determined quotient.
For example, as many groups as possible can contain a number of elements equal to the determined quotient rounded up or down.
Step 2: for each of the C groups, determining a statistical feature of the values of the elements included in the group.
In this step, the statistical feature includes but is not limited to one of: sum, expectation, variance, maximum value, and standard deviation. The specific statistical feature to be used can be selected by a technician according to different application scenarios.
Step 3: in response to determining that the obtained statistical feature is greater than the target threshold T, determining the coding value of the group to be 1; in response to determining that the obtained statistical feature is less than T, determining the coding value of the group to be 0.
In this step, the target threshold T can be preset by a technician.
Optionally, T = (S_1 + S_2 + … + S_D)/D, where D represents the number of elements that S can include, and S_i can represent the value of the i-th element of S.
(2) The coding values corresponding to the C groups form one row of the encoding matrix, so as to obtain the encoding matrix.
According to different application scenarios, by controlling the number of elements included in each group into which each row vector is split, and using the statistical feature corresponding to each group to carry out the encoding, more of the original information can be retained, which improves the accuracy of the subsequent matrix matching and image retrieval.
Optionally, the encoding processing may include:
First, performing the following update processing on each row vector of the target matrix to obtain an updated target matrix: normalizing the row vector, and determining, according to the normalization result of the row vector, the update value corresponding to each element contained in the row vector. The update value corresponding to each element is positively correlated with the normalization result corresponding to that element.
The normalization processing may specifically include: first determining the sum of the values of the elements included in the row vector, and then determining the quotient of each element and the determined sum as the normalization result corresponding to that element.
Optionally, the normalization result corresponding to each element can be used directly as the update value corresponding to that element.
Optionally, the update value corresponding to each element contained in the row vector can be determined according to the normalization result of the row vector and a preset adjustment parameter λ, where the update value corresponding to each element can be positively correlated with λ.
For example, for each element contained in the row vector, the product of the element's normalization result and λ can be determined as the update value corresponding to that element. For another example, for each element contained in the row vector, the square root of the product of the element's normalization result and λ can be determined as the update value corresponding to that element.
Second, for the row vector S in each row vector of the updated target matrix, performing the following steps: splitting the elements included in S into C groups, where C represents the number of columns of the encoding matrix; for each of the C groups, determining the statistical feature of the values of the elements included in the group; in response to determining that the obtained statistical feature is greater than the target threshold T, determining the coding value of the group to be 1; in response to determining that the obtained statistical feature is less than T, determining the coding value of the group to be 0.
Third, the coding values corresponding to the C groups can form one row of the encoding matrix, so as to obtain the encoding matrix.
For the specific execution processes of the second and third steps above, reference may be made to the related descriptions of steps (1) and (2) above, which are not repeated here.
By first normalizing the row vectors of the first matrix, the second matrix, and the third matrix to update these matrices, the noise in the first, second, and third matrices can be reduced and their universality and stability improved, thereby ensuring the accuracy of the subsequent matrix matching process.
Step 405: determining the degree of matching between the first encoding matrix and the third encoding matrix as the degree of matching between the first matrix and the third matrix, and determining the degree of matching between the second encoding matrix and the third encoding matrix as the degree of matching between the second matrix and the third matrix.
Step 406: selecting a preset number of images from the image set based on the determined matching degrees, and sending the selected images.
For the specific execution process of this step, reference may be made to the related description of step 205 in the embodiment corresponding to FIG. 2, which is not repeated here.
It should be noted that the specific composition of the target matrix (including the first matrix, the second matrix, and the third matrices in the third matrix set) in the present disclosure can be set flexibly. For example, when the target matrix is a vector, it can be a row vector or a column vector. When the target matrix is composed of several vectors, the vectors can be arranged into the target matrix by rows or by columns. For a matrix, the rows of the matrix are the columns of its transpose; therefore, "row" in the present disclosure can also be replaced with "column", and the corresponding "column" can also be replaced with "row".
As can be seen from FIG. 4, compared with the embodiment corresponding to FIG. 2, the flow 400 of the method for retrieving images in this embodiment highlights that, in the matrix matching process, the matrices can be encoded so as to control the dimensions and computation cost of the matrices used for the matching calculation, which can reduce the difficulty and the amount of computation of the matrix matching process and increase the matching speed, thereby improving the image retrieval speed.
With further reference to FIG. 5, as an implementation of the methods shown in the above figures, the present disclosure provides an embodiment of an apparatus for retrieving images. The apparatus embodiment corresponds to the method embodiment shown in FIG. 2, and the apparatus can be specifically applied to various electronic devices.
As shown in FIG. 5, the apparatus 500 for retrieving images provided by this embodiment includes an acquiring unit 501, a determining unit 502, and a sending unit 503. The acquiring unit 501 is configured to acquire a first matrix obtained by performing feature extraction on a sketch of the target item; the acquiring unit 501 is further configured to acquire a second matrix composed of word vectors of the keywords in the keyword set corresponding to the target item, where the keywords in the keyword set are used to describe the target item; the acquiring unit 501 is further configured to acquire a third matrix set obtained by performing feature extraction on each image in the image set; the determining unit 502 is configured to, for a third matrix in the third matrix set, determine, according to the degree of matching between the first matrix and the third matrix and the degree of matching between the second matrix and the third matrix, the degree of matching between the item presented by the image corresponding to the third matrix and the target item; and the sending unit 503 is configured to select a preset number of images from the image set based on the determined matching degrees, and send the selected images.
In this embodiment, for the specific processing of the acquiring unit 501, the determining unit 502, and the sending unit 503 of the apparatus 500 for retrieving images and the technical effects thereof, reference may be made to the related descriptions of step 201, step 202, step 203, step 204, and step 205 in the embodiment corresponding to FIG. 2, which are not repeated here.
In some optional implementations of this embodiment, the determining unit 502 is further configured to: acquire a first preset weight of the first matrix and a second preset weight of the second matrix; and determine, based on the acquired first preset weight and second preset weight, the degree of matching between the item presented by the image corresponding to the third matrix and the target item according to the weighted sum of the degree of matching between the first matrix and the third matrix and the degree of matching between the second matrix and the third matrix.
In some optional implementations of this embodiment, the degree of matching between the first matrix and the third matrix and the degree of matching between the second matrix and the third matrix are determined through the following steps: using the first matrix, the second matrix, and the third matrix respectively as a target matrix, and encoding the target matrix to obtain a first encoding matrix, a second encoding matrix, and a third encoding matrix, where the encoding processing is used to map the target matrix into a binary encoding matrix; and determining the degree of matching between the first encoding matrix and the third encoding matrix as the degree of matching between the first matrix and the third matrix, and determining the degree of matching between the second encoding matrix and the third encoding matrix as the degree of matching between the second matrix and the third matrix.
In some optional implementations of this embodiment, the encoding processing includes: for the row vector S in each row vector of the target matrix, performing the following steps: splitting the elements included in S into C groups, where C represents the number of columns of the encoding matrix; for each of the C groups, determining a statistical feature of the values of the elements included in the group; in response to determining that the obtained statistical feature is greater than the target threshold T, determining the coding value of the group to be 1; in response to determining that the obtained statistical feature is less than T, determining the coding value of the group to be 0; and forming one row of the encoding matrix from the coding values corresponding to the C groups, so as to obtain the encoding matrix.
In some optional implementations of this embodiment, splitting the elements included in S into C groups includes: determining the quotient of the number of elements included in S and C, and determining, according to the determined quotient, the number of elements included in each of the C groups.
In some optional implementations of this embodiment, the target threshold T satisfies T = (S_1 + S_2 + … + S_D)/D, where D represents the number of elements included in S, and S_i represents the value of the i-th element of S.
In some optional implementations of this embodiment, the encoding processing includes: performing the following update processing on each row vector of the target matrix to obtain an updated target matrix: normalizing the row vector, and determining, according to the normalization result of the row vector, the update value corresponding to each element contained in the row vector, where the update value corresponding to each element is positively correlated with the normalization result corresponding to that element; for the row vector S in each row vector of the updated target matrix, performing the following steps: splitting the elements included in S into C groups, where C represents the number of columns of the encoding matrix; for each of the C groups, determining the statistical feature of the values of the elements included in the group; in response to determining that the obtained statistical feature is greater than the target threshold T, determining the coding value of the group to be 1; in response to determining that the obtained statistical feature is less than T, determining the coding value of the group to be 0; and forming one row of the encoding matrix from the coding values corresponding to the C groups, so as to obtain the encoding matrix.
In some optional implementations of this embodiment, determining, according to the normalization result of the row vector, the update value corresponding to each element contained in the row vector includes: determining, according to the normalization result of the row vector and a preset adjustment parameter λ, the update value corresponding to each element contained in the row vector, where the update value corresponding to each element is positively correlated with λ.
In some optional implementations of this embodiment, determining, according to the normalization result of the row vector and the preset adjustment parameter λ, the update value corresponding to each element contained in the row vector includes: for each element contained in the row vector, determining the square root of the product of the element's normalization result and λ as the update value corresponding to that element.
In some optional implementations of this embodiment, the first matrix is obtained through the following steps: splitting the sketch into at least two sub-images; using a pre-trained convolutional neural network to perform feature extraction on the at least two sub-images respectively to obtain feature vectors corresponding to the at least two sub-images; and determining a matrix composed of the feature vectors corresponding to the at least two sub-images as the first matrix.
In some optional implementations of this embodiment, the convolutional neural network is trained through the following steps: acquiring a sketch set, and acquiring a matching image set corresponding to each sketch in the sketch set, where a sketch and the matching images in its corresponding matching image set are used to present the same item; selecting a sketch from the sketch set, and performing the following training steps: using an initial model to perform feature extraction on the selected sketch and on each image in the target image set respectively to obtain output matrices corresponding to the sketch and to each image in the target image set; determining the degrees of matching between the output matrix corresponding to the obtained sketch and the output matrices corresponding to the images in the target image set, and selecting the images whose corresponding matching degree is greater than a preset threshold; determining, according to the selected images and the matching image set corresponding to the input sketch, the recall and/or precision corresponding to the selected images, and determining, according to the determined recall and/or precision, whether the training of the initial model is completed; in response to determining that the training of the initial model is completed, determining the trained initial model as the convolutional neural network; and in response to determining that the training of the initial model is not completed, adjusting the parameters of the initial model according to the determined recall and/or precision, determining the adjusted initial model as the initial model, re-selecting a sketch from the sketch set, and continuing to perform the above training steps.
The apparatus provided by the above embodiment of the present disclosure acquires, through the acquiring unit, a first matrix obtained by performing feature extraction on a sketch of the target item, acquires a second matrix composed of word vectors of the keywords in the keyword set corresponding to the target item, where the keywords in the keyword set are used to describe the target item, and acquires a third matrix set obtained by performing feature extraction on each image in the image set; the determining unit determines, for a third matrix in the third matrix set, the degree of matching between the item presented by the image corresponding to the third matrix and the target item according to the degree of matching between the first matrix and the third matrix and the degree of matching between the second matrix and the third matrix; and the sending unit selects a preset number of images from the image set based on the determined matching degrees and sends the selected images. This can avoid situations where retrieval is impossible, or the retrieval results have low accuracy, because the user cannot provide the original image of the item. At the same time, features such as the size and structure of the item provided by the sketch are combined with the semantic features of the item provided by the keywords to ensure the accuracy of the retrieval results.
Referring now to FIG. 6, a schematic structural diagram of an electronic device (for example, the server in FIG. 1) 600 suitable for implementing embodiments of the present disclosure is shown. The terminal devices in the embodiments of the present disclosure may include, but are not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), and vehicle-mounted terminals (for example, car navigation terminals), as well as fixed terminals such as digital TVs and desktop computers. The server shown in FIG. 6 is only an example and should not impose any limitation on the functions and scope of use of the embodiments of the present disclosure.
As shown in FIG. 6, the electronic device 600 may include a processing device (such as a central processing unit, a graphics processor, etc.) 601, which can execute various appropriate actions and processes according to a program stored in a read-only memory (ROM) 602 or a program loaded from a storage device 608 into a random access memory (RAM) 603. The RAM 603 also stores various programs and data required for the operation of the electronic device 600. The processing device 601, the ROM 602, and the RAM 603 are connected to each other through a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.
Generally, the following devices can be connected to the I/O interface 605: an input device 606 including, for example, a touch screen, a touch pad, a keyboard, a mouse, a camera, a microphone, an accelerometer, a gyroscope, etc.; an output device 607 including, for example, a liquid crystal display (LCD), a speaker, a vibrator, etc.; a storage device 608 including, for example, a magnetic tape, a hard disk, etc.; and a communication device 609. The communication device 609 may allow the electronic device 600 to communicate wirelessly or by wire with other devices to exchange data. Although FIG. 6 shows an electronic device 600 having various devices, it should be understood that it is not required to implement or possess all of the illustrated devices; more or fewer devices may alternatively be implemented or provided. Each block shown in FIG. 6 can represent one device or, as needed, multiple devices.
In particular, according to embodiments of the present disclosure, the process described above with reference to the flowchart can be implemented as a computer software program. For example, embodiments of the present disclosure include a computer program product, which includes a computer program carried on a computer-readable medium, the computer program containing program code for executing the method shown in the flowchart. In such embodiments, the computer program can be downloaded and installed from a network through the communication device 609, installed from the storage device 608, or installed from the ROM 602. When the computer program is executed by the processing device 601, the above-mentioned functions defined in the methods of the embodiments of the present disclosure are executed.
It should be noted that the computer-readable medium described in the embodiments of the present disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two. The computer-readable storage medium may be, for example, but not limited to, an electric, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples of computer-readable storage media may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In the embodiments of the present disclosure, the computer-readable storage medium may be any tangible medium that contains or stores a program, where the program may be used by or in combination with an instruction execution system, apparatus, or device. In the embodiments of the present disclosure, the computer-readable signal medium may include a data signal propagated in a baseband or as part of a carrier wave, in which computer-readable program code is carried. Such a propagated data signal can take many forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. The computer-readable signal medium may also be any computer-readable medium other than the computer-readable storage medium; the computer-readable signal medium may send, propagate, or transmit the program for use by or in combination with the instruction execution system, apparatus, or device. The program code contained on the computer-readable medium can be transmitted by any suitable medium, including but not limited to: a wire, an optical cable, RF (radio frequency), etc., or any suitable combination of the above.
The above computer-readable medium may be included in the above electronic device, or it may exist alone without being assembled into the electronic device. The above computer-readable medium carries one or more programs, and when the one or more programs are executed by the electronic device, the electronic device is caused to: acquire a first matrix obtained by performing feature extraction on a sketch of a target item; acquire a second matrix composed of word vectors of keywords in a keyword set corresponding to the target item, where the keywords in the keyword set are used to describe the target item; acquire a third matrix set obtained by performing feature extraction on each image in an image set; for a third matrix in the third matrix set, determine, according to the degree of matching between the first matrix and the third matrix and the degree of matching between the second matrix and the third matrix, the degree of matching between the item presented by the image corresponding to the third matrix and the target item; and select a preset number of images from the image set based on the determined matching degrees, and send the selected images.
The computer program code for performing the operations of the embodiments of the present disclosure can be written in one or more programming languages or a combination thereof, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code can be executed entirely on the user's computer, partly on the user's computer, as an independent software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. Where a remote computer is involved, the remote computer can be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or can be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the drawings illustrate the possible architectures, functions, and operations of the systems, methods, and computer program products according to various embodiments of the present disclosure. In this regard, each block in a flowchart or block diagram can represent a module, a program segment, or a part of code, and the module, program segment, or part of code contains one or more executable instructions for realizing the specified logical functions. It should also be noted that, in some alternative implementations, the functions marked in the blocks may occur in an order different from that marked in the drawings; for example, two blocks shown in succession can actually be executed substantially in parallel, and they can sometimes be executed in the reverse order, depending on the functions involved. It should also be noted that each block in the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
The units involved in the embodiments described in the present disclosure can be implemented in software or in hardware. The described units can also be provided in a processor; for example, it can be described as: a processor includes an acquiring unit, a determining unit, and a sending unit. The names of these units do not, in certain cases, constitute a limitation on the units themselves; for example, the sending unit can also be described as "a unit that selects a preset number of images from the image set based on the determined matching degree and sends the selected images".
The above description is only a preferred embodiment of the present disclosure and an explanation of the applied technical principles. Those skilled in the art should understand that the scope of invention involved in the embodiments of the present disclosure is not limited to technical solutions formed by the specific combination of the above technical features, and should also cover other technical solutions formed by any combination of the above technical features or their equivalent features without departing from the above inventive concept, for example, technical solutions formed by replacing the above features with technical features having similar functions disclosed in (but not limited to) the embodiments of the present disclosure.

Claims (14)

  1. A method for retrieving images, comprising:
    acquiring a first matrix obtained by performing feature extraction on a sketch of a target item;
    acquiring a second matrix composed of word vectors of keywords in a keyword set corresponding to the target item, wherein the keywords in the keyword set are used to describe the target item;
    acquiring a third matrix set obtained by performing feature extraction on each image in an image set;
    for a third matrix in the third matrix set, determining, according to a degree of matching between the first matrix and the third matrix and a degree of matching between the second matrix and the third matrix, a degree of matching between an item presented by an image corresponding to the third matrix and the target item; and
    selecting a preset number of images from the image set based on the determined matching degrees, and sending the selected images.
  2. The method according to claim 1, wherein the determining, according to the degree of matching between the first matrix and the third matrix and the degree of matching between the second matrix and the third matrix, the degree of matching between the item presented by the image corresponding to the third matrix and the target item comprises:
    acquiring a first preset weight of the first matrix, and acquiring a second preset weight of the second matrix; and
    determining, based on the acquired first preset weight and second preset weight, the degree of matching between the item presented by the image corresponding to the third matrix and the target item according to a weighted sum of the degree of matching between the first matrix and the third matrix and the degree of matching between the second matrix and the third matrix.
  3. The method according to claim 1, wherein the degree of matching between the first matrix and the third matrix and the degree of matching between the second matrix and the third matrix are determined through the following steps:
    using the first matrix, the second matrix, and the third matrix respectively as a target matrix, and encoding the target matrix to obtain a first encoding matrix, a second encoding matrix, and a third encoding matrix, wherein the encoding processing is used to map the target matrix into a binary encoding matrix; and
    determining the degree of matching between the first encoding matrix and the third encoding matrix as the degree of matching between the first matrix and the third matrix, and determining the degree of matching between the second encoding matrix and the third encoding matrix as the degree of matching between the second matrix and the third matrix.
  4. The method according to claim 3, wherein the encoding processing comprises: for the row vector S in each row vector of the target matrix, performing the following steps:
    splitting the elements included in S into C groups, wherein C represents the number of columns of the encoding matrix;
    for each of the C groups, determining a statistical feature of the values of the elements included in the group; in response to determining that the obtained statistical feature is greater than a target threshold T, determining the coding value of the group to be 1; in response to determining that the obtained statistical feature is less than T, determining the coding value of the group to be 0; and
    forming one row of the encoding matrix from the coding values corresponding to the C groups, so as to obtain the encoding matrix.
  5. The method according to claim 4, wherein the splitting the elements included in S into C groups comprises:
    determining the quotient of the number of elements included in S and C, and determining, according to the determined quotient, the number of elements included in each of the C groups.
  6. The method according to claim 4, wherein
    T = (S_1 + S_2 + … + S_D)/D,
    wherein D represents the number of elements included in S, and S_i represents the value of the i-th element of S.
  7. The method according to claim 3, wherein the encoding processing comprises:
    performing the following update processing on each row vector of the target matrix to obtain an updated target matrix: normalizing the row vector, and determining, according to the normalization result of the row vector, the update value corresponding to each element contained in the row vector, wherein the update value corresponding to each element contained in the row vector is positively correlated with the normalization result corresponding to that element;
    for the row vector S in each row vector of the updated target matrix, performing the following steps:
    splitting the elements included in S into C groups, wherein C represents the number of columns of the encoding matrix;
    for each of the C groups, determining a statistical feature of the values of the elements included in the group; in response to determining that the obtained statistical feature is greater than the target threshold T, determining the coding value of the group to be 1; in response to determining that the obtained statistical feature is less than T, determining the coding value of the group to be 0; and
    forming one row of the encoding matrix from the coding values corresponding to the C groups, so as to obtain the encoding matrix.
  8. The method according to claim 7, wherein the determining, according to the normalization result of the row vector, the update value corresponding to each element contained in the row vector comprises:
    determining, according to the normalization result of the row vector and a preset adjustment parameter λ, the update value corresponding to each element contained in the row vector, wherein the update value corresponding to each element contained in the row vector is positively correlated with λ.
  9. The method according to claim 8, wherein the determining, according to the normalization result of the row vector and the preset adjustment parameter λ, the update value corresponding to each element contained in the row vector comprises:
    for each element contained in the row vector, determining the square root of the product of the normalization result corresponding to the element and λ as the update value corresponding to the element.
  10. The method according to claim 1, wherein the first matrix is obtained through the following steps:
    splitting the sketch into at least two sub-images;
    using a pre-trained convolutional neural network to perform feature extraction on the at least two sub-images respectively to obtain feature vectors corresponding to the at least two sub-images; and
    determining a matrix composed of the feature vectors corresponding to the at least two sub-images as the first matrix.
  11. The method according to claim 10, wherein the convolutional neural network is trained through the following steps:
    acquiring a sketch set, and acquiring a matching image set corresponding to each sketch in the sketch set, wherein a sketch and the matching images in the corresponding matching image set are used to present the same item;
    selecting a sketch from the sketch set, and performing the following training steps: using an initial model to perform feature extraction on the selected sketch and on each image in a target image set respectively to obtain output matrices corresponding to the sketch and to each image in the target image set; determining the degrees of matching between the output matrix corresponding to the obtained sketch and the output matrices corresponding to the images in the target image set, and selecting the images whose corresponding matching degree is greater than a preset threshold; determining, according to the selected images and the matching image set corresponding to the input sketch, the recall and/or precision corresponding to the selected images, and determining, according to the determined recall and/or precision, whether the training of the initial model is completed;
    in response to determining that the training of the initial model is completed, determining the trained initial model as the convolutional neural network; and
    in response to determining that the training of the initial model is not completed, adjusting the parameters of the initial model according to the determined recall and/or precision, determining the adjusted initial model as the initial model, re-selecting a sketch from the sketch set, and continuing to perform the above training steps.
  12. An apparatus for retrieving images, comprising:
    an acquiring unit configured to acquire a first matrix obtained by performing feature extraction on a sketch of a target item;
    the acquiring unit being further configured to acquire a second matrix composed of word vectors of keywords in a keyword set corresponding to the target item, wherein the keywords in the keyword set are used to describe the target item;
    the acquiring unit being further configured to acquire a third matrix set obtained by performing feature extraction on each image in an image set;
    a determining unit configured to, for a third matrix in the third matrix set, determine, according to a degree of matching between the first matrix and the third matrix and a degree of matching between the second matrix and the third matrix, a degree of matching between an item presented by an image corresponding to the third matrix and the target item; and
    a sending unit configured to select a preset number of images from the image set based on the determined matching degrees, and send the selected images.
  13. An electronic device, comprising:
    one or more processors; and
    a storage device on which one or more programs are stored;
    wherein, when the one or more programs are executed by the one or more processors, the one or more processors are caused to implement the method according to any one of claims 1-11.
  14. A computer-readable medium on which a computer program is stored, wherein the program, when executed by a processor, implements the method according to any one of claims 1-11.
PCT/CN2020/080263 2019-07-23 2020-03-19 Method and apparatus for retrieving images WO2021012691A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
KR1020227003264A KR20220018633A (ko) 2019-07-23 2020-03-19 Image retrieval method and apparatus
US17/628,391 US20220292132A1 (en) 2019-07-23 2020-03-19 METHOD AND DEVICE FOR RETRIEVING IMAGE (As Amended)
JP2022504246A JP2022541832A (ja) 2019-07-23 2020-03-19 Method and apparatus for retrieving images

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910665039.8 2019-07-23
CN201910665039.8A CN112307243B (zh) 2019-07-23 Method and apparatus for retrieving images

Publications (1)

Publication Number Publication Date
WO2021012691A1 true WO2021012691A1 (zh) 2021-01-28

Family

ID=74192931

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/080263 WO2021012691A1 (zh) 2019-07-23 2020-03-19 用于检索图像的方法和装置

Country Status (5)

Country Link
US (1) US20220292132A1 (zh)
JP (1) JP2022541832A (zh)
KR (1) KR20220018633A (zh)
CN (1) CN112307243B (zh)
WO (1) WO2021012691A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115098721A (zh) * 2022-08-23 2022-09-23 Zhejiang Dahua Technology Co., Ltd. Face feature retrieval method and apparatus, and electronic device

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102542220B1 (ko) * 2022-09-19 2023-06-13 Ajou University Industry-Academic Cooperation Foundation Semantic image segmentation method based on self-knowledge distillation and semantic image segmentation apparatus based on self-knowledge distillation

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102339306A (zh) * 2010-08-31 2012-02-01 Microsoft Corporation Sketch-based image search
US20120072410A1 (en) * 2010-09-16 2012-03-22 Microsoft Corporation Image Search by Interactive Sketching and Tagging
CN105718531A (zh) * 2016-01-14 2016-06-29 Guangzhou Wanlian Information Technology Co., Ltd. Method for establishing an image database and image recognition method
CN108595636A (zh) * 2018-04-25 2018-09-28 Fudan University Image retrieval method for hand-drawn sketches based on deep cross-modal correlation learning
CN109145140A (zh) * 2018-09-08 2019-01-04 Sun Yat-sen University Image retrieval method and system based on hand-drawn contour matching

Family Cites Families (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5983237A (en) * 1996-03-29 1999-11-09 Virage, Inc. Visual dictionary
KR100451649B1 (ko) * 2001-03-26 2004-10-08 LG Electronics Inc. Image retrieval method and apparatus
US8559671B2 (en) * 2008-12-18 2013-10-15 The Regents Of The University Of California Training-free generic object detection in 2-D and 3-D using locally adaptive regression kernels
JP5833499B2 (ja) 2012-05-29 2015-12-16 KDDI Corporation Retrieval device and program for retrieving, with high accuracy, content represented by a set of high-dimensional feature vectors
US9202178B2 (en) * 2014-03-11 2015-12-01 Sas Institute Inc. Computerized cluster analysis framework for decorrelated cluster identification in datasets
CN104778242B (zh) * 2015-04-09 2018-07-13 Fudan University Hand-drawn sketch image retrieval method and system based on dynamic image segmentation
CN106202189A (zh) * 2016-06-27 2016-12-07 Leshi Holding (Beijing) Co., Ltd. Image search method and apparatus
US10013765B2 (en) * 2016-08-19 2018-07-03 Mitsubishi Electric Research Laboratories, Inc. Method and system for image registrations
JP7095953B2 (ja) * 2017-01-19 2022-07-05 Obayashi Corporation Image management system, image management method, and image management program
JP6962747B2 (ja) * 2017-08-30 2021-11-05 Hitachi, Ltd. Data synthesis apparatus and method
CN107895028B (zh) * 2017-11-17 2019-11-29 Tianjin University Sketch retrieval method using deep learning
JP2018055730A (ja) * 2018-01-11 2018-04-05 Olympus Corporation Image retrieval apparatus and image retrieval method
CN108334627B (zh) * 2018-02-12 2022-09-23 Beijing Baidu Netcom Science and Technology Co., Ltd. Method, apparatus, and computer device for searching new media content
CN109033308A (zh) * 2018-07-16 2018-12-18 Anhui Jianghuai Automobile Group Co., Ltd. Image retrieval method and apparatus
US11093560B2 (en) * 2018-09-21 2021-08-17 Microsoft Technology Licensing, Llc Stacked cross-modal matching
CN109408655A (zh) * 2018-10-19 2019-03-01 China University of Petroleum (East China) Hand-drawn sketch retrieval method combining dilated convolution and multi-scale perception networks

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102339306A (zh) * 2010-08-31 2012-02-01 Microsoft Corporation Sketch-based image search
US20120072410A1 (en) * 2010-09-16 2012-03-22 Microsoft Corporation Image Search by Interactive Sketching and Tagging
CN105718531A (zh) * 2016-01-14 2016-06-29 Guangzhou Wanlian Information Technology Co., Ltd. Method for establishing an image database and image recognition method
CN108595636A (zh) * 2018-04-25 2018-09-28 Fudan University Image retrieval method for hand-drawn sketches based on deep cross-modal correlation learning
CN109145140A (zh) * 2018-09-08 2019-01-04 Sun Yat-sen University Image retrieval method and system based on hand-drawn contour matching

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115098721A (zh) * 2022-08-23 2022-09-23 Zhejiang Dahua Technology Co., Ltd. Face feature retrieval method and apparatus, and electronic device
CN115098721B (zh) * 2022-08-23 2022-11-01 Zhejiang Dahua Technology Co., Ltd. Face feature retrieval method and apparatus, and electronic device

Also Published As

Publication number Publication date
JP2022541832A (ja) 2022-09-27
CN112307243A (zh) 2021-02-02
KR20220018633A (ko) 2022-02-15
CN112307243B (zh) 2023-11-03
US20220292132A1 (en) 2022-09-15

Similar Documents

Publication Publication Date Title
JP7009433B2 (ja) Method and apparatus for generating a neural network
US11030522B2 (en) Reducing the size of a neural network through reduction of the weight matrices
WO2020182122A1 (zh) Method and apparatus for generating a text matching model
WO2022022152A1 (zh) Video clip positioning method and apparatus, computer device, and storage medium
JP2022058915A (ja) Method and apparatus for training an image recognition model, method and apparatus for recognizing an image, electronic device, storage medium, and computer program
EP4002161A1 (en) Image retrieval method and apparatus, storage medium, and device
CN111666416B (zh) Method and apparatus for generating a semantic matching model
CN112149699B (zh) Method and apparatus for generating a model, and method and apparatus for recognizing an image
WO2022253061A1 (zh) Speech processing method and related device
WO2023005386A1 (zh) Model training method and apparatus
CN111831855B (zh) Method, apparatus, electronic device, and medium for matching videos
WO2022247562A1 (zh) Multimodal data retrieval method and apparatus, medium, and electronic device
CN110263218B (zh) Video description text generation method, apparatus, device, and medium
CN114329029B (zh) Object retrieval method, apparatus, device, and computer storage medium
CN113033580B (zh) Image processing method and apparatus, storage medium, and electronic device
WO2021012691A1 (zh) Method and apparatus for retrieving images
US11763204B2 (en) Method and apparatus for training item coding model
WO2023143016A1 (zh) Method for generating a feature extraction model, and image feature extraction method and apparatus
CN113591490B (zh) Information processing method and apparatus, and electronic device
CN114420135A (zh) Voiceprint recognition method and apparatus based on attention mechanism
CN109670111B (zh) Method and apparatus for pushing information
CN111563159B (zh) Text ranking method and apparatus
CN113377986B (zh) Image retrieval method and apparatus
CN111311616B (zh) Method and apparatus for segmenting an image
US20230111978A1 (en) Cross-example softmax and/or cross-example negative mining

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20844725

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2022504246

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 20227003264

Country of ref document: KR

Kind code of ref document: A

122 Ep: pct application non-entry in european phase

Ref document number: 20844725

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 11.08.2022)

122 Ep: pct application non-entry in european phase

Ref document number: 20844725

Country of ref document: EP

Kind code of ref document: A1