CN110110772A - Method, apparatus and computer-readable storage medium for determining image tag accuracy - Google Patents
- Publication number
- CN110110772A (application number CN201910340159.0A)
- Authority
- CN
- China
- Prior art keywords
- feature vector
- vector
- label
- accuracy
- input picture
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/50—Information retrieval; Database structures therefor; File system structures therefor of still image data
- G06F16/55—Clustering; Classification
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/217—Validation; Performance evaluation; Active pattern learning techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
Abstract
The present disclosure relates to a method, apparatus and computer-readable storage medium for determining image tag accuracy, and belongs to the field of image processing. It can obtain the accuracy of each tag corresponding to an input image, effectively improve the relevance and accuracy of search ranking, remove the influence of erroneous tags on search results, and lower the ranking position of low-accuracy tags. The method comprises: extracting an image feature vector of an input image; computing a word embedding vector for each tag corresponding to the input image; concatenating the image feature vector with each word embedding vector to obtain joint feature vectors; and computing the accuracy of each tag based on the joint feature vectors.
Description
Technical field
The present disclosure relates to the field of image processing, and in particular to a method, apparatus and computer-readable storage medium for determining image tag accuracy.
Background technique
A typical image search function retrieves images by keyword, that is, by image tag. In the related art, each image in an image database has several corresponding tags, which are generally produced either from the tags a user attaches when uploading the image or automatically by image recognition technology applied within the database. Take the image shown in Fig. 1 as an example: the tags attached by the user on upload, or generated automatically by image recognition, may include sky, sea, steamer, harbour and seabird. The first four tags are correct, while the last tag, "seabird", is erroneous. Because some tags are wrong yet every tag carries the same weight, the accuracy of subsequent image search ranking is affected.
Summary of the invention
To overcome the problems in the related art, the present disclosure provides a method, apparatus and computer-readable storage medium for determining image tag accuracy.
According to a first aspect of the embodiments of the present disclosure, a method of determining image tag accuracy is provided, comprising:
extracting an image feature vector of an input image;
computing a word embedding vector for each tag corresponding to the input image;
concatenating the image feature vector with each word embedding vector to obtain joint feature vectors; and
computing the accuracy of each tag based on the joint feature vectors.
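For illustration, the four steps of the first aspect can be sketched end to end in a few lines. This is a toy, pure-Python sketch: the stand-in feature extractor, embedding table and scoring function are hypothetical placeholders for the CNN, word2vec model and multilayer perceptron described later, not the claimed implementation.

```python
import math

def extract_image_features(image):
    """Stand-in for a CNN feature extractor (step 1): one mean per row."""
    return [sum(row) / len(row) for row in image]

# Stand-in for a word2vec model (step 2): a fixed, made-up embedding table.
TAG_EMBEDDINGS = {
    "sky": [0.9, 0.1],
    "sea": [0.8, 0.2],
    "seabird": [0.1, 0.9],
}

def score(joint_vector):
    """Stand-in for the MLP (step 4): mean squashed through a sigmoid."""
    s = sum(joint_vector) / len(joint_vector)
    return 1.0 / (1.0 + math.exp(-s))

def tag_accuracies(image, tags):
    img_vec = extract_image_features(image)
    return {tag: score(img_vec + TAG_EMBEDDINGS[tag])  # step 3: concatenate
            for tag in tags}

image = [[0.2, 0.4], [0.6, 0.8]]  # tiny fake 2x2 grayscale "image"
acc = tag_accuracies(image, ["sky", "seabird"])
```

Each tag thus receives its own score in [0, 1], which is what later allows tags of one image to carry different weights in search ranking.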
Optionally, extracting the image feature vector of the input image comprises: extracting the image feature vector of the input image using a convolutional neural network.
Optionally, computing the word embedding vector of each tag corresponding to the input image comprises: computing the word embedding vector of each tag corresponding to the input image through a word2vec model.
Optionally, computing the accuracy of each tag based on the joint feature vectors comprises: computing the accuracy of each tag using a multilayer perceptron, based on the joint feature vectors.
According to a second aspect of the embodiments of the present disclosure, an apparatus for determining image tag accuracy is provided, comprising:
an image feature vector extraction module for extracting an image feature vector of an input image;
a word embedding vector calculation module for computing a word embedding vector for each tag corresponding to the input image;
a concatenation module for concatenating the image feature vector with each word embedding vector to obtain joint feature vectors; and
an accuracy calculation module for computing the accuracy of each tag based on the joint feature vectors.
Optionally, the image feature vector extraction module comprises an image feature vector extraction submodule for extracting the image feature vector of the input image using a convolutional neural network.
Optionally, the word embedding vector calculation module comprises a word embedding vector calculation submodule for computing the word embedding vector of each tag corresponding to the input image through a word2vec model.
Optionally, the accuracy calculation module comprises an accuracy calculation submodule for computing the accuracy of each tag using a multilayer perceptron, based on the joint feature vectors.
According to a third aspect of the embodiments of the present disclosure, an apparatus for determining image tag accuracy is provided, comprising:
a processor; and
a memory for storing processor-executable instructions;
wherein the processor is configured to:
extract an image feature vector of an input image;
compute a word embedding vector for each tag corresponding to the input image;
concatenate the image feature vector with each word embedding vector to obtain joint feature vectors; and
compute the accuracy of each tag based on the joint feature vectors.
According to a fourth aspect of the embodiments of the present disclosure, a computer-readable storage medium is provided, on which computer program instructions are stored, the program instructions, when executed by a processor, implementing the steps of the method of determining image tag accuracy provided by the first aspect of the present disclosure.
With the above technical solution, the image feature vector of the input image can be concatenated with the word embedding vector of each tag of the input image to obtain joint feature vectors, and the accuracy of each tag is then computed based on those joint feature vectors. In other words, the textual information of the image (its tags) and its visual information (the image feature vector) are analysed jointly to obtain the accuracy of each tag corresponding to the input image, so that the tags of an input image carry different weights. When ranking image search results, this effectively improves the relevance and accuracy of the ranking, removes the influence of erroneous tags on search results, and lowers the ranking position of low-accuracy tags.
It should be understood that the above general description and the following detailed description are merely exemplary and explanatory, and do not limit the present disclosure.
Detailed description of the invention
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure.
Fig. 1 shows a kind of illustrative image.
Fig. 2 is a kind of flow chart of the method for determining image tag accuracy shown according to an exemplary embodiment.
Fig. 3 is a kind of block diagram of the device of determining image tag accuracy shown according to an exemplary embodiment.
Fig. 4 is a kind of block diagram of the device of determining image tag accuracy shown according to an exemplary embodiment.
Specific embodiment
Exemplary embodiments will now be described in detail, examples of which are illustrated in the accompanying drawings. Where the following description refers to the drawings, the same numerals in different drawings denote the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present disclosure; rather, they are merely examples of apparatuses and methods consistent with some aspects of the disclosure as detailed in the appended claims.
Fig. 2 is a flowchart of a method of determining image tag accuracy according to an exemplary embodiment. As shown in Fig. 2, the method is used in a terminal and includes the following steps.
In step S21, the image feature vector of the input image is extracted.
Every image possesses characteristics that distinguish it from images of other classes. Some are physical features that can be perceived directly, such as brightness, edges, texture and colour; others can only be obtained through transformation or processing, such as moments, histograms and principal components. When recognising an image, multiple characteristics of the image object are usually combined into an image feature vector that represents the object: a single numerical feature yields a one-dimensional vector, while a combination of n features yields an n-dimensional image feature vector. Taking the image shown in Fig. 1 as an example, the extracted image feature vector may be, for instance, {A, B, C, D}. Those skilled in the art will appreciate that this image feature vector is merely an example.
In step S22, the word embedding vector of each tag corresponding to the input image is computed.
Again take the image shown in Fig. 1 as an example. The tags corresponding to the image include sky, sea, steamer, harbour and seabird. In this step, the word embedding vector of each tag can be computed; that is, the textual form of each tag is converted into a vector form. For example, after computation, the word embedding vector of the tag "sky" is {E, F}, that of "sea" is {G, H, I}, that of "steamer" is {J, K}, that of "harbour" is {L, M, N}, and that of "seabird" is {O, P}. Those skilled in the art will appreciate that these word embedding vectors are merely examples.
In step S23, the image feature vector is concatenated with each word embedding vector to obtain joint feature vectors.
Concatenation here means joining the image feature vector and a word embedding vector end to end. Again take the image shown in Fig. 1 as an example. In this step, the image feature vector is concatenated with the word embedding vectors of the tags "sky", "sea", "steamer", "harbour" and "seabird" respectively, yielding the following five joint feature vectors: {A, B, C, D, E, F}, {A, B, C, D, G, H, I}, {A, B, C, D, J, K}, {A, B, C, D, L, M, N}, {A, B, C, D, O, P}.
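Using the placeholder component values from the Fig. 1 example, the concatenation of step S23 is a plain end-to-end join. The sketch below uses symbolic entries purely for illustration; real vectors would be numeric.

```python
image_features = ["A", "B", "C", "D"]
tag_embeddings = {
    "sky": ["E", "F"],
    "sea": ["G", "H", "I"],
    "steamer": ["J", "K"],
    "harbour": ["L", "M", "N"],
    "seabird": ["O", "P"],
}

# One joint feature vector per tag: the image features followed by the
# tag's word embedding, exactly as in the five vectors listed above.
joint_vectors = {tag: image_features + emb
                 for tag, emb in tag_embeddings.items()}
```

Note that the joint vectors need not all have the same length here; in practice the embedding dimension is fixed, so they do.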
In step S24, the accuracy of each tag is computed based on the joint feature vectors.
With the above technical solution, the image feature vector of the input image can be concatenated with the word embedding vector of each tag of the input image to obtain joint feature vectors, and the accuracy of each tag is then computed based on those joint feature vectors. In other words, the textual information of the image (its tags) and its visual information (the image feature vector) are analysed jointly to obtain the accuracy of each tag corresponding to the input image, so that the tags of an input image carry different weights. When ranking image search results, this effectively improves the relevance and accuracy of the ranking, removes the influence of erroneous tags on search results, and lowers the ranking position of low-accuracy tags.
In a possible embodiment, extracting the image feature vector of the input image in step S21 may comprise: extracting the image feature vector of the input image using a convolutional neural network (CNN). The CNN may first be trained on the ImageNet dataset, and the trained network is then used as an image feature extractor to extract the image feature vector of the input image. Besides a CNN, the image feature vector may also be extracted using Gabor filters, HOG features and the like.
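As a rough, dependency-free illustration of what a convolutional feature extractor computes: one convolution per hand-written kernel followed by global average pooling, yielding one feature-vector component per kernel. A real embodiment would instead use a deep CNN pretrained on ImageNet; the kernels and image below are made-up values.

```python
def conv2d(image, kernel):
    """Valid 2-D convolution of a grayscale image (list of rows)."""
    kh, kw = len(kernel), len(kernel[0])
    h, w = len(image), len(image[0])
    out = []
    for i in range(h - kh + 1):
        row = []
        for j in range(w - kw + 1):
            row.append(sum(image[i + di][j + dj] * kernel[di][dj]
                           for di in range(kh) for dj in range(kw)))
        out.append(row)
    return out

def global_avg_pool(fmap):
    vals = [v for row in fmap for v in row]
    return sum(vals) / len(vals)

def extract_features(image, kernels):
    """One scalar per kernel -> an n-dimensional image feature vector."""
    return [global_avg_pool(conv2d(image, k)) for k in kernels]

kernels = [
    [[1, 0, -1], [1, 0, -1], [1, 0, -1]],  # vertical-edge detector
    [[1, 1, 1], [0, 0, 0], [-1, -1, -1]],  # horizontal-edge detector
]
image = [[x * y % 7 for x in range(6)] for y in range(6)]  # fake 6x6 image
feature_vector = extract_features(image, kernels)          # 2-dimensional
```

With n kernels this produces the n-dimensional image feature vector discussed in step S21.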
In a possible embodiment, computing the word embedding vector of each tag corresponding to the input image in step S22 may comprise: computing the word embedding vector of each tag corresponding to the input image through a word2vec model, where the word2vec model is a pre-trained model that converts a word into a vector representation. Those skilled in the art will appreciate that word2vec is merely an example here; in fact, any tool capable of converting a word into a vector can be used to compute the word embedding vector of each tag.
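A minimal sketch of the lookup such a model provides. The hand-built three-dimensional table below stands in for a pretrained word2vec model (real embeddings would be learned and loaded, e.g. with a library such as gensim); the vectors are invented for illustration, arranged so that semantically related tags get nearby vectors, which is the property word2vec provides.

```python
# Hypothetical "pretrained" embeddings: related tags are close in space.
EMBEDDINGS = {
    "sky": [0.9, 0.1, 0.0],
    "sea": [0.1, 0.9, 0.1],
    "seabird": [0.2, 0.6, 0.8],
}

def embed(tag):
    """Convert a tag's textual form into its vector form."""
    return EMBEDDINGS[tag]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return dot / (na * nb)

# With these made-up vectors, "sea" is closer to "seabird" than "sky" is.
sim_sea = cosine(embed("sea"), embed("seabird"))
sim_sky = cosine(embed("sky"), embed("seabird"))
```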
In a possible embodiment, computing the accuracy of each tag based on the joint feature vectors in step S24 may comprise: computing the accuracy of each tag using a multilayer perceptron (MLP) network, based on the joint feature vectors. Those skilled in the art will appreciate that, when computing accuracy, the present disclosure is not limited to an MLP network: any tool that can compute the correlation between an image and its tags, that is, the accuracy of the tags, can be used, for example an SVM or logistic regression.
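A minimal forward pass of such an MLP: one hidden ReLU layer and a sigmoid output unit, so the score falls in [0, 1] like the accuracies in the example below. The weights here are arbitrary placeholders, not trained values.

```python
import math

def mlp_accuracy(joint_vector, w1, b1, w2, b2):
    """Score one joint feature vector with a one-hidden-layer MLP."""
    hidden = [max(0.0, sum(w * x for w, x in zip(row, joint_vector)) + b)
              for row, b in zip(w1, b1)]                 # ReLU hidden layer
    logit = sum(w * h for w, h in zip(w2, hidden)) + b2  # output unit
    return 1.0 / (1.0 + math.exp(-logit))                # sigmoid -> [0, 1]

# Placeholder weights for a 4-dim joint vector and 3 hidden units.
w1 = [[0.2, -0.1, 0.4, 0.3],
      [-0.3, 0.5, 0.1, 0.2],
      [0.1, 0.1, -0.2, 0.4]]
b1 = [0.0, 0.1, -0.1]
w2 = [0.6, -0.4, 0.5]
b2 = 0.05

accuracy = mlp_accuracy([0.5, 0.2, 0.8, 0.1], w1, b1, w2, b2)
```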
Again take the image shown in Fig. 1 as an example. Each joint feature vector obtained in step S23 is input into the MLP network, and after processing by the MLP network the accuracy of each tag is obtained. For example, the accuracy with which the tag "sky" labels the image shown in Fig. 1 is 0.9, that of "sea" is 0.92, that of "steamer" is 0.94, that of "harbour" is 0.48, and that of "seabird" is 0.03.
In addition, the MLP network is trained on a large amount of manually labelled data; that is, for each tag many pictures are collected manually, and the network is then trained on them. The trained MLP network can be used to compute the accuracy of each tag, that is, the correlation between the input image and each of its corresponding tags.
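As a rough sketch of that supervised training: stochastic gradient descent on a single sigmoid unit with binary cross-entropy loss — a deliberately simplified stand-in for full MLP backpropagation. The joint vectors and their 0/1 labels below are invented toy data, with label 1 marking a correct tag and label 0 an erroneous one.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(samples, labels, lr=0.5, epochs=200):
    """Fit weights so that correct-tag joint vectors score near 1."""
    dim = len(samples[0])
    w, b = [0.0] * dim, 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            g = p - y  # gradient of binary cross-entropy wrt the logit
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

# Toy manually labelled data: a high second component marks a bad tag.
samples = [[1.0, 0.1], [0.9, 0.2], [0.2, 0.9], [0.1, 1.0]]
labels = [1, 1, 0, 0]
w, b = train(samples, labels)
score_good = sigmoid(sum(wi * xi for wi, xi in zip(w, [1.0, 0.1])) + b)
score_bad = sigmoid(sum(wi * xi for wi, xi in zip(w, [0.1, 1.0])) + b)
```

After training, correct-tag vectors score close to 1 and erroneous-tag vectors close to 0, mirroring the 0.9 vs. 0.03 accuracies in the Fig. 1 example.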
According to another embodiment of the present disclosure, an apparatus for determining image tag accuracy is provided. As shown in Fig. 3, the apparatus comprises: an image feature vector extraction module 31 for extracting an image feature vector of an input image; a word embedding vector calculation module 32 for computing a word embedding vector for each tag corresponding to the input image; a concatenation module 33 for concatenating the image feature vector with each word embedding vector to obtain joint feature vectors; and an accuracy calculation module 34 for computing the accuracy of each tag based on the joint feature vectors.
With the above technical solution, the concatenation module 33 can concatenate the image feature vector of the input image with the word embedding vector of each tag of the input image to obtain joint feature vectors, and the accuracy calculation module 34 can then compute the accuracy of each tag based on those joint feature vectors. The textual information of the image (its tags) and its visual information (the image feature vector) are thus analysed jointly to obtain the accuracy of each tag corresponding to the input image, so that the tags of an input image carry different weights. When ranking image search results, this effectively improves the relevance and accuracy of the ranking, removes the influence of erroneous tags on search results, and lowers the ranking position of low-accuracy tags.
In a possible embodiment, the image feature vector extraction module 31 comprises an image feature vector extraction submodule for extracting the image feature vector of the input image using a convolutional neural network.
In a possible embodiment, the word embedding vector calculation module 32 comprises a word embedding vector calculation submodule for computing the word embedding vector of each tag corresponding to the input image through a word2vec model.
In a possible embodiment, the accuracy calculation module 34 comprises an accuracy calculation submodule for computing the accuracy of each tag using a multilayer perceptron, based on the joint feature vectors.
With regard to the apparatus in the above embodiment, the specific manner in which each module performs its operations has been described in detail in the embodiments of the related method, and will not be elaborated here.
Fig. 4 is a block diagram of an apparatus 400 for determining image tag accuracy according to an exemplary embodiment. For example, the apparatus 400 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, fitness equipment, a personal digital assistant, or the like.
Referring to Fig. 4, the apparatus 400 may include one or more of the following components: a processing component 402, a memory 404, a power component 406, a multimedia component 408, an audio component 410, an input/output (I/O) interface 412, a sensor component 414, and a communication component 416.
The processing component 402 generally controls the overall operation of the apparatus 400, such as operations associated with display, telephone calls, data communication, camera operation and recording. The processing component 402 may include one or more processors 420 to execute instructions so as to complete all or part of the steps of the above method of determining image tag accuracy. In addition, the processing component 402 may include one or more modules to facilitate interaction between the processing component 402 and other components; for example, it may include a multimedia module to facilitate interaction between the multimedia component 408 and the processing component 402.
The memory 404 is configured to store various types of data to support operation of the apparatus 400. Examples of such data include instructions for any application or method operated on the apparatus 400, contact data, phonebook data, messages, pictures, video, and so on. The memory 404 may be implemented by any type of volatile or non-volatile storage device, or a combination thereof, such as static random-access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, a magnetic disk or an optical disc.
The power component 406 supplies power to the various components of the apparatus 400. It may include a power management system, one or more power supplies, and other components associated with generating, managing and distributing power for the apparatus 400.
The multimedia component 408 includes a screen providing an output interface between the apparatus 400 and the user. In some embodiments, the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, it may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, swipes and gestures on the touch panel. The touch sensors may not only sense the boundary of a touch or swipe action, but also detect the duration and pressure associated with the touch or swipe. In some embodiments, the multimedia component 408 includes a front camera and/or a rear camera. When the apparatus 400 is in an operation mode, such as a shooting mode or a video mode, the front camera and/or the rear camera can receive external multimedia data. Each front or rear camera may be a fixed optical lens system or have focusing and optical zoom capability.
The audio component 410 is configured to output and/or input audio signals. For example, the audio component 410 includes a microphone (MIC), which is configured to receive external audio signals when the apparatus 400 is in an operation mode, such as a call mode, a recording mode or a voice recognition mode. The received audio signals may be further stored in the memory 404 or transmitted via the communication component 416. In some embodiments, the audio component 410 also includes a loudspeaker for outputting audio signals.
The I/O interface 412 provides an interface between the processing component 402 and peripheral interface modules, which may be a keyboard, a click wheel, buttons, and the like. These buttons may include, but are not limited to: a home button, volume buttons, a start button and a lock button.
The sensor component 414 includes one or more sensors for providing state assessments of various aspects of the apparatus 400. For example, the sensor component 414 may detect the open/closed state of the apparatus 400 and the relative positioning of components (for example, the display and keypad of the apparatus 400), and may also detect a change in position of the apparatus 400 or of one of its components, the presence or absence of user contact with the apparatus 400, the orientation or acceleration/deceleration of the apparatus 400, and a change in its temperature. The sensor component 414 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. It may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor component 414 may also include an accelerometer, a gyroscope, a magnetic sensor, a pressure sensor or a temperature sensor.
The communication component 416 is configured to facilitate wired or wireless communication between the apparatus 400 and other devices. The apparatus 400 can access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In an exemplary embodiment, the communication component 416 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 416 also includes a near-field communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on radio-frequency identification (RFID) technology, Infrared Data Association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology and other technologies.
In an exemplary embodiment, the apparatus 400 may be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors or other electronic components, for performing the above method of determining image tag accuracy.
In an exemplary embodiment, a non-transitory computer-readable storage medium including instructions is also provided, for example the memory 404 including instructions, which can be executed by the processor 420 of the apparatus 400 to complete the above method of determining image tag accuracy. For example, the non-transitory computer-readable storage medium may be a ROM, a random-access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
Those skilled in the art will readily conceive of other embodiments of the disclosure after considering the specification and practising the disclosure. This application is intended to cover any variations, uses or adaptations of the disclosure that follow its general principles and include common knowledge or customary technical means in the art not disclosed herein. The specification and examples are to be regarded as exemplary only, with the true scope and spirit of the disclosure being indicated by the following claims.
It should be understood that the present disclosure is not limited to the precise constructions described above and shown in the accompanying drawings, and that various modifications and changes may be made without departing from its scope. The scope of the present disclosure is limited only by the appended claims.
Claims (10)
1. A method of determining image tag accuracy, characterised by comprising:
extracting an image feature vector of an input image;
computing a word embedding vector for each tag corresponding to the input image;
concatenating the image feature vector with each word embedding vector to obtain joint feature vectors; and
computing the accuracy of each tag based on the joint feature vectors.
2. The method according to claim 1, wherein extracting the image feature vector of the input image comprises:
extracting the image feature vector of the input image using a convolutional neural network.
3. The method according to claim 1, wherein computing the word embedding vector of each tag corresponding to the input image comprises:
computing the word embedding vector of each tag corresponding to the input image through a word2vec model.
4. The method according to claim 1, wherein computing the accuracy of each tag based on the joint feature vectors comprises:
computing the accuracy of each tag using a multilayer perceptron, based on the joint feature vectors.
5. An apparatus for determining image tag accuracy, characterised by comprising:
an image feature vector extraction module for extracting an image feature vector of an input image;
a word embedding vector calculation module for computing a word embedding vector for each tag corresponding to the input image;
a concatenation module for concatenating the image feature vector with each word embedding vector to obtain joint feature vectors; and
an accuracy calculation module for computing the accuracy of each tag based on the joint feature vectors.
6. The apparatus according to claim 5, wherein the image feature vector extraction module comprises:
an image feature vector extraction submodule for extracting the image feature vector of the input image using a convolutional neural network.
7. The apparatus according to claim 5, wherein the word embedding vector calculation module comprises:
a word embedding vector calculation submodule for computing the word embedding vector of each tag corresponding to the input image through a word2vec model.
8. The apparatus according to claim 5, wherein the accuracy calculation module comprises:
an accuracy calculation submodule for computing the accuracy of each tag using a multilayer perceptron, based on the joint feature vectors.
9. An apparatus for determining image tag accuracy, characterised by comprising:
a processor; and
a memory for storing processor-executable instructions;
wherein the processor is configured to:
extract an image feature vector of an input image;
compute a word embedding vector for each tag corresponding to the input image;
concatenate the image feature vector with each word embedding vector to obtain joint feature vectors; and
compute the accuracy of each tag based on the joint feature vectors.
10. A computer-readable storage medium on which computer program instructions are stored, characterised in that the program instructions, when executed by a processor, implement the steps of the method according to any one of claims 1 to 4.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910340159.0A CN110110772A (en) | 2019-04-25 | 2019-04-25 | Method, apparatus and computer-readable storage medium for determining image tag accuracy |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910340159.0A CN110110772A (en) | 2019-04-25 | 2019-04-25 | Method, apparatus and computer-readable storage medium for determining image tag accuracy |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110110772A true CN110110772A (en) | 2019-08-09 |
Family
ID=67486806
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910340159.0A Pending CN110110772A (en) | 2019-04-25 | 2019-04-25 | Determine the method, apparatus and computer readable storage medium of image tag accuracy |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110110772A (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107085585A (en) * | 2016-02-12 | 2017-08-22 | Adobe Inc. | Accurate tag relevance prediction for image search
CN107798624A (en) * | 2017-10-30 | 2018-03-13 | Beihang University | Technical tag recommendation method for a software question-and-answer community
CN107944447A (en) * | 2017-12-15 | 2018-04-20 | Beijing Xiaomi Mobile Software Co., Ltd. | Image classification method and device
CN108491469A (en) * | 2018-03-07 | 2018-09-04 | Zhejiang University | Neural collaborative filtering concept-description-word recommendation algorithm introducing concept tags
CN109002852A (en) * | 2018-07-11 | 2018-12-14 | Tencent Technology (Shenzhen) Co., Ltd. | Image processing method and apparatus, computer-readable storage medium and computer device
- 2019-04-25: CN application CN201910340159.0A filed; published as CN110110772A; status Pending
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107085585A (en) * | 2016-02-12 | 2017-08-22 | Adobe Inc. | Accurate tag relevance prediction for image search
CN107798624A (en) * | 2017-10-30 | 2018-03-13 | Beihang University | Technical tag recommendation method for a software question-and-answer community
CN107944447A (en) * | 2017-12-15 | 2018-04-20 | Beijing Xiaomi Mobile Software Co., Ltd. | Image classification method and device
CN108491469A (en) * | 2018-03-07 | 2018-09-04 | Zhejiang University | Neural collaborative filtering concept-description-word recommendation algorithm introducing concept tags
CN109002852A (en) * | 2018-07-11 | 2018-12-14 | Tencent Technology (Shenzhen) Co., Ltd. | Image processing method and apparatus, computer-readable storage medium and computer device
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109117862B (en) | Image tag recognition methods, device and server | |
TWI747325B (en) | Target object matching method, target object matching device, electronic equipment and computer readable storage medium | |
CN104850828B (en) | Character recognition method and device | |
CN105631403B (en) | Face identification method and device | |
CN107944447B (en) | Image classification method and device | |
US10115019B2 (en) | Video categorization method and apparatus, and storage medium | |
RU2664003C2 (en) | Method and device for determining associate users | |
CN107239535A (en) | Similar pictures search method and device | |
CN104125396A (en) | Image shooting method and device | |
CN106485567B (en) | Article recommendation method and device | |
CN106557759B (en) | Signpost information acquisition method and device | |
CN107688781A (en) | Face identification method and device | |
CN105758319B (en) | The method and apparatus for measuring target object height by mobile terminal | |
CN106600530B (en) | Picture synthesis method and device | |
CN109961094B (en) | Sample acquisition method and device, electronic equipment and readable storage medium | |
CN108021897B (en) | Picture question and answer method and device | |
CN109040605A (en) | Shoot bootstrap technique, device and mobile terminal and storage medium | |
WO2020114236A1 (en) | Keypoint detection method and apparatus, electronic device, and storage medium | |
CN110781323A (en) | Method and device for determining label of multimedia resource, electronic equipment and storage medium | |
CN109360197A (en) | Processing method, device, electronic equipment and the storage medium of image | |
CN109034150B (en) | Image processing method and device | |
CN105100193B (en) | Cloud business card recommended method and device | |
CN111242303A (en) | Network training method and device, and image processing method and device | |
US20200135205A1 (en) | Input method, device, apparatus, and storage medium | |
CN110717399A (en) | Face recognition method and electronic terminal equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
RJ01 | Rejection of invention patent application after publication | Application publication date: 20190809 |