WO2019141042A1 - Image classification method, apparatus and terminal - Google Patents

Image classification method, apparatus and terminal Download PDF

Info

Publication number
WO2019141042A1
WO2019141042A1 (PCT/CN2018/122432)
Authority
WO
WIPO (PCT)
Prior art keywords
image
feature vector
vector
classification
character
Prior art date
Application number
PCT/CN2018/122432
Other languages
English (en)
French (fr)
Inventor
张志伟
杨帆
Original Assignee
北京达佳互联信息技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京达佳互联信息技术有限公司 filed Critical 北京达佳互联信息技术有限公司
Publication of WO2019141042A1 publication Critical patent/WO2019141042A1/zh
Priority to US16/932,599 priority Critical patent/US11048983B2/en

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/20Natural language analysis
    • G06F40/279Recognition of textual entities
    • G06F40/284Lexical analysis, e.g. tokenisation or collocates
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/26Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/253Fusion techniques of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/254Fusion techniques of classification results, e.g. of results related to same input data
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/044Recurrent networks, e.g. Hopfield networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V10/809Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of classification results, e.g. where the classifiers operate on the same input data
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/60Type of objects
    • G06V20/62Text, e.g. of license plates, overlay texts or captions on TV images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10Character recognition
    • G06V30/14Image acquisition
    • G06V30/148Segmentation of character regions
    • G06V30/153Segmentation of character regions using recognition of characters or words
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10Character recognition
    • G06V30/14Image acquisition
    • G06V30/148Segmentation of character regions
    • G06V30/158Segmentation of character regions using character size, text spacings or pitch estimation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10Character recognition
    • G06V30/24Character recognition characterised by the processing or recognition method
    • G06V30/248Character recognition characterised by the processing or recognition method involving plural approaches, e.g. verification by template match; Resolving confusion among similar patterns, e.g. "O" versus "Q"
    • G06V30/2528Combination of methods, e.g. classifiers, working on the same input data

Definitions

  • the present application relates to the field of image processing technologies, and in particular, to an image classification method, apparatus, and terminal.
  • Deep learning has been widely used in video images, speech recognition, natural language processing and other related fields.
  • The convolutional neural network, as an important branch of deep learning, has greatly improved the accuracy of prediction results in computer vision tasks such as target detection and classification, owing to its superior fitting ability and end-to-end global optimization ability.
  • At present, when an image is classified, the label corresponding to the image is matched under a predetermined label system according to the features of the image itself, and the classification of the image is determined according to the label; the resulting classification is poor in accuracy.
  • In practical application scenarios, after uploading an image, the user often adds a short text description to it, and this text description has certain reference value for classifying the image. It can be seen that how to obtain comprehensive information about an image and classify the image according to that information, so as to improve the accuracy of image classification, is a problem to be solved by those skilled in the art.
  • The embodiments of the present application provide an image classification method, device, and terminal, so as to solve the problem in the prior art that the accuracy of image classification results is poor.
  • According to one aspect of the present application, an image classification method includes: determining, by a convolutional neural network, an image feature vector corresponding to an image, wherein the image corresponds to text description information; processing the text description information by an embedding network to obtain a character feature vector; splicing the image feature vector and the character feature vector to obtain a graphic (image-text) feature vector; and determining the classification corresponding to the image according to the processing results of a deep neural network on the image feature vector, the character feature vector, and the graphic feature vector.
  • In some embodiments, the step of processing the text description information by the embedding network to obtain the character feature vector includes: removing the stop words from the text description information to obtain a plurality of word segments; determining, for each word segment, the position information of the word segment in a character feature set; generating the index value corresponding to the word segment according to the position information, wherein the character feature set is trained from the text description information corresponding to sample images; invoking the embedding network, which determines the description vector corresponding to each word segment according to its index value; and weighted-averaging the description vectors corresponding to the word segments to obtain the character feature vector.
  • In some embodiments, the step of splicing the image feature vector and the character feature vector to obtain the graphic feature vector includes: mapping the character feature vector and the image feature vector to vectors of the same dimension; and splicing the mapped character feature vector and image feature vector by dimension to generate the graphic feature vector.
  • In some embodiments, before the step of determining an image feature vector corresponding to the image by the convolutional neural network, the method further includes: acquiring sample images; determining, for each sample image, whether the sample image corresponds to text description information; if not, determining that the text feature subset corresponding to the sample image is empty; if yes, removing the stop words from the text description information to obtain a description set containing multiple word segments; extracting the text feature subset from the description set based on a preset label system; and taking the union of the text feature subsets corresponding to the sample images to obtain a text feature set.
  • In some embodiments, the step of determining the classification corresponding to the image according to the processing results of the deep neural network on the image feature vector, the character feature vector, and the graphic feature vector includes: inputting the image feature vector, the character feature vector, and the graphic feature vector into the deep neural network respectively, to obtain a first classification result vector corresponding to the image feature vector, a second classification result vector corresponding to the character feature vector, and a third classification result vector corresponding to the graphic feature vector; performing weighted summation on the first, second, and third classification result vectors to obtain a target result vector; and determining the classification corresponding to the image according to the target result vector.
  • According to another aspect of the present application, an image classification apparatus includes: a determination module configured to determine, by a convolutional neural network, an image feature vector corresponding to an image, wherein the image corresponds to text description information; a vector generation module configured to process the text description information by an embedding network to obtain a character feature vector; a splicing module configured to splice the image feature vector and the character feature vector to obtain a graphic feature vector; and a classification module configured to determine the classification corresponding to the image according to the processing results of a deep neural network on the image feature vector, the character feature vector, and the graphic feature vector.
  • In some embodiments, the vector generation module includes: a word segmentation sub-module configured to remove the stop words from the text description information to obtain a plurality of word segments; a position determination sub-module configured to determine, for each word segment, the position information of the word segment in the character feature set; an index value generation sub-module configured to generate the index value corresponding to the word segment according to the position information, wherein the character feature set is trained from the text description information corresponding to sample images; a first calling sub-module configured to invoke the embedding network, which determines the description vector corresponding to each word segment according to its index value; and a second calling sub-module configured to weighted-average the description vectors corresponding to the word segments to obtain the character feature vector.
  • In some embodiments, the splicing module includes: a mapping sub-module configured to map the character feature vector and the image feature vector to vectors of the same dimension; and a splicing sub-module configured to splice the mapped character feature vector and image feature vector by dimension to generate the graphic feature vector.
  • In some embodiments, the apparatus further includes: an acquisition module configured to acquire sample images before the determination module determines the image feature vector corresponding to the image by the convolutional neural network; a subset determination module configured to determine, for each sample image, whether the sample image corresponds to text description information, and if not, to determine that the text feature subset corresponding to the sample image is empty, or if yes, to remove the stop words from the text description information to obtain a description set containing a plurality of word segments; an extraction module configured to extract the text feature subset from the description set based on the preset label system; and a feature set determination module configured to take the union of the text feature subsets corresponding to the sample images to obtain the text feature set.
  • In some embodiments, the classification module includes: an input sub-module configured to input the image feature vector, the character feature vector, and the graphic feature vector into the deep neural network respectively, to obtain a first classification result vector corresponding to the image feature vector, a second classification result vector corresponding to the character feature vector, and a third classification result vector corresponding to the graphic feature vector; a processing sub-module configured to perform weighted summation on the first, second, and third classification result vectors to obtain a target result vector; and a result determination sub-module configured to determine the classification corresponding to the image according to the target result vector.
  • According to still another aspect of the present application, a terminal is provided, including: a memory, a processor, and an image classification program stored on the memory and operable on the processor, where the image classification program, when executed by the processor, implements the steps of any image classification method of the present application.
  • According to yet another aspect of the present application, a computer readable storage medium stores an image classification program, and when the image classification program is executed by the processor, the steps of any image classification method of the present application are implemented.
  • The image classification scheme provided by the embodiments of the present application obtains the image feature vector corresponding to the image with a convolutional neural network as the backbone network for image feature extraction, obtains the character feature vector of the text description information corresponding to the image with an embedding network as the backbone network for text feature extraction, and splices the image feature vector and the character feature vector to obtain the graphic feature vector. With a deep neural network as the backbone network, the weights of the image under different labels are determined according to the image feature vector, the character feature vector, and the graphic feature vector, thereby determining the classification corresponding to the image, which can improve the accuracy of image classification.
  • FIG. 1 is a flow chart showing the steps of an image classification method according to Embodiment 1 of the present application.
  • FIG. 2 is a flow chart showing the steps of an image classification method according to Embodiment 2 of the present application.
  • FIG. 3 is a structural block diagram of an image classification apparatus according to Embodiment 3 of the present application.
  • FIG. 4 is a structural block diagram of a terminal according to Embodiment 4 of the present application.
  • Referring to FIG. 1, a flow chart of the steps of an image classification method according to Embodiment 1 of the present application is shown.
  • Step 101 Determine an image feature vector corresponding to the image by using a convolutional neural network.
  • the image corresponds to text description information.
  • The text description information may be text description information additionally uploaded by the user after uploading the image, or may be text description information contained in the image.
  • The image may be a single frame of a video, or may be a single multimedia image.
  • An image is input into the convolutional neural network; after passing through convolutional layers and/or pooling layers, an image feature vector is obtained. The image feature vector includes a plurality of points, each point corresponding to a feature map and a weight value.
  • Step 102 Processing the text description information by using an embedded network to obtain a character feature vector.
  • In a specific implementation, the text description information is first segmented to obtain a plurality of word segments; the description vector corresponding to each word segment is determined based on a preset character feature set; and finally the description vectors corresponding to the word segments are weighted and averaged dimension-wise to obtain the character feature vector. The obtained character feature vector includes a plurality of points, each point corresponding to a character feature in the character feature set.
  • step 102 is not limited to being executed after step 101, and may be performed in parallel with step 101 or before step 101.
  • Step 103 splicing the image feature vector and the character feature vector to obtain a graphic feature vector.
  • The image feature vector and the character feature vector each comprise a plurality of dimensions, each dimension corresponding to a point of the vector; each dimension of the graphic feature vector obtained by splicing the two feature vectors can reflect both image features and text features. For example, if the image feature vector contains ten dimensions (i.e., ten points) and the character feature vector contains ten dimensions, the spliced graphic feature vector contains twenty dimensions.
  • Step 104 Determine the classification corresponding to the image according to the processing result of the image feature vector, the character feature vector, and the graphic feature vector by the depth neural network.
  • the depth neural network respectively determines the probability values corresponding to the points in the image feature vector, the character feature vector and the graphic feature vector, and obtains three classification result vectors.
  • the three classification result vectors are weighted and averaged to obtain a target result vector.
  • The feature tag corresponding to the point with the highest probability value in the target result vector is determined as the tag of the image; once the tag is determined, the classification to which the image belongs is determined according to the tag.
  • the tag can also be directly used as the classification to which the image belongs.
  • The image classification method provided by the embodiments of the present application obtains the image feature vector corresponding to the image with a convolutional neural network as the backbone network for image feature extraction, obtains the character feature vector of the text description information corresponding to the image with an embedding network as the backbone network for text feature extraction, and splices the image feature vector and the character feature vector to obtain the graphic feature vector. With a deep neural network as the backbone network, the weights of the image under different labels are determined according to the image feature vector, the character feature vector, and the graphic feature vector, thereby determining the classification corresponding to the image, which can improve the accuracy of image classification.
  • Referring to FIG. 2, a flow chart of the steps of an image classification method according to Embodiment 2 of the present application is shown.
  • Step 201 Determine an image feature vector corresponding to the image by using a convolutional neural network.
  • the image corresponds to text description information.
  • The text description information may be text description information additionally uploaded by the user after uploading the image, or may be text description information contained in the image.
  • Step 202 Remove the stop words in the text description information to obtain a plurality of word segments.
  • the stop word table is pre-set in the system.
  • When the text description information is processed, the phrases in the text description information are matched against the stop word list; if a match succeeds, the phrase is determined to be a stop word and is removed from the text description information. After all stop words in the text description information have been removed, a plurality of word segments is obtained.
  • A stop word is a word with no actual meaning; the stop word list can be set by a person skilled in the art according to actual needs, which is not specifically limited in the embodiments of the present application.
  • Step 203 For each participle, determine location information of the word segment in the character feature set, and generate an index value corresponding to the word segment according to the location information.
  • the character feature set is trained by the text description information corresponding to the sample image, and a way of training the text description information is as follows:
  • the sample image may correspond to text description information or no corresponding text description information.
  • The number of sample images and their selection may be set by a person skilled in the art according to actual needs, which is not specifically limited in the embodiments of the present application. The larger the number of samples, the more comprehensive the text features contained in the trained character feature set.
  • For each sample image, it is determined whether the sample image corresponds to text description information; if not, the text feature subset corresponding to the sample image is determined to be empty; if yes, the stop words in the text description information are removed to obtain a description set containing multiple word segments, where the description set corresponding to a single sample image may be denoted S_u. A text feature subset is then extracted from the description set based on the preset label system, where the text feature subset corresponding to a single sample image may be denoted S_i, and an empty set is denoted null.
  • Step 204 Invoking an embedded network, and determining, by the embedded network, a description vector corresponding to each participle according to an index value corresponding to each participle.
  • the character feature set includes a plurality of character features, and each character feature corresponds to a position in the character feature set, and each position corresponds to an index value.
  • the position label can be used as an index value.
  • The index value corresponding to each word segment is input into the embedding network, and the embedding network determines the description vector W_i corresponding to each word segment according to its index value.
  • Step 205 Weighting and averaging the description vectors corresponding to each participle to obtain a character feature vector.
  • A plurality of word segments is obtained by segmenting the text information of the image to be predicted. For each word segment, it is determined whether the pre-trained character feature set contains the word segment; if so, the description vector corresponding to the word segment is further determined; otherwise, the word segment has no corresponding description vector. That is, if a word segment is a description tag, a description vector is generated for it; conversely, if a word segment is not a description tag, no description vector is generated for it.
  • The weights corresponding to the word segments may be the same or different. If the weights corresponding to the word segments are the same, the description vectors corresponding to the word segments may be weighted and averaged dimension-wise by the following formula to obtain the character feature vector: F_text = (1/N) Σ_{i=1}^{N} W_i, where F_text is the character feature vector, W_i is the description vector corresponding to the i-th word segment, and N is the number of description tags in the text information of the current image.
  • Step 206 Map the character feature vector and the image feature vector to a vector of the same dimension.
  • Since the image feature vector and the character feature vector, after being output by their respective networks, are not in the same space (i.e., their dimensions differ), the two feature vectors need to be spatially mapped separately so that they are mapped into the same space, that is, to vectors of the same dimension. Specifically, a fully connected layer may be used to spatially map the features in the character feature vector and the image feature vector.
  • Step 207: Perform dimension splicing on the mapped character feature vector and image feature vector to generate the graphic feature vector.
  • For example, if the mapped character feature vector contains five dimensions (1-5) and the mapped image feature vector contains five dimensions (1-5), the first dimension of the image feature vector can be spliced after the fifth dimension of the character feature vector to generate a graphic feature vector containing ten dimensions, where each dimension of the graphic feature vector corresponds to a feature tag.
  • Step 208 Determine the classification corresponding to the image according to the processing result of the image feature vector, the character feature vector, and the graphic feature vector by the depth neural network.
  • In a specific implementation, the image feature vector, the character feature vector, and the graphic feature vector may be input into the deep neural network respectively, to obtain a first classification result vector corresponding to the image feature vector, a second classification result vector corresponding to the character feature vector, and a third classification result vector corresponding to the graphic feature vector.
  • Each classification result vector includes a plurality of points, each point corresponding to one feature tag, and each point corresponds to a probability value.
  • the first classification result vector, the second classification result vector, and the third classification result vector are weighted and summed to obtain a target result vector.
  • Specifically, the target result vector P can be obtained by the following formula: P = W_text·P_text + W_image·P_image + W_text-image·P_text-image, where W_image, W_text, and W_text-image are the weights of the first, second, and third classification result vectors respectively, and P_image, P_text, and P_text-image are the first, second, and third classification result vectors respectively.
  • Finally, the classification corresponding to the image is determined according to the target result vector. The feature tag corresponding to the image is determined according to the target result vector, where the feature tag is the one corresponding to the point with the highest probability value in the target result vector, and the classification to which the image belongs is determined according to the feature tag.
  • The image classification method provided by the embodiments of the present application obtains the image feature vector corresponding to the image with a convolutional neural network as the backbone network for image feature extraction, obtains the character feature vector of the text description information corresponding to the image with an embedding network as the backbone network for text feature extraction, and splices the image feature vector and the character feature vector to obtain the graphic feature vector. With a deep neural network as the backbone network, the weights of the image under different labels are determined according to the image feature vector, the character feature vector, and the graphic feature vector, thereby determining the classification corresponding to the image, which can improve the accuracy of image classification.
  • Referring to FIG. 3, a structural block diagram of an image classification apparatus according to Embodiment 3 of the present application is shown.
  • The image classification apparatus of the embodiment of the present application may include: a determining module 301 configured to determine, by a convolutional neural network, an image feature vector corresponding to an image, wherein the image corresponds to text description information; a vector generation module 302 configured to process the text description information by an embedding network to obtain a character feature vector; a splicing module 303 configured to splice the image feature vector and the character feature vector to obtain a graphic feature vector; and a classification module 304 configured to determine the classification corresponding to the image according to the processing results of a deep neural network on the image feature vector, the character feature vector, and the graphic feature vector.
  • The vector generation module 302 can include: a word segmentation sub-module 3021 configured to remove the stop words from the text description information to obtain a plurality of word segments; a position determination sub-module 3022 configured to determine, for each word segment, the position information of the word segment in the character feature set; an index value generation sub-module 3023 configured to generate the index value corresponding to the word segment according to the position information, wherein the character feature set is trained from the text description information corresponding to sample images; a first calling sub-module 3024 configured to invoke the embedding network, which determines the description vector corresponding to each word segment according to its index value; and a second calling sub-module 3025 configured to weighted-average the description vectors corresponding to the word segments dimension-wise to obtain the character feature vector.
  • The splicing module 303 can include: a mapping sub-module 3031 configured to map the character feature vector and the image feature vector to vectors of the same dimension; and a splicing sub-module 3032 configured to splice the mapped character feature vector and image feature vector by dimension to generate the graphic feature vector.
  • The apparatus may further include: an obtaining module 305 configured to acquire sample images before the determining module 301 determines the image feature vector corresponding to the image by the convolutional neural network; a subset determining module 306 configured to determine, for each sample image, whether the sample image corresponds to text description information, and if not, to determine that the text feature subset corresponding to the sample image is empty, or if yes, to remove the stop words from the text description information to obtain a description set containing multiple word segments; an extracting module 307 configured to extract the text feature subset from the description set based on the preset label system; and a feature set determining module 308 configured to take the union of the text feature subsets corresponding to the sample images to obtain the text feature set.
  • The classification module 304 can include: an input sub-module 3041 configured to input the image feature vector, the character feature vector, and the graphic feature vector into the deep neural network respectively, to obtain a first classification result vector corresponding to the image feature vector, a second classification result vector corresponding to the character feature vector, and a third classification result vector corresponding to the graphic feature vector; a processing sub-module 3042 configured to perform weighted summation on the first classification result vector, the second classification result vector, and the third classification result vector to obtain a target result vector; and a result determination sub-module 3043 configured to determine the classification corresponding to the image according to the target result vector.
  • the image classification device of the embodiment of the present application is used to implement the corresponding image classification method in the first embodiment and the second embodiment, and has the beneficial effects corresponding to the method embodiment, and details are not described herein again.
  • Referring to FIG. 4, a structural block diagram of a terminal for image classification according to Embodiment 4 of the present application is shown.
  • The terminal of the embodiment of the present application may include: a memory, a processor, and an image classification program stored on the memory and operable on the processor, where the image classification program, when executed by the processor, implements the steps of any image classification method in the present application.
  • FIG. 4 is a block diagram of an image classification terminal 600, according to an exemplary embodiment.
  • terminal 600 can be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, and the like.
  • terminal 600 can include one or more of the following components: processing component 602, memory 604, power component 606, multimedia component 608, audio component 610, input/output (I/O) interface 612, sensor component 614, And a communication component 616.
  • Processing component 602 typically controls the overall operation of device 600, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations.
  • Processing component 602 can include one or more processors 620 to execute instructions to perform all or part of the steps of the above described methods.
  • In addition, processing component 602 can include one or more modules to facilitate interaction between processing component 602 and other components.
  • processing component 602 can include a multimedia module to facilitate interaction between multimedia component 608 and processing component 602.
  • Memory 604 is configured to store various types of data to support operation at terminal 600. Examples of such data include instructions for any application or method operating on terminal 600, contact data, phone book data, messages, pictures, videos, and the like.
  • the memory 604 can be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read only memory (EEPROM), erasable Programmable Read Only Memory (EPROM), Programmable Read Only Memory (PROM), Read Only Memory (ROM), Magnetic Memory, Flash Memory, Disk or Optical Disk.
  • Power component 606 provides power to various components of terminal 600.
  • Power component 606 can include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for terminal 600.
  • the multimedia component 608 includes a screen that provides an output interface between the terminal 600 and the user.
  • the screen can include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen can be implemented as a touch screen to receive input signals from the user.
  • the touch panel includes one or more touch sensors to sense touches, slides, and gestures on the touch panel. The touch sensor can sense not only the boundaries of the touch or sliding action, but also the duration and pressure associated with the touch or slide operation.
  • the multimedia component 608 includes a front camera and/or a rear camera. When the terminal 600 is in an operation mode such as a shooting mode or a video mode, the front camera and/or the rear camera can receive external multimedia data. Each front and rear camera can be a fixed optical lens system or have focal length and optical zoom capabilities.
  • the audio component 610 is configured to output and/or input an audio signal.
  • the audio component 610 includes a microphone (MIC) that is configured to receive an external audio signal when the terminal 600 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode.
  • the received audio signal may be further stored in memory 604 or transmitted via communication component 616.
  • audio component 610 also includes a speaker for outputting an audio signal.
  • the I/O interface 612 provides an interface between the processing component 602 and the peripheral interface module, which may be a keyboard, a click wheel, a button, or the like. These buttons may include, but are not limited to, a home button, a volume button, a start button, and a lock button.
  • Sensor assembly 614 includes one or more sensors for providing terminal 600 with various aspects of status assessment.
  • For example, sensor component 614 can detect an open/closed state of terminal 600 and the relative positioning of components (such as the display and keypad of terminal 600); sensor component 614 can also detect a change in the position of terminal 600 or a component of terminal 600, the presence or absence of user contact with terminal 600, the orientation or acceleration/deceleration of terminal 600, and temperature changes of terminal 600.
  • Sensor assembly 614 can include a proximity sensor configured to detect the presence of nearby objects without any physical contact.
  • Sensor assembly 614 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications.
  • the sensor component 614 can also include an acceleration sensor, a gyro sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
  • Communication component 616 is configured to facilitate wired or wireless communication between terminal 600 and other devices.
  • the terminal 600 can access a wireless network based on a communication standard such as WiFi, 2G or 3G, or a combination thereof.
  • communication component 616 receives broadcast signals or broadcast associated information from an external broadcast management system via a broadcast channel.
  • communication component 616 also includes a near field communication (NFC) module to facilitate short range communication.
  • the NFC module can be implemented based on radio frequency identification (RFID) technology, infrared data association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
  • In an exemplary embodiment, terminal 600 may be implemented by one or more application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components, for performing the image classification methods, in particular image classification methods including:
  • determining, by the convolutional neural network, the image feature vector corresponding to the image, wherein the image corresponds to text description information; processing the text description information by the embedding network to obtain the character feature vector; splicing the image feature vector and the character feature vector to obtain the graphic feature vector; and determining the classification corresponding to the image according to the processing results of the deep neural network on the image feature vector, the character feature vector, and the graphic feature vector.
  • The step of processing the text description information by the embedding network to obtain the character feature vector includes: removing the stop words from the text description information to obtain a plurality of word segments; determining, for each word segment, the position information of the word segment in the character feature set; generating the index value corresponding to the word segment according to the position information, wherein the character feature set is trained from the text description information corresponding to sample images; invoking the embedding network, which determines the description vector corresponding to each word segment according to its index value; and weighted-averaging the description vectors corresponding to the word segments to obtain the character feature vector.
  • The step of splicing the image feature vector and the character feature vector to obtain the graphic feature vector includes: mapping the character feature vector and the image feature vector to vectors of the same dimension; and splicing the mapped character feature vector and image feature vector by dimension to generate the graphic feature vector.
  • Before the step of determining the image feature vector corresponding to the image by the convolutional neural network, the method further includes: acquiring sample images; determining, for each sample image, whether the sample image corresponds to text description information; if not, determining that the text feature subset corresponding to the sample image is empty; if yes, removing the stop words from the text description information to obtain a description set containing multiple word segments; extracting the text feature subset from the description set based on the preset label system; and taking the union of the text feature subsets corresponding to the sample images to obtain the text feature set.
  • The step of determining the classification corresponding to the image according to the processing results of the deep neural network on the image feature vector, the character feature vector, and the graphic feature vector includes: inputting the image feature vector, the character feature vector, and the graphic feature vector into the deep neural network respectively, to obtain a first classification result vector corresponding to the image feature vector, a second classification result vector corresponding to the character feature vector, and a third classification result vector corresponding to the graphic feature vector; performing weighted summation on the first classification result vector, the second classification result vector, and the third classification result vector to obtain a target result vector; and determining the classification corresponding to the image according to the target result vector.
  • a non-transitory computer readable storage medium comprising instructions, such as a memory 604 comprising instructions executable by processor 620 of terminal 600 to perform the image classification method described above.
  • the non-transitory computer readable storage medium can be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, and an optical data storage device.
  • The terminal provided by the embodiments of the present application obtains the image feature vector corresponding to the image with a convolutional neural network as the backbone network for image feature extraction, obtains the character feature vector of the text description information corresponding to the image with an embedding network as the backbone network for text feature extraction, and splices the image feature vector and the character feature vector to obtain the graphic feature vector. With a deep neural network as the backbone network, the weights of the image under different labels are determined according to the image feature vector, the character feature vector, and the graphic feature vector, thereby determining the classification of the image, which can improve the accuracy of image classification.
  • The description here is relatively simple, and for the relevant parts, reference may be made to the description of the method embodiments.
  • modules in the devices of the embodiments can be adaptively changed and placed in one or more devices different from the embodiment.
  • the modules or units or components of the embodiments may be combined into one module or unit or component, and further they may be divided into a plurality of sub-modules or sub-units or sub-components.
  • All of the features disclosed in this specification (including the accompanying claims, the abstract, and the drawings), and all of the processes or units of any method or device so disclosed, may be combined in any combination.
  • Each feature disclosed in this specification (including the accompanying claims, the abstract and the drawings) may be replaced by alternative features that provide the same, equivalent or similar purpose.
  • the various component embodiments of the present application can be implemented in hardware, or in a software module running on one or more processors, or in a combination thereof.
  • a microprocessor or digital signal processor may be used in practice to implement some or all of the functionality of some or all of the components of the image classification scheme in accordance with embodiments of the present application.
  • the application can also be implemented as a device or device program (e.g., a computer program and a computer program product) for performing some or all of the methods described herein.
  • Such a program implementing the present application may be stored on a computer readable medium or may be in the form of one or more signals. Such signals may be downloaded from an Internet website, provided on a carrier signal, or provided in any other form.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Computational Linguistics (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Mathematical Physics (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Image Analysis (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

An image classification method, apparatus and terminal, wherein the method includes: determining, by a convolutional neural network, an image feature vector corresponding to an image (101), wherein the image corresponds to text description information; processing the text description information by an embedding network to obtain a character feature vector (102); splicing the image feature vector and the character feature vector to obtain a graphic feature vector (103); and determining the classification corresponding to the image according to the processing results of a deep neural network on the image feature vector, the character feature vector, and the graphic feature vector (104). The image classification method can improve the accuracy of image classification.

Description

Image classification method, apparatus and terminal
This application claims priority to Chinese Patent Application No. 201810055063.5, entitled "Image classification method, apparatus and terminal", filed with the Chinese Patent Office on January 19, 2018, the entire contents of which are incorporated herein by reference.
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image classification method, apparatus, and terminal.
Background
Deep learning has been widely applied in video images, speech recognition, natural language processing, and other related fields. As an important branch of deep learning, the convolutional neural network, owing to its superior fitting ability and end-to-end global optimization ability, has greatly improved the accuracy of prediction results in computer vision tasks such as target detection and classification.
At present, when an image is classified, the label corresponding to the image is matched under a predetermined label system according to the features of the image itself, and the classification to which the image belongs is determined according to the label; the resulting classification is poor in accuracy. In practical application scenarios, after uploading an image, the user often appends a short text description to it, and this text description also has certain reference value for classifying the image. It can be seen that how to obtain comprehensive information about an image and classify the image according to that information, so as to improve the accuracy of image classification, is a problem to be solved by those skilled in the art.
Summary
Embodiments of the present application provide an image classification method, apparatus, and terminal, so as to solve the problem in the prior art that the accuracy of image classification results is poor.
According to one aspect of the present application, an image classification method is provided, the method including: determining, by a convolutional neural network, an image feature vector corresponding to an image, wherein the image corresponds to text description information; processing the text description information by an embedding network to obtain a character feature vector; splicing the image feature vector and the character feature vector to obtain a graphic (image-text) feature vector; and determining the classification corresponding to the image according to the processing results of a deep neural network on the image feature vector, the character feature vector, and the graphic feature vector.
In some embodiments, the step of processing the text description information by the embedding network to obtain the character feature vector includes: removing the stop words from the text description information to obtain a plurality of word segments; determining, for each word segment, the position information of the word segment in a character feature set, and generating the index value corresponding to the word segment according to the position information, wherein the character feature set is trained from the text description information corresponding to sample images; invoking the embedding network, which determines the description vector corresponding to each word segment according to its index value; and weighted-averaging the description vectors corresponding to the word segments dimension-wise to obtain the character feature vector.
In some embodiments, the step of splicing the image feature vector and the character feature vector to obtain the graphic feature vector includes: mapping the character feature vector and the image feature vector to vectors of the same dimension; and splicing the mapped character feature vector and image feature vector by dimension to generate the graphic feature vector.
In some embodiments, before the step of determining the image feature vector corresponding to the image by the convolutional neural network, the method further includes: acquiring sample images; determining, for each sample image, whether the sample image corresponds to text description information; if not, determining that the text feature subset corresponding to the sample image is empty; if yes, removing the stop words from the text description information to obtain a description set containing a plurality of word segments; extracting a text feature subset from the description set based on a preset label system; and taking the union of the text feature subsets corresponding to the sample images to obtain the text feature set.
In some embodiments, the step of determining the classification corresponding to the image according to the processing results of the deep neural network on the image feature vector, the character feature vector, and the graphic feature vector includes: inputting the image feature vector, the character feature vector, and the graphic feature vector into the deep neural network respectively, to obtain a first classification result vector corresponding to the image feature vector, a second classification result vector corresponding to the character feature vector, and a third classification result vector corresponding to the graphic feature vector; performing weighted summation on the first, second, and third classification result vectors to obtain a target result vector; and determining the classification corresponding to the image according to the target result vector.
According to another aspect of the present application, an image classification apparatus is provided, the apparatus including: a determining module configured to determine, by a convolutional neural network, an image feature vector corresponding to an image, wherein the image corresponds to text description information; a vector generation module configured to process the text description information by an embedding network to obtain a character feature vector; a splicing module configured to splice the image feature vector and the character feature vector to obtain a graphic feature vector; and a classification module configured to determine the classification corresponding to the image according to the processing results of a deep neural network on the image feature vector, the character feature vector, and the graphic feature vector.
In some embodiments, the vector generation module includes: a word segmentation sub-module configured to remove the stop words from the text description information to obtain a plurality of word segments; a position determination sub-module configured to determine, for each word segment, the position information of the word segment in the character feature set; an index value generation sub-module configured to generate the index value corresponding to the word segment according to the position information, wherein the character feature set is trained from the text description information corresponding to sample images; a first calling sub-module configured to invoke the embedding network, which determines the description vector corresponding to each word segment according to its index value; and a second calling sub-module configured to weighted-average the description vectors corresponding to the word segments dimension-wise to obtain the character feature vector.
In some embodiments, the splicing module includes: a mapping sub-module configured to map the character feature vector and the image feature vector to vectors of the same dimension; and a splicing sub-module configured to splice the mapped character feature vector and image feature vector by dimension to generate the graphic feature vector.
In some embodiments, the apparatus further includes: an acquisition module configured to acquire sample images before the determining module determines the image feature vector corresponding to the image by the convolutional neural network; a subset determination module configured to determine, for each sample image, whether the sample image corresponds to text description information, and if not, to determine that the text feature subset corresponding to the sample image is empty, or if yes, to remove the stop words from the text description information to obtain a description set containing a plurality of word segments; an extraction sub-module configured to extract the text feature subset from the description set based on the preset label system; and a feature set determination module configured to take the union of the text feature subsets corresponding to the sample images to obtain the text feature set.
In some embodiments, the classification module includes: an input sub-module configured to input the image feature vector, the character feature vector, and the graphic feature vector into the deep neural network respectively, to obtain a first classification result vector corresponding to the image feature vector, a second classification result vector corresponding to the character feature vector, and a third classification result vector corresponding to the graphic feature vector; a processing sub-module configured to perform weighted summation on the first, second, and third classification result vectors to obtain a target result vector; and a result determination sub-module configured to determine the classification corresponding to the image according to the target result vector.
According to still another aspect of the present application, a terminal is provided, including: a memory, a processor, and an image classification program stored on the memory and operable on the processor, where the image classification program, when executed by the processor, implements the steps of any image classification method in the present application.
According to yet another aspect of the present application, a computer readable storage medium is provided, on which an image classification program is stored, where the image classification program, when executed by a processor, implements the steps of any image classification method in the present application.
According to yet another aspect of the present application, a computer program product is provided, which, when run, implements the steps of any image classification method in the present application.
Compared with the prior art, the present application has the following advantages:
In the image classification scheme provided by the embodiments of the present application, the image feature vector corresponding to the image is obtained with a convolutional neural network as the backbone network for image feature extraction; the character feature vector of the text description information corresponding to the image is obtained with an embedding network as the backbone network for text feature extraction; the image feature vector and the character feature vector are spliced to obtain the graphic feature vector; and with a deep neural network as the backbone network, the weights of the image under different labels are determined according to the image feature vector, the character feature vector, and the graphic feature vector, thereby determining the classification corresponding to the image, which can improve the accuracy of image classification.
The above description is only an overview of the technical solution of the present application. In order that the technical means of the present application may be understood more clearly and implemented in accordance with the contents of the specification, and in order to make the above and other objects, features, and advantages of the present application more apparent, specific embodiments of the present application are set forth below.
Brief Description of the Drawings
In order to explain the technical solutions of the embodiments of the present application and of the prior art more clearly, the drawings required by the embodiments and the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present application; for a person of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
FIG. 1 is a flow chart of the steps of an image classification method according to Embodiment 1 of the present application;
FIG. 2 is a flow chart of the steps of an image classification method according to Embodiment 2 of the present application;
FIG. 3 is a structural block diagram of an image classification apparatus according to Embodiment 3 of the present application;
FIG. 4 is a structural block diagram of a terminal according to Embodiment 4 of the present application.
Detailed Description
To make the objects, technical solutions, and advantages of the present application clearer, the present application is further described in detail below with reference to the drawings and embodiments. Obviously, the described embodiments are only a part of the embodiments of the present application, not all of them. Based on the embodiments of the present application, all other embodiments obtained by a person of ordinary skill in the art without creative effort fall within the protection scope of the present application.
Embodiment 1
Referring to FIG. 1, a flow chart of the steps of an image classification method according to Embodiment 1 of the present application is shown.
The image classification method of this embodiment of the present application may include the following steps:
Step 101: Determine, by a convolutional neural network, an image feature vector corresponding to an image.
The image corresponds to text description information. The text description information may be text description information additionally uploaded by the user after uploading the image, or may be text description information contained in the image.
In this embodiment of the present application, the image may be a single frame of a video, or may be a single multimedia image. An image is input into the convolutional neural network; after passing through convolutional layers and/or pooling layers, an image feature vector is obtained. The image feature vector contains a plurality of points, each point corresponding to a feature map and a weight value. For the specific processing by which an image is input into a convolutional neural network to obtain the corresponding image feature vector, reference may be made to existing related technologies, which are not specifically limited in this embodiment of the present application.
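As an illustration of this step, the sketch below extracts an image feature vector with a pretrained CNN backbone. The patent does not name a specific network, so a torchvision ResNet-50 (with its classifier head removed, leaving the 2048-dimensional pooled features) is assumed here purely for illustration.

```python
# A minimal sketch of step 101, assuming a torchvision ResNet-50 backbone
# (the patent does not specify a particular CNN architecture).
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()  # drop the classifier head; keep pooled features
backbone.eval()

preprocess = T.Compose([
    T.Resize(256),
    T.CenterCrop(224),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def image_feature_vector(path: str) -> torch.Tensor:
    """Return the image feature vector for one image (shape: [2048])."""
    x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        return backbone(x).squeeze(0)
```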
Step 102: Process the text description information by an embedding network to obtain a character feature vector.
In a specific implementation, when the text description information is processed, the text is first segmented to obtain a plurality of word segments; the description vector corresponding to each word segment is determined based on a preset character feature set; and finally the description vectors corresponding to the word segments are weighted and averaged dimension-wise to obtain the character feature vector. The obtained character feature vector contains a plurality of points, each point corresponding to a character feature in the character feature set.
It should be noted that step 102 is not limited to being executed after step 101; it may also be executed in parallel with step 101 or before step 101.
Step 103: Splice the image feature vector and the character feature vector to obtain a graphic feature vector.
The image feature vector and the character feature vector each contain multiple dimensions, each dimension corresponding to a point of the vector; each dimension of the graphic feature vector obtained by splicing the two feature vectors can reflect both image features and text features. For example, if the image feature vector contains ten dimensions (i.e., ten points) and the character feature vector contains ten dimensions, the spliced graphic feature vector contains twenty dimensions.
Step 104: Determine the classification corresponding to the image according to the processing results of the deep neural network on the image feature vector, the character feature vector, and the graphic feature vector.
The deep neural network determines the probability values corresponding to the points in the image feature vector, the character feature vector, and the graphic feature vector respectively, yielding three classification result vectors. The three classification result vectors are weighted and averaged to obtain a target result vector. The feature tag corresponding to the point with the highest probability value in the target result vector is determined as the tag of the image; once the tag is determined, the classification to which the image belongs can be determined according to the tag. Of course, the tag may also be used directly as the classification to which the image belongs.
In the image classification method provided by this embodiment of the present application, the image feature vector corresponding to the image is obtained with a convolutional neural network as the backbone network for image feature extraction; the character feature vector of the text description information corresponding to the image is obtained with an embedding network as the backbone network for text feature extraction; the image feature vector and the character feature vector are spliced to obtain the graphic feature vector; and with a deep neural network as the backbone network, the weights of the image under different labels are determined according to the image feature vector, the character feature vector, and the graphic feature vector, thereby determining the classification corresponding to the image, which can improve the accuracy of image classification.
Embodiment 2
Referring to FIG. 2, a flow chart of the steps of an image classification method according to Embodiment 2 of the present application is shown.
The image classification method of this embodiment of the present application may specifically include the following steps:
Step 201: Determine, by a convolutional neural network, an image feature vector corresponding to an image.
The image corresponds to text description information. The text description information may be text description information additionally uploaded by the user after uploading the image, or may be text description information contained in the image.
For the specific manner of determining the image feature vector corresponding to the image by the convolutional neural network, reference may be made to existing related technologies, which are not specifically limited in this embodiment of the present application.
Step 202: Remove the stop words from the text description information to obtain a plurality of word segments.
A stop word list is preset in the system. When the text description information is processed, the phrases in the text description information are matched against the stop word list; if a match succeeds, the phrase is determined to be a stop word and is removed from the text description information. After all stop words in the text description information have been removed, a plurality of word segments is obtained. A stop word is a word with no actual meaning; the stop word list can be set by a person skilled in the art according to actual needs, which is not specifically limited in this embodiment of the present application.
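The following is a minimal sketch of step 202 under stated assumptions: the jieba segmenter stands in for whatever word segmentation the implementation uses, and the stop word list is illustrative only.

```python
# A minimal sketch of step 202: segment the text description and drop stop words.
# Both the jieba segmenter and the stop word list are illustrative assumptions;
# the patent leaves the segmenter and the list to the implementer.
import jieba

STOP_WORDS = {"喜欢", "朋友", "点赞", "的", "了"}  # illustrative stop word list

def segment_and_filter(text: str) -> list:
    """Segment `text` and drop stop words (punctuation is filtered out as well)."""
    return [w for w in jieba.cut(text)
            if w.isalnum() and w not in STOP_WORDS]
```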
Step 203: For each word segment, determine the position information of the word segment in the character feature set, and generate the index value corresponding to the word segment according to the position information.
The character feature set is trained from the text description information corresponding to the sample images. One way of training on the text description information is as follows:
First, the sample images are acquired.
A sample image may correspond to text description information, or may have no corresponding text description information. The number of sample images and their selection may be set by a person skilled in the art according to actual needs, which is not specifically limited in the embodiments of the present application. The larger the number of samples, the more comprehensive the text features contained in the trained character feature set.
Next, for each sample image, it is determined whether the sample image corresponds to text description information. If not, the text feature subset corresponding to the sample image is determined to be empty; if yes, the stop words in the text description information are removed to obtain a description set containing a plurality of word segments, where the description set corresponding to a single sample image may be denoted S_u. Based on the preset label system, a text feature subset is extracted from the description set, where the text feature subset corresponding to a single sample image may be denoted S_i, and an empty set is denoted null.
Finally, the union of the text feature subsets corresponding to the sample images is taken to obtain the character feature set:
S = ∪_{i ∈ X} S_i
where X denotes the full set of training sample images.
For example, when a user uploads a food tutorial and at the same time enters the text "糖醋里脊教程,喜欢的朋友点赞" ("sweet and sour pork tenderloin tutorial; give it a like if you like it"), the specific processing is as follows.
By segmenting the text information, the description set is obtained:
S_u = {糖醋里脊, 教程, 喜欢, 朋友, 点赞}
Since the description concerns a "food tutorial", in this sample only "糖醋里脊" (sweet and sour pork tenderloin) and "教程" (tutorial) can serve as description tags, so these two description tags are extracted from the description set to form the text feature subset corresponding to this sample image:
S_i = {糖醋里脊, 教程}
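The sketch below illustrates how the character feature set could be trained as described: per-sample description sets are filtered against the preset label system, and the union of the resulting subsets is taken. LABEL_SYSTEM is a hypothetical stand-in for the patent's preset label system, and segment_and_filter is reused from the earlier sketch.

```python
# A minimal sketch of training the character feature set: S_i per sample image,
# then S = union of all S_i. LABEL_SYSTEM is an illustrative assumption.
from typing import Iterable, Optional

LABEL_SYSTEM = {"糖醋里脊", "教程", "风景", "宠物"}  # illustrative preset label system

def text_feature_subset(description: Optional[str]) -> set:
    """S_i for one sample image: empty (null) if there is no text description."""
    if not description:
        return set()
    s_u = set(segment_and_filter(description))  # description set S_u
    return s_u & LABEL_SYSTEM                   # keep only description tags

def character_feature_set(descriptions: Iterable) -> list:
    """S = union of S_i over all sample images."""
    features = set()
    for d in descriptions:
        features |= text_feature_subset(d)
    return sorted(features)  # a stable order gives each feature a position / index value
```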
Step 204: Invoke the embedding network, which determines the description vector corresponding to each word segment according to the index value corresponding to each word segment.
The character feature set contains a plurality of character features; each character feature corresponds to a position in the character feature set, and each position corresponds to an index value. Specifically, the position number can be used as the index value. After the text description information corresponding to the image is processed to extract a plurality of word segments, each word segment serves as a character feature; the position of each word segment in the character feature set is determined, and the index value corresponding to each word segment is further determined according to the correspondence between positions and index values.
The index values corresponding to the word segments are input into the embedding network, and the embedding network determines the description vector W_i corresponding to each word segment according to its index value.
Step 205: Weight and average the description vectors corresponding to the word segments dimension-wise to obtain the character feature vector.
A plurality of word segments is obtained by segmenting the text information of the image to be predicted. For each word segment, it is determined whether the pre-trained character feature set contains the word segment; if so, the description vector corresponding to the word segment is further determined; otherwise, the word segment has no corresponding description vector. That is, if a word segment is a description tag, a description vector is generated for it; conversely, if a word segment is not a description tag, no description vector is generated for it.
The weights corresponding to the word segments may be the same or different. If the weights corresponding to the word segments are the same, the description vectors corresponding to the word segments may be weighted and averaged dimension-wise by the following formula to obtain the character feature vector:
F_text = (1/N) Σ_{i=1}^{N} W_i
where F_text is the character feature vector, W_i is the description vector corresponding to the i-th word segment, and N is the number of description tags in the text information of the current image.
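A minimal sketch of steps 203-205 follows, using a torch.nn.Embedding table as the embedding network. The 128-dimensional embeddings, the training_descriptions corpus name, and the zero vector returned when no description tag is found are all illustrative assumptions.

```python
# A minimal sketch of steps 203-205: map word segments to index values via their
# position in the character feature set, look up description vectors W_i in an
# embedding table, and average them into F_text (equal weights assumed).
import torch
import torch.nn as nn

# `character_feature_set` and `segment_and_filter` come from the sketches above;
# `training_descriptions` is an assumed training corpus of text descriptions.
feature_set = character_feature_set(training_descriptions)
index_of = {feat: i for i, feat in enumerate(feature_set)}  # position -> index value
embedding = nn.Embedding(num_embeddings=len(feature_set), embedding_dim=128)

def character_feature_vector(text: str) -> torch.Tensor:
    """F_text = (1/N) * sum_i W_i over the description tags found in `text`."""
    indices = [index_of[w] for w in segment_and_filter(text) if w in index_of]
    if not indices:  # no description tags: return a zero vector (an assumption)
        return torch.zeros(embedding.embedding_dim)
    w = embedding(torch.tensor(indices))  # shape [N, 128]: one W_i per description tag
    return w.mean(dim=0)                  # equal weights, so the weighted average is the mean
```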
Step 206: Map the character feature vector and the image feature vector to vectors of the same dimension.
Since the image feature vector and the character feature vector, after being output by their respective networks, are not in the same space, that is, their dimensions differ, the two feature vectors need to be spatially mapped separately so that they are mapped into the same space, i.e., to vectors of the same dimension. Specifically, a fully connected layer may be used to spatially map the features in the character feature vector and the image feature vector.
Step 207: Splice the mapped character feature vector and image feature vector by dimension to generate the graphic feature vector.
For example, if the mapped character feature vector contains five dimensions (1-5) and the mapped image feature vector contains five dimensions (1-5), the first dimension of the image feature vector can be spliced after the fifth dimension of the character feature vector to generate a graphic feature vector containing ten dimensions, where each dimension of the graphic feature vector corresponds to a feature tag.
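The following sketch illustrates steps 206-207 under assumed dimensions (2048-d image features, 128-d text features, a 256-d common space): fully connected layers perform the spatial mapping, and the mapped vectors are concatenated by dimension.

```python
# A minimal sketch of steps 206-207: fully connected layers project the two
# feature vectors into a common dimension, and the projected vectors are spliced
# into the graphic (image-text) feature vector. All dimensions are illustrative.
import torch
import torch.nn as nn

IMG_DIM, TXT_DIM, COMMON_DIM = 2048, 128, 256
img_proj = nn.Linear(IMG_DIM, COMMON_DIM)  # spatial mapping of the image features
txt_proj = nn.Linear(TXT_DIM, COMMON_DIM)  # spatial mapping of the text features

def graphic_feature_vector(f_image: torch.Tensor, f_text: torch.Tensor) -> torch.Tensor:
    """Splice the mapped vectors by dimension: the result has 2 * COMMON_DIM dims."""
    return torch.cat([txt_proj(f_text), img_proj(f_image)], dim=-1)
```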
Step 208: determine the classification of the image according to the results of processing the image feature vector, the text feature vector, and the image-text feature vector by a deep neural network.
In a specific implementation, the image feature vector, the text feature vector, and the image-text feature vector may first be input into the deep neural network separately, yielding a first classification result vector corresponding to the image feature vector, a second classification result vector corresponding to the text feature vector, and a third classification result vector corresponding to the image-text feature vector. Each classification result vector contains multiple points, and each point corresponds to one feature label and one probability value.
Second, the first, second, and third classification result vectors are weighted and summed to obtain the target result vector.
Specifically, the target result vector P can be obtained through the following formula:

    P = W_text · P_text + W_image · P_image + W_text-image · P_text-image

where W_image, W_text, and W_text-image are the weights of the first, second, and third classification result vectors respectively, and P_image, P_text, and P_text-image are the first, second, and third classification result vectors respectively.
Finally, the classification of the image is determined from the target result vector.
The feature label of the image is determined from the target result vector, namely the feature label corresponding to the point with the highest probability value in the target result vector, and the classification of the image is then determined from that feature label.
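A sketch of step 208 follows, continuing the snippets above. The use of one softmax classification head per input vector, feeding the mapped vectors to the single-modality heads, and the fusion weights (0.3, 0.3, 0.4) are all assumptions; the embodiment only requires three classification result vectors and a weighted sum.

    NUM_LABELS = 1000  # assumption: size of the feature-label set

    classify_image = nn.Sequential(nn.Linear(COMMON_DIM, NUM_LABELS), nn.Softmax(dim=-1))
    classify_text = nn.Sequential(nn.Linear(COMMON_DIM, NUM_LABELS), nn.Softmax(dim=-1))
    classify_joint = nn.Sequential(nn.Linear(2 * COMMON_DIM, NUM_LABELS), nn.Softmax(dim=-1))

    def classify(f_image, f_text, weights=(0.3, 0.3, 0.4)):
        """P = W_image*P_image + W_text*P_text + W_text-image*P_text-image;
        the label with the highest probability in P is the image's label."""
        f_img, f_txt = project_image(f_image), project_text(f_text)
        p_image = classify_image(f_img)
        p_text = classify_text(f_txt)
        p_joint = classify_joint(torch.cat([f_img, f_txt], dim=-1))
        p = weights[0] * p_image + weights[1] * p_text + weights[2] * p_joint
        return int(p.argmax())  # index of the winning feature label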
In the image classification method provided by this embodiment of the present application, a convolutional neural network serves as the backbone for image feature extraction to obtain the image feature vector of an image; an embedding network serves as the backbone for text feature extraction to obtain the text feature vector of the text description information associated with the image; the image feature vector and the text feature vector are concatenated into an image-text feature vector; and a deep neural network serves as the backbone that determines, from the image feature vector, the text feature vector, and the image-text feature vector, the weights of the image under different labels and thus the classification of the image, which improves the accuracy of image classification.
Embodiment 3
Referring to FIG. 3, a structural block diagram of an image classification apparatus according to Embodiment 3 of the present application is shown.
The image classification apparatus of this embodiment of the present application may include: a determination module 301 configured to determine, through a convolutional neural network, an image feature vector corresponding to an image, where the image is associated with text description information; a vector generation module 302 configured to process the text description information through an embedding network to obtain a text feature vector; a concatenation module 303 configured to concatenate the image feature vector and the text feature vector to obtain an image-text feature vector; and a classification module 304 configured to determine the classification of the image according to the results of processing the image feature vector, the text feature vector, and the image-text feature vector by a deep neural network.
In some implementations, the vector generation module 302 may include: a segmentation submodule 3021 configured to remove the stop words from the text description information to obtain multiple tokens; a position determination submodule 3022 configured to determine, for each token, the position information of the token in a text feature set; an index value generation submodule 3023 configured to generate the index value of the token from the position information, where the text feature set is obtained by training on the text description information of sample images; a first invocation submodule 3024 configured to invoke the embedding network, which determines the description vector of each token from the token's index value; and a second invocation submodule 3025 configured to average the description vectors of the tokens dimension-wise with weights to obtain the text feature vector.
In some implementations, the concatenation module 303 may include: a mapping submodule 3031 configured to map the text feature vector and the image feature vector into vectors of the same dimension; and a concatenation submodule 3032 configured to concatenate the mapped text feature vector and image feature vector dimension-wise to generate the image-text feature vector.
In some implementations, the apparatus may further include: an acquisition module 305 configured to obtain the sample images before the determination module 301 determines the image feature vector of the image through the convolutional neural network; a subset determination module 306 configured to determine, for each sample image, whether the sample image has associated text description information, and if not, determine that the text feature subset of the sample image is empty, or if so, remove the stop words from the text description information to obtain a description set containing multiple tokens; an extraction module 307 configured to extract a text feature subset from the description set based on a preset label system; and a feature set determination module 308 configured to take the union of the text feature subsets of all sample images to obtain the text feature set.
In some implementations, the classification module 304 may include: an input submodule 3041 configured to input the image feature vector, the text feature vector, and the image-text feature vector into the deep neural network separately, yielding a first classification result vector corresponding to the image feature vector, a second classification result vector corresponding to the text feature vector, and a third classification result vector corresponding to the image-text feature vector; a processing submodule 3042 configured to weight and sum the first, second, and third classification result vectors to obtain a target result vector; and a result determination submodule 3043 configured to determine the classification of the image from the target result vector.
The image classification apparatus of this embodiment of the present application is used to implement the corresponding image classification methods of Embodiments 1 and 2 above, and has the beneficial effects of the method embodiments, which are not repeated here.
Embodiment 4
Referring to FIG. 4, a structural block diagram of a terminal for image classification according to Embodiment 4 of the present application is shown.
The terminal of this embodiment of the present application may include: a memory, a processor, and an image classification program stored in the memory and executable on the processor, where the image classification program, when executed by the processor, implements the steps of any of the image classification methods in the present application.
FIG. 4 is a block diagram of an image classification terminal 600 according to an exemplary embodiment. For example, the terminal 600 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, or the like.
Referring to FIG. 4, the terminal 600 may include one or more of the following components: a processing component 602, a memory 604, a power component 606, a multimedia component 608, an audio component 610, an input/output (I/O) interface 612, a sensor component 614, and a communication component 616.
The processing component 602 generally controls the overall operation of the terminal 600, such as operations associated with display, phone calls, data communication, camera operation, and recording operation. The processing component 602 may include one or more processors 620 to execute instructions so as to complete all or part of the steps of the method described above. In addition, the processing component 602 may include one or more modules to facilitate interaction between the processing component 602 and other components. For example, the processing component 602 may include a multimedia module to facilitate interaction between the multimedia component 608 and the processing component 602.
The memory 604 is configured to store various types of data to support operation on the terminal 600. Examples of such data include instructions for any application or method operated on the terminal 600, contact data, phonebook data, messages, pictures, video, and so on. The memory 604 may be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, a magnetic disk, or an optical disk.
The power component 606 provides power to the various components of the terminal 600. The power component 606 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the terminal 600.
The multimedia component 608 includes a screen providing an output interface between the terminal 600 and the user. In some embodiments, the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensors may sense not only the boundary of a touch or swipe action, but also the duration and pressure associated with the touch or swipe operation. In some embodiments, the multimedia component 608 includes a front camera and/or a rear camera. When the terminal 600 is in an operating mode, such as a shooting mode or a video mode, the front camera and/or the rear camera can receive external multimedia data. Each front or rear camera may be a fixed optical lens system or have focal length and optical zoom capability.
The audio component 610 is configured to output and/or input audio signals. For example, the audio component 610 includes a microphone (MIC) configured to receive external audio signals when the terminal 600 is in an operating mode, such as a call mode, a recording mode, or a speech recognition mode. The received audio signals may be further stored in the memory 604 or transmitted via the communication component 616. In some embodiments, the audio component 610 further includes a speaker for outputting audio signals.
The I/O interface 612 provides an interface between the processing component 602 and peripheral interface modules, which may be a keyboard, a click wheel, buttons, or the like. These buttons may include, but are not limited to: a home button, volume buttons, a start button, and a lock button.
The sensor component 614 includes one or more sensors for providing the terminal 600 with status assessments of various aspects. For example, the sensor component 614 may detect the open/closed state of the terminal 600 and the relative positioning of components, such as the display and keypad of the terminal 600; the sensor component 614 may also detect a change in position of the terminal 600 or of one of its components, the presence or absence of user contact with the terminal 600, the orientation or acceleration/deceleration of the terminal 600, and temperature changes of the terminal 600. The sensor component 614 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor component 614 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor component 614 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 616 is configured to facilitate wired or wireless communication between the terminal 600 and other devices. The terminal 600 can access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof. In one exemplary embodiment, the communication component 616 receives broadcast signals or broadcast-related information from an external broadcast management system via a broadcast channel. In one exemplary embodiment, the communication component 616 further includes a near field communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on radio frequency identification (RFID) technology, Infrared Data Association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In exemplary embodiments, the terminal 600 may be implemented by one or more application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic elements, for performing the image classification method, which specifically includes:
determining, through a convolutional neural network, an image feature vector corresponding to an image, where the image is associated with text description information; processing the text description information through an embedding network to obtain a text feature vector; concatenating the image feature vector and the text feature vector to obtain an image-text feature vector; and determining the classification of the image according to the results of processing the image feature vector, the text feature vector, and the image-text feature vector by a deep neural network.
In some implementations, the step of processing the text description information through an embedding network to obtain a text feature vector includes: removing the stop words from the text description information to obtain multiple tokens; for each token, determining the position information of the token in a text feature set; generating the index value of the token from the position information, where the text feature set is obtained by training on the text description information of sample images; invoking the embedding network, which determines the description vector of each token from the token's index value; and averaging the description vectors of the tokens dimension-wise with weights to obtain the text feature vector.
In some implementations, the step of concatenating the image feature vector and the text feature vector to obtain an image-text feature vector includes: mapping the text feature vector and the image feature vector into vectors of the same dimension; and concatenating the mapped text feature vector and image feature vector dimension-wise to generate the image-text feature vector.
In some implementations, before the step of determining, through the convolutional neural network, the image feature vector corresponding to the image, the method further includes: obtaining the sample images; for each sample image, determining whether the sample image has associated text description information, and if not, determining that the text feature subset of the sample image is empty, or if so, removing the stop words from the text description information to obtain a description set containing multiple tokens; extracting a text feature subset from the description set based on a preset label system; and taking the union of the text feature subsets of all sample images to obtain the text feature set.
In some implementations, the step of determining the classification of the image according to the results of processing the image feature vector, the text feature vector, and the image-text feature vector by the deep neural network includes: inputting the image feature vector, the text feature vector, and the image-text feature vector into the deep neural network separately, to obtain a first classification result vector corresponding to the image feature vector, a second classification result vector corresponding to the text feature vector, and a third classification result vector corresponding to the image-text feature vector; weighting and summing the first, second, and third classification result vectors to obtain a target result vector; and determining the classification of the image from the target result vector.
In exemplary embodiments, there is also provided a non-transitory computer-readable storage medium including instructions, such as the memory 604 including instructions, which can be executed by the processor 620 of the terminal 600 to perform the image classification method described above. For example, the non-transitory computer-readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like. When the instructions in the storage medium are executed by the processor of the terminal, the terminal is enabled to perform the steps of any of the image classification methods in the present application.
In the terminal provided by this embodiment of the present application, a convolutional neural network serves as the backbone for image feature extraction to obtain the image feature vector of an image; an embedding network serves as the backbone for text feature extraction to obtain the text feature vector of the text description information associated with the image; the image feature vector and the text feature vector are concatenated into an image-text feature vector; and a deep neural network serves as the backbone that determines, from the image feature vector, the text feature vector, and the image-text feature vector, the weights of the image under different labels and thus the classification of the image, which improves the accuracy of image classification.
As for the apparatus embodiment, since it is substantially similar to the method embodiments, the description is relatively brief; for relevant details, reference may be made to the description of the method embodiments.
In exemplary embodiments, there is also provided a computer program product for performing, at runtime, the steps of any of the image classification methods in the present application.
As for the computer program product embodiment, since it is substantially similar to the method embodiments, the description is relatively brief; for relevant details, reference may be made to the description of the method embodiments.
The image classification solution provided herein is not inherently related to any particular computer, virtual system, or other device. Various general-purpose systems may also be used with the teachings herein. From the above description, the structure required to construct a system embodying the solution of the present application is apparent. Moreover, the present application is not directed to any particular programming language. It should be understood that the contents of the present application described herein can be implemented in a variety of programming languages, and the above description of specific languages is intended to disclose the best mode of the present application.
In the specification provided herein, numerous specific details are set forth. It will be understood, however, that embodiments of the present application may be practiced without these specific details. In some instances, well-known methods, structures, and techniques have not been shown in detail so as not to obscure the understanding of this specification.
Similarly, it should be understood that, to streamline the present disclosure and aid the understanding of one or more of the various inventive aspects, in the above description of exemplary embodiments of the present application the various features of the application are sometimes grouped together into a single embodiment, figure, or description thereof. However, the disclosed method is not to be interpreted as reflecting an intention that the claimed application requires more features than are expressly recited in each claim. Rather, as the claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into the detailed description, with each claim standing on its own as a separate embodiment of the present application.
Those skilled in the art will appreciate that the modules in the device of an embodiment can be adaptively changed and arranged in one or more devices different from that embodiment. The modules, units, or components of an embodiment may be combined into one module, unit, or component, and may furthermore be divided into multiple submodules, subunits, or subcomponents. Except where at least some of such features and/or processes or units are mutually exclusive, all features disclosed in this specification (including the accompanying claims, abstract, and drawings) and all processes or units of any method or device so disclosed may be combined in any combination. Unless expressly stated otherwise, each feature disclosed in this specification (including the accompanying claims, abstract, and drawings) may be replaced by an alternative feature serving the same, an equivalent, or a similar purpose.
Furthermore, those skilled in the art will understand that although some embodiments herein include certain features included in other embodiments and not others, combinations of features of different embodiments are meant to be within the scope of the present application and to form different embodiments. For example, in the claims, any one of the claimed embodiments may be used in any combination.
The various component embodiments of the present application may be implemented in hardware, in software modules running on one or more processors, or in a combination thereof. Those skilled in the art should understand that a microprocessor or a digital signal processor (DSP) may be used in practice to implement some or all of the functions of some or all of the components of the image classification solution according to embodiments of the present application. The present application may also be implemented as a device or apparatus program (for example, a computer program and a computer program product) for performing part or all of the methods described herein. Such a program implementing the present application may be stored on a computer-readable medium, or may take the form of one or more signals. Such signals may be downloaded from an Internet website, provided on a carrier signal, or provided in any other form.
It should be noted that the above embodiments illustrate rather than limit the present application, and those skilled in the art may devise alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The present application may be implemented by means of hardware comprising several distinct elements and by means of a suitably programmed computer. In a unit claim enumerating several devices, several of these devices may be embodied by one and the same item of hardware. The use of the words first, second, third, etc. does not indicate any order; these words may be interpreted as names.

Claims (13)

  1. An image classification method, characterized in that the method comprises:
    determining, through a convolutional neural network, an image feature vector corresponding to an image, wherein the image is associated with text description information;
    processing the text description information through an embedding network to obtain a text feature vector;
    concatenating the image feature vector and the text feature vector to obtain an image-text feature vector; and
    determining the classification of the image according to the results of processing the image feature vector, the text feature vector, and the image-text feature vector by a deep neural network.
  2. The method according to claim 1, characterized in that the step of processing the text description information through an embedding network to obtain a text feature vector comprises:
    removing the stop words from the text description information to obtain multiple tokens;
    for each of the tokens, determining the position information of the token in a text feature set;
    generating the index value of the token from the position information, wherein the text feature set is obtained by training on the text description information of sample images;
    invoking the embedding network, which determines the description vector of each token from the token's index value; and
    averaging the description vectors of the tokens dimension-wise with weights to obtain the text feature vector.
  3. The method according to claim 1, characterized in that the step of concatenating the image feature vector and the text feature vector to obtain an image-text feature vector comprises:
    mapping the text feature vector and the image feature vector into vectors of the same dimension; and
    concatenating the mapped text feature vector and image feature vector dimension-wise to generate the image-text feature vector.
  4. The method according to claim 1, characterized in that, before the step of determining, through the convolutional neural network, the image feature vector corresponding to the image, the method further comprises:
    obtaining the sample images;
    for each sample image, determining whether the sample image has associated text description information; if not, determining that the text feature subset of the sample image is empty; if so, removing the stop words from the text description information to obtain a description set containing multiple tokens;
    extracting a text feature subset from the description set based on a preset label system; and
    taking the union of the text feature subsets of all sample images to obtain the text feature set.
  5. The method according to claim 1, characterized in that the step of determining the classification of the image according to the results of processing the image feature vector, the text feature vector, and the image-text feature vector by the deep neural network comprises:
    inputting the image feature vector, the text feature vector, and the image-text feature vector into the deep neural network separately, to obtain a first classification result vector corresponding to the image feature vector, a second classification result vector corresponding to the text feature vector, and a third classification result vector corresponding to the image-text feature vector;
    weighting and summing the first classification result vector, the second classification result vector, and the third classification result vector to obtain a target result vector; and
    determining the classification of the image from the target result vector.
  6. An image classification apparatus, characterized in that the apparatus comprises:
    a determination module configured to determine, through a convolutional neural network, an image feature vector corresponding to an image, wherein the image is associated with text description information;
    a vector generation module configured to process the text description information through an embedding network to obtain a text feature vector;
    a concatenation module configured to concatenate the image feature vector and the text feature vector to obtain an image-text feature vector; and
    a classification module configured to determine the classification of the image according to the results of processing the image feature vector, the text feature vector, and the image-text feature vector by a deep neural network.
  7. The apparatus according to claim 6, characterized in that the vector generation module comprises:
    a segmentation submodule configured to remove the stop words from the text description information to obtain multiple tokens;
    a position determination submodule configured to determine, for each of the tokens, the position information of the token in a text feature set;
    an index value generation submodule configured to generate the index value of the token from the position information, wherein the text feature set is obtained by training on the text description information of sample images;
    a first invocation submodule configured to invoke the embedding network, which determines the description vector of each token from the token's index value; and
    a second invocation submodule configured to average the description vectors of the tokens dimension-wise with weights to obtain the text feature vector.
  8. The apparatus according to claim 6, characterized in that the concatenation module comprises:
    a mapping submodule configured to map the text feature vector and the image feature vector into vectors of the same dimension; and
    a concatenation submodule configured to concatenate the mapped text feature vector and image feature vector dimension-wise to generate the image-text feature vector.
  9. The apparatus according to claim 6, characterized in that the apparatus further comprises:
    an acquisition module configured to obtain the sample images before the determination module determines, through the convolutional neural network, the image feature vector corresponding to the image;
    a subset determination module configured to determine, for each sample image, whether the sample image has associated text description information; if not, determine that the text feature subset of the sample image is empty; if so, remove the stop words from the text description information to obtain a description set containing multiple tokens;
    an extraction module configured to extract a text feature subset from the description set based on a preset label system; and
    a feature set determination module configured to take the union of the text feature subsets of all sample images to obtain the text feature set.
  10. The apparatus according to claim 6, characterized in that the classification module comprises:
    an input submodule configured to input the image feature vector, the text feature vector, and the image-text feature vector into the deep neural network separately, to obtain a first classification result vector corresponding to the image feature vector, a second classification result vector corresponding to the text feature vector, and a third classification result vector corresponding to the image-text feature vector;
    a processing submodule configured to weight and sum the first classification result vector, the second classification result vector, and the third classification result vector to obtain a target result vector; and
    a result determination submodule configured to determine the classification of the image from the target result vector.
  11. A terminal, characterized by comprising: a memory, a processor, and an image classification program stored in the memory and executable on the processor, wherein the image classification program, when executed by the processor, implements the steps of the image classification method according to any one of claims 1 to 5.
  12. A computer-readable storage medium, characterized in that an image classification program is stored on the computer-readable storage medium, wherein the image classification program, when executed by a processor, implements the steps of the image classification method according to any one of claims 1 to 5.
  13. A computer program product, characterized in that the computer program product is configured to perform, at runtime, the steps of the image classification method according to any one of claims 1 to 5.