CN107742128A - Method and apparatus for outputting information - Google Patents
Method and apparatus for outputting information
- Publication number
- CN107742128A CN107742128A CN201710984693.6A CN201710984693A CN107742128A CN 107742128 A CN107742128 A CN 107742128A CN 201710984693 A CN201710984693 A CN 201710984693A CN 107742128 A CN107742128 A CN 107742128A
- Authority
- CN
- China
- Prior art keywords
- image
- label information
- classified
- industry label
- information
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Links
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/30—Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
- G06F16/35—Clustering; Classification
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/50—Information retrieval; Database structures therefor; File system structures therefor of still image data
- G06F16/58—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
- G06F16/5866—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using information manually generated, e.g. tags, keywords, comments, manually generated location and time information
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Databases & Information Systems (AREA)
- Life Sciences & Earth Sciences (AREA)
- Library & Information Science (AREA)
- Artificial Intelligence (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- Multimedia (AREA)
- Image Analysis (AREA)
Abstract
The embodiments of the present application disclose a method and apparatus for outputting information. One embodiment of the method includes: obtaining an image to be classified; extracting feature information of the image to be classified; and importing the feature information into a pre-established image classification model to obtain industry label information for the image to be classified, and outputting the obtained industry label information, where the image classification model is used to characterize the correspondence between the feature information of an image and industry label information, and the image classification model is a deep neural network model that includes a residual neural network. By using the image classification model, this embodiment improves the accuracy of the generated industry label information for the image to be classified.
Description
Technical field
The present application relates to the field of computer technology, in particular to the technical field of image processing, and more particularly to a method and apparatus for outputting information.
Background technology
With the rapid development of Internet technology, the information that users obtain through the Internet has become increasingly rich. For example, by entering a query word, a user can obtain various kinds of information related to that query word, such as text, images, and voice. Taking images as an example, after a user sends query information to a server through a terminal device, the server can select images that match the query information according to the label information of each image to be pushed, and send the selected images to the terminal device for display. So that the images displayed by the terminal device better match the user's query word, each image needs to be given label information that accurately expresses the content described by the image itself.
The content of the invention
The purpose of the embodiments of the present application is to propose a method and apparatus for outputting information.
In a first aspect, an embodiment of the present application provides a method for outputting information, the method including: obtaining an image to be classified; extracting feature information of the image to be classified; and importing the feature information into a pre-established image classification model to obtain industry label information for the image to be classified, and outputting the obtained industry label information, where the image classification model is used to characterize the correspondence between the feature information of an image and industry label information, and the image classification model is a deep neural network model that includes a residual neural network.
In some embodiments, the deep neural network model further includes at least one pooling layer and at least one fully connected layer, and a dropout layer is added to the deep neural network model during training, where the output of the last pooling layer is the input of the dropout layer, and the output of the dropout layer is the input of the last fully connected layer.
In some embodiments, the deep neural network model is trained in the following manner: obtaining sample data, where the sample data includes sample images and the industry label information corresponding to the sample images; extracting feature information of the sample images; and using a deep neural network algorithm, taking the feature information of the sample images in the sample data as input and the industry label information corresponding to the sample images as output, training to obtain the deep neural network model.
In some embodiments, the sample data is obtained in the following manner: obtaining at least one image from a search engine using at least one keyword preset by a user; preprocessing the at least one image, and setting industry label information for each preprocessed image, where the industry label information belongs to a preset industry label information set obtained by performing cluster analysis on the at least one keyword, and the preprocessing includes at least one of the following: rotation, length-and-width change, shading value change, contrast change, saturation change, and RGB value change.
In some embodiments, before the feature information of the image to be classified is extracted, the method further includes: processing the image to be classified to obtain an image to be classified of a specific size.
In a second aspect, an embodiment of the present application provides an apparatus for outputting information, the apparatus including: an obtaining unit for obtaining an image to be classified; an extraction unit for extracting feature information of the image to be classified; and an output unit for importing the feature information into a pre-established image classification model, obtaining industry label information for the image to be classified, and outputting the obtained industry label information, where the image classification model is used to characterize the correspondence between the feature information of an image and industry label information, and the image classification model is a deep neural network model that includes a residual neural network.
In some embodiments, the deep neural network model further includes at least one pooling layer and at least one fully connected layer, and a dropout layer is added to the deep neural network model during training, where the output of the last pooling layer is the input of the dropout layer, and the output of the dropout layer is the input of the last fully connected layer.
In some embodiments, the deep neural network model is trained in the following manner: obtaining sample data, where the sample data includes sample images and the industry label information corresponding to the sample images; extracting feature information of the sample images; and using a deep neural network algorithm, taking the feature information of the sample images in the sample data as input and the industry label information corresponding to the sample images as output, training to obtain the deep neural network model.
In some embodiments, the sample data is obtained in the following manner: obtaining at least one image from a search engine using at least one keyword preset by a user; preprocessing the at least one image, and setting industry label information for each preprocessed image, where the industry label information belongs to a preset industry label information set obtained by performing cluster analysis on the at least one keyword, and the preprocessing includes at least one of the following: rotation, length-and-width change, shading value change, contrast change, saturation change, and RGB value change.
In some embodiments, the apparatus further includes: a processing unit for processing the image to be classified to obtain an image to be classified of a specific size.
In a third aspect, an embodiment of the present application provides a terminal, the terminal including: one or more processors; and a storage device for storing one or more programs, where when the one or more programs are executed by the one or more processors, the one or more processors implement the method described in any implementation of the first aspect.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium on which a computer program is stored, where the method described in any implementation of the first aspect is implemented when the computer program is executed by a processor.
The method and apparatus for outputting information provided by the embodiments of the present application first obtain an image to be classified, then extract the feature information of the image to be classified, and finally import the feature information of the image to be classified into a pre-established image classification model, obtain the industry label information of the image to be classified, and output the obtained industry label information, where the image classification model is a deep neural network model that includes a residual neural network, so that the accuracy of the generated industry label information for the image to be classified is improved by using the image classification model.
Brief description of the drawings
Other features, objects, and advantages of the present application will become more apparent by reading the detailed description of non-limiting embodiments made with reference to the following drawings:
Fig. 1 is an exemplary system architecture diagram to which the present application can be applied;
Fig. 2 is a flow chart of one embodiment of the method for outputting information according to the present application;
Fig. 3 is a schematic diagram of an application scenario of the method for outputting information according to the present application;
Fig. 4 is a schematic structural diagram of one embodiment of the apparatus for outputting information according to the present application;
Fig. 5 is a schematic structural diagram of a computer system adapted to implement a terminal device of the embodiments of the present application.
Embodiment
The present application is described in further detail below with reference to the accompanying drawings and embodiments. It can be understood that the specific embodiments described here are only used to explain the relevant invention, rather than to limit that invention. It should also be noted that, for ease of description, only the parts related to the relevant invention are shown in the accompanying drawings.
It should be noted that, in the case of no conflict, the embodiments in the present application and the features in the embodiments may be combined with each other. The present application is described in detail below with reference to the accompanying drawings and in conjunction with the embodiments.
Fig. 1 shows an exemplary system architecture 100 to which embodiments of the method for outputting information or the apparatus for outputting information of the present application can be applied.
As shown in Fig. 1, the system architecture 100 may include terminal devices 101, 102, and 103, a network 104, and a server 105. The network 104 serves as a medium for providing communication links between the terminal devices 101, 102, 103 and the server 105. The network 104 may include various connection types, such as wired or wireless communication links, or fiber optic cables.
A user may use the terminal devices 101, 102, 103 to interact with the server 105 through the network 104 to receive or send messages and the like. Various client applications may be installed on the terminal devices 101, 102, 103, such as web browser applications, shopping applications, search applications, instant messaging tools, mailbox clients, and social platform software.
The terminal devices 101, 102, 103 may be various electronic devices that have a display screen and support information display, including but not limited to smartphones, tablet computers, e-book readers, MP3 players (Moving Picture Experts Group Audio Layer III), MP4 players (Moving Picture Experts Group Audio Layer IV), laptop portable computers, desktop computers, and the like.
The server 105 may be a server that provides various services, for example, a background server that provides support for the information displayed on the terminal devices 101, 102, 103. The background server may perform processing such as analysis on received data such as images, and feed back a processing result (for example, label information generated from an image) to the terminal devices.
It should be noted that the method for outputting information provided by the embodiments of the present application may be performed by the terminal devices 101, 102, 103, may be performed by the server 105, or may be performed jointly by the server 105 and the terminal devices 101, 102, 103. Correspondingly, the apparatus for outputting information may be arranged in the terminal devices 101, 102, 103, may be arranged in the server 105, or may have some units arranged in the server 105 and other units arranged in the terminal devices 101, 102, 103. The present application does not limit this.
It should be understood that the numbers of terminal devices, networks, and servers in Fig. 1 are merely illustrative. There may be any number of terminal devices, networks, and servers according to implementation needs.
With continued reference to Fig. 2, it shows a flow 200 of one embodiment of the method for outputting information according to the present application. The method for outputting information includes the following steps:
Step 201: obtain an image to be classified.
In the present embodiment, the electronic device on which the method for outputting information runs (for example, the terminal devices 101, 102, 103 shown in Fig. 1) may obtain the image to be classified in various ways; for example, it may crawl the image to be classified from a designated website, or it may receive an image to be classified submitted by a user. Here, the user may refer to the user who sends the image to be classified; for example, the user may be an advertiser, and the advertiser may send an image to be classified (for example, an image of a product to be promoted). The image to be classified may be any of various images, for example, a product image.
Step 202: extract feature information of the image to be classified.
In the present embodiment, the electronic device may extract the feature information of the image to be classified. The feature information of the image to be classified may be any of various kinds of information used to characterize image features; for example, the feature information may be various basic elements of the image (such as color, lines, and texture).
In some optional implementations of the present embodiment, before the feature information of the image to be classified is extracted in step 202, the method may further include: processing the image to be classified to obtain an image to be classified of a specific size. The processing may refer to adjusting the size of the image to be classified to the specific size.
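As an illustrative sketch only (the disclosure does not specify a resizing algorithm or a target size), adjusting an image to a specific size can be done with nearest-neighbor sampling; the 2x2 target below is a toy assumption:

```python
def resize_nearest(image, out_h, out_w):
    """Nearest-neighbor resize of a 2D list-of-lists image."""
    in_h, in_w = len(image), len(image[0])
    return [
        [image[min(in_h - 1, i * in_h // out_h)][min(in_w - 1, j * in_w // out_w)]
         for j in range(out_w)]
        for i in range(out_h)
    ]

# Shrink a 4x4 image to the assumed 2x2 target size.
img = [[1, 2, 3, 4],
       [5, 6, 7, 8],
       [9, 10, 11, 12],
       [13, 14, 15, 16]]
small = resize_nearest(img, 2, 2)
```

In practice a real pipeline would use an image library's resize routine; the point here is only that every image entering the model is normalized to one fixed size.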
Step 203: import the feature information into a pre-established image classification model, obtain industry label information for the image to be classified, and output the obtained industry label information.
In the present embodiment, the electronic device may import the feature information of the image to be classified obtained in step 202 into the pre-established image classification model, so as to obtain the industry label information of the image to be classified. The industry label information may be used to describe the industry to which the object in the image to be classified belongs. The image classification model may be used to characterize the correspondence between the feature information of an image and industry label information, and may be a deep neural network model that includes a residual neural network (for example, a 50-layer residual neural network). As an example, the residual neural network may include at least one residual unit, each residual unit in the at least one residual unit may include at least two layers, and each layer in a residual unit may be a convolutional layer, a batch normalization layer, an activation function layer, an addition layer, or the like. In actual use, the number of residual units, and the layers included in each residual unit, may be set according to actual needs, and the present application does not limit this. The input of each residual unit includes not only the output of the previous residual unit but also the input of the previous residual unit, thus forming a shortcut-connection input/output mode. Therefore, the residual neural network allows the number of layers of the deep neural network model to be deepened significantly and speeds up the training of the deep neural network model, thereby improving the accuracy of the information generated by the image classification model.
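The shortcut connection described above, where a unit's output is its transform of the input plus the input itself, can be sketched as follows; the convolution/batch-normalization/activation stack is reduced to a plain function, so this is a toy illustration of the residual idea rather than the patented model:

```python
def relu(x):
    """Elementwise rectified linear activation."""
    return [max(0.0, v) for v in x]

def residual_unit(x, transform):
    """y = F(x) + x: add the unit's own input back onto its transform."""
    fx = transform(x)
    return [a + b for a, b in zip(fx, x)]

# A toy transform standing in for the multi-layer stack inside a residual unit.
double_then_relu = lambda x: relu([2.0 * v for v in x])

y = residual_unit([1.0, -1.0, 0.5], double_then_relu)
```

Because the identity path always passes the input through unchanged, gradients can flow around the transform, which is why stacking many such units stays trainable at large depth.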
In some optional implementations of the present embodiment, the deep neural network model may further include at least one pooling layer and at least one fully connected layer, and a dropout layer is added to the deep neural network model during training, where the output of the last pooling layer is the input of the dropout layer, and the output of the dropout layer is the input of the last fully connected layer. During training of the deep neural network model, the dropout layer may be used to set a certain proportion of neuron outputs to zero, so that the generalization ability of the trained deep neural network model is enhanced.
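A minimal sketch of the dropout behaviour described above, zeroing a proportion of neuron outputs during training; the rescaling of the surviving activations (the inverted-dropout variant) is an implementation assumption not stated in the text:

```python
import random

def dropout(activations, p, rng):
    """Zero each activation with probability p; scale survivors by 1/(1-p)
    (inverted dropout) so the expected output is unchanged."""
    keep = 1.0 - p
    return [0.0 if rng.random() < p else a / keep for a in activations]

rng = random.Random(0)  # fixed seed, for a reproducible illustration
out = dropout([1.0, 1.0, 1.0, 1.0], p=0.5, rng=rng)
```

At inference time the layer is simply disabled, so the fully connected layer always sees activations on the same scale.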
During actual training of a deep neural network model, multiple pieces of sample data are usually trained in one batch at the same time. The deep neural network model may further include a batch normalization layer. During training of the deep neural network model, the parameter use_global_stats of the batch normalization layer may be set to false rather than true, that is, statistics such as the mean and variance are calculated using the sample data in the current batch rather than all of the sample data, which can prevent the model from failing to converge.
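The use_global_stats = false behaviour described above, normalizing with the statistics of the current batch only, can be sketched for a one-dimensional batch; the small stabilizing constant eps is an assumed implementation detail:

```python
def batch_norm(batch, eps=1e-5):
    """Normalize each value using the mean/variance of the current batch only,
    mirroring use_global_stats = false during training."""
    n = len(batch)
    mean = sum(batch) / n
    var = sum((x - mean) ** 2 for x in batch) / n
    return [(x - mean) / (var + eps) ** 0.5 for x in batch]

normed = batch_norm([1.0, 2.0, 3.0, 4.0])
```

With use_global_stats = true the same formula would instead use running averages accumulated over all batches, which is appropriate at inference but can destabilize early training.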
In some optional implementations of the present embodiment, the deep neural network model may be trained in the following manner by the electronic device, or by another electronic device used to train the deep neural network model. First, sample data may be obtained, where the sample data may include sample images and the industry label information corresponding to the sample images. Here, a sample image may be any of various images, for example, a product image, and the industry label information corresponding to a sample image may be used to describe the industry to which the object in the sample image belongs. For example, if a sample image is an image of a laser cutting machine, the industry label information corresponding to that sample image may be "cutting machine". As another example, if a sample image is an image of a woman receiving a beauty treatment, the industry label information corresponding to that sample image may be "beauty". Second, the feature information of the sample images may be extracted, where the feature information of a sample image may be any of various kinds of information used to characterize the features of the sample image. Finally, a deep neural network algorithm may be used, taking the feature information of the sample images in the sample data as input and the industry label information corresponding to the sample images as output, to train and obtain the deep neural network model.
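The three training steps above (obtain labeled samples, extract features, fit a model from features to labels) can be sketched end to end; a trivial nearest-centroid classifier and a toy brightness feature stand in for the deep neural network and its feature extractor, purely to show the data flow:

```python
def extract_features(image):
    """Toy feature: mean brightness and left-right asymmetry of a 2D image."""
    flat = [v for row in image for v in row]
    mean = sum(flat) / len(flat)
    asym = sum(row[0] - row[-1] for row in image) / len(image)
    return (mean, asym)

def train(samples):
    """Fit one feature centroid per industry label over the sample data."""
    sums, counts = {}, {}
    for image, label in samples:
        f = extract_features(image)
        s = sums.setdefault(label, [0.0] * len(f))
        for i, v in enumerate(f):
            s[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {lbl: tuple(v / counts[lbl] for v in s) for lbl, s in sums.items()}

def classify(model, image):
    """Return the label whose centroid is nearest to the image's features."""
    f = extract_features(image)
    return min(model, key=lambda lbl: sum((a - b) ** 2 for a, b in zip(model[lbl], f)))

# Two toy "industries": bright images vs. dark images.
samples = [([[9, 9], [9, 9]], "bright"), ([[1, 1], [1, 1]], "dark")]
model = train(samples)
```

The real model is of course the residual deep network described earlier; the point of the sketch is only the pipeline shape: labeled images in, features extracted, a feature-to-label mapping fitted.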
In some optional implementations, the sample data may be obtained in the following manner:
First, at least one image is obtained from a search engine using at least one keyword preset by a user. Here, the user may refer to the user who sets the keywords; for example, the user may be an advertiser, and the advertiser may preset at least one keyword. The electronic device, or another electronic device used to obtain the sample data, may input the at least one keyword set by the user into a search engine (for example, Baidu or Google), so as to obtain at least one image. Afterwards, the at least one image may be preprocessed, and industry label information may be set for each preprocessed image. Here, the industry label information may be set for the images in various ways; for example, it may be set manually. As another example, industry label information may first be set for the at least one keyword (for example, the industry label information "automobile" may be set for the keyword "how is the off-road vehicle"), and afterwards the industry label information corresponding to each keyword may be used as the industry label information of the images found by searching with that keyword. Here, the industry label information belongs to a preset industry label set, and the industry label set is obtained by performing cluster analysis on the at least one keyword. For example, the at least one keyword may be clustered as text, and the industry label information may be obtained by sorting the clustering results; for instance, the various industries involved in the clustering results may be sorted manually to obtain the industry label information. The preprocessing may include at least one of the following: rotation, length-and-width change, shading value change, contrast change, saturation change, and RGB value change. Training the model with the images obtained after preprocessing as sample images can enhance the generalization ability of the model.
With continued reference to Fig. 3, Fig. 3 is a schematic diagram of an application scenario of the method for outputting information according to the present embodiment. In the application scenario of Fig. 3, the terminal device first obtains an image to be classified 301, where the image to be classified 301 is an image of an automobile; afterwards, the feature information of the image to be classified 301 is extracted; then, the extracted feature information is imported into the image classification model, the industry label information "automobile" of the image to be classified 301 is obtained, and the obtained industry label information "automobile" is output, as shown in Fig. 3.
The method provided by the above embodiment of the present application improves the accuracy of the generated industry label information for the image to be classified by using the image classification model.
With further reference to Fig. 4, as an implementation of the methods shown in the above figures, the present application provides one embodiment of an apparatus for outputting information. This apparatus embodiment corresponds to the method embodiment shown in Fig. 2, and the apparatus can be specifically applied to various electronic devices.
As shown in Fig. 4, the apparatus 400 for outputting information of the present embodiment includes: an obtaining unit 401, an extraction unit 402, and an output unit 403. The obtaining unit 401 is used to obtain an image to be classified; the extraction unit 402 is used to extract the feature information of the image to be classified; and the output unit 403 is used to import the feature information into a pre-established image classification model, obtain the industry label information of the image to be classified, and output the obtained industry label information, where the image classification model is used to characterize the correspondence between the feature information of an image and industry label information, and the image classification model is a deep neural network model that includes a residual neural network.
In the present embodiment, for the specific processing of the obtaining unit 401, the extraction unit 402, and the output unit 403 of the apparatus 400 for outputting information and the technical effects produced thereby, reference may be made to the related descriptions of step 201, step 202, and step 203 in the embodiment corresponding to Fig. 2, respectively, which will not be repeated here.
In some optional implementations of the present embodiment, the deep neural network model may further include at least one pooling layer and at least one fully connected layer, and a dropout layer is added to the deep neural network model during training, where the output of the last pooling layer is the input of the dropout layer, and the output of the dropout layer is the input of the last fully connected layer.
In some optional implementations of the present embodiment, the deep neural network model may be trained in the following manner: obtaining sample data, where the sample data includes sample images and the industry label information corresponding to the sample images; extracting feature information of the sample images; and using a deep neural network algorithm, taking the feature information of the sample images in the sample data as input and the industry label information corresponding to the sample images as output, training to obtain the deep neural network model.
In some optional implementations of the present embodiment, the sample data may be obtained in the following manner: obtaining at least one image from a search engine using at least one keyword preset by a user; preprocessing the at least one image, and setting industry label information for each preprocessed image, where the industry label information belongs to a preset industry label information set obtained by performing cluster analysis on the at least one keyword, and the preprocessing includes at least one of the following: rotation, length-and-width change, shading value change, contrast change, saturation change, and RGB value change.
In some optional implementations of the present embodiment, the apparatus 400 may further include: a processing unit (not shown in the figure) for processing the image to be classified to obtain an image to be classified of a specific size.
Referring now to Fig. 5, it shows a schematic structural diagram of a computer system 500 adapted to implement a terminal device of the embodiments of the present application. The terminal device shown in Fig. 5 is merely an example and should not impose any limitation on the functions and scope of use of the embodiments of the present application.
As shown in Fig. 5, the computer system 500 includes a central processing unit (CPU) 501, which can perform various appropriate actions and processing according to a program stored in a read-only memory (ROM) 502 or a program loaded from a storage portion 508 into a random access memory (RAM) 503. Various programs and data required for the operation of the system 500 are also stored in the RAM 503. The CPU 501, the ROM 502, and the RAM 503 are connected to each other through a bus 504. An input/output (I/O) interface 505 is also connected to the bus 504.
The following components are connected to the I/O interface 505: an input portion 506 including a keyboard, a mouse, and the like; an output portion 507 including a cathode ray tube (CRT), a liquid crystal display (LCD), a speaker, and the like; a storage portion 508 including a hard disk and the like; and a communications portion 509 including a network interface card such as a local area network (LAN) card or a modem. The communications portion 509 performs communication processing via a network such as the Internet. A driver 510 is also connected to the I/O interface 505 as needed. A removable medium 511, such as a magnetic disk, an optical disc, a magneto-optical disc, or a semiconductor memory, is mounted on the driver 510 as needed, so that a computer program read therefrom is installed into the storage portion 508 as needed.
In particular, according to embodiments of the present disclosure, the process described above with reference to the flow chart may be implemented as a computer software program. For example, an embodiment of the present disclosure includes a computer program product, which comprises a computer program carried on a computer-readable medium, the computer program comprising program code for executing the method illustrated in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication portion 509, and/or installed from the removable medium 511. When the computer program is executed by the central processing unit (CPU) 501, the above-mentioned functions defined in the method of the present application are performed. It should be noted that the computer-readable medium described herein may be a computer-readable signal medium, a computer-readable storage medium, or any combination of the two. A computer-readable storage medium may be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared or semiconductor system, apparatus or device, or any combination of the above. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In the present application, a computer-readable storage medium may be any tangible medium containing or storing a program, and the program may be used by, or used in combination with, an instruction execution system, apparatus or device. In the present application, a computer-readable signal medium may include a data signal propagated in a baseband or as a part of a carrier wave, in which computer-readable program code is carried. The propagated data signal may take various forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. The computer-readable signal medium may also be any computer-readable medium other than the computer-readable storage medium; the computer-readable medium can send, propagate or transmit a program for use by, or in combination with, an instruction execution system, apparatus or device. Program code contained on the computer-readable medium may be transmitted by any suitable medium, including but not limited to: wireless, wired, optical cable, RF, or any suitable combination of the above.
The flow charts and block diagrams in the accompanying drawings illustrate the architectures, functions and operations that may be implemented by the systems, methods and computer program products according to the various embodiments of the present application. In this regard, each block in a flow chart or block diagram may represent a module, a program segment, or a portion of code, the module, program segment, or portion of code comprising one or more executable instructions for implementing the specified logical functions. It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur in an order different from that noted in the drawings. For example, two blocks shown in succession may in fact be executed substantially in parallel, and they may sometimes be executed in the reverse order, depending on the functions involved. It should also be noted that each block in the block diagrams and/or flow charts, and combinations of blocks in the block diagrams and/or flow charts, may be implemented by a dedicated hardware-based system performing the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
The units described in the embodiments of the present application may be implemented by means of software, or by means of hardware. The described units may also be provided in a processor; for example, they may be described as: a processor comprising an acquiring unit, an extraction unit and an output unit. The names of these units do not in some cases constitute a limitation on the units themselves; for example, the acquiring unit may alternatively be described as "a unit for obtaining an image to be classified".
In another aspect, the present application further provides a computer-readable medium. The computer-readable medium may be included in the apparatus described in the above embodiments, or may exist alone without being assembled into the apparatus. The computer-readable medium carries one or more programs, and when the one or more programs are executed by the apparatus, the apparatus is caused to: obtain an image to be classified; extract feature information of the image to be classified; and import the feature information into a pre-established image classification model to obtain industry label information of the image to be classified, and output the obtained industry label information, wherein the image classification model is used to characterize a correspondence between feature information of an image and industry label information, and the image classification model is a deep neural network model comprising a residual neural network.
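The image classification model above is a deep neural network comprising a residual neural network. The defining operation of a residual block is adding the block's input back to its transformed output; a minimal NumPy sketch of that skip connection (shapes, weights, and function names are illustrative, not from the patent):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, w1, w2):
    """y = relu(x + F(x)), where F is a small two-layer branch.

    The identity shortcut (the `+ x`) is what distinguishes a residual
    network: the input bypasses the learned branch and is added back,
    which is what makes very deep classification models trainable.
    """
    f = relu(x @ w1) @ w2   # the residual branch F(x)
    return relu(x + f)      # skip connection: input added back

rng = np.random.default_rng(0)
x = rng.standard_normal((1, 8))
w1 = rng.standard_normal((8, 8)) * 0.1
w2 = rng.standard_normal((8, 8)) * 0.1

y = residual_block(x, w1, w2)
# With zero weights the branch vanishes and the block reduces to relu(x):
y_identity = residual_block(x, np.zeros((8, 8)), np.zeros((8, 8)))
```

The zero-weight case illustrates why residual stacks degrade gracefully: an unneeded block can learn to pass its input through almost unchanged.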
The above description is only an explanation of the preferred embodiments of the present application and the applied technical principles. It should be appreciated by those skilled in the art that the inventive scope involved in the present application is not limited to the technical solutions formed by the particular combinations of the above technical features; it should also cover other technical solutions formed by any combination of the above technical features or their equivalent features without departing from the above inventive concept, for example, technical solutions formed by replacing the above features with (but not limited to) technical features having similar functions disclosed in the present application.
Claims (12)
- 1. A method for outputting information, characterized in that the method comprises: obtaining an image to be classified; extracting feature information of the image to be classified; and importing the feature information into a pre-established image classification model to obtain industry label information of the image to be classified, and outputting the obtained industry label information, wherein the image classification model is used to characterize a correspondence between feature information of an image and industry label information, and the image classification model is a deep neural network model comprising a residual neural network.
- 2. The method according to claim 1, characterized in that the deep neural network model further comprises at least one pooling layer and at least one fully connected layer, and a dropout layer is added to the deep neural network model during training, wherein the output of the last pooling layer is the input of the dropout layer, and the output of the dropout layer is the input of the last fully connected layer.
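Claim 2 places a dropout layer between the last pooling layer and the last fully connected layer, with dropout active only during training. A NumPy sketch of that layer ordering (the layer sizes, the dropout probability, and the five hypothetical industry labels are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

def global_avg_pool(feature_map):
    """Last pooling layer: collapse each channel to its mean."""
    return feature_map.mean(axis=(1, 2))           # (C, H, W) -> (C,)

def dropout(x, p, training):
    """Dropout, applied only during training as claim 2 describes."""
    if not training:
        return x                                    # inference: identity
    mask = rng.random(x.shape) >= p
    return x * mask / (1.0 - p)                     # inverted-dropout scaling

def fully_connected(x, w, b):
    """Last fully connected layer: produces per-label scores."""
    return x @ w + b

feature_map = rng.standard_normal((16, 4, 4))       # output of the conv stack
w = rng.standard_normal((16, 5)) * 0.1              # 5 hypothetical industry labels
b = np.zeros(5)

pooled = global_avg_pool(feature_map)               # pooling output ...
hidden = dropout(pooled, p=0.5, training=False)     # ... feeds the dropout layer ...
scores = fully_connected(hidden, w, b)              # ... which feeds the last FC layer
```

At inference time (`training=False`) the dropout layer is a no-op, so the deployed model sees exactly the pooling-to-FC path of the claim.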
- 3. The method according to claim 1, characterized in that the deep neural network model is trained in the following manner: obtaining sample data, wherein the sample data includes sample images and industry label information corresponding to the sample images; extracting feature information of the sample images; and using a deep neural network algorithm, with the feature information of the sample images in the sample data as input and the industry label information corresponding to the sample images as output, training to obtain the deep neural network model.
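Claim 3's training recipe (feature information as input, industry label information as target output) can be sketched with a stand-in model: a single logistic layer trained by gradient descent on synthetic data, not the patent's deep residual network.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic "sample data": feature vectors and industry-label indices (0 or 1).
features = rng.standard_normal((200, 4))
labels = (features[:, 0] + features[:, 1] > 0).astype(int)

w = np.zeros(4)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Gradient descent: features are the input, labels the target output,
# and the loop adjusts w to make the model's output match the labels.
for _ in range(300):
    p = sigmoid(features @ w)
    grad = features.T @ (p - labels) / len(labels)
    w -= 0.5 * grad

accuracy = ((sigmoid(features @ w) > 0.5).astype(int) == labels).mean()
```

The same input/output contract scales up to the claimed model: only the function family being fitted (a deep residual network rather than one logistic layer) changes.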
- 4. The method according to claim 3, characterized in that the sample data is obtained in the following manner: obtaining at least one image from a search engine using at least one keyword preset by a user; and preprocessing the at least one image and setting industry label information for each preprocessed image, wherein the industry label information belongs to a preset industry label information set, the industry label information set is obtained by performing cluster analysis on the at least one keyword, and the preprocessing includes at least one of the following: rotation processing, aspect-ratio change processing, brightness change processing, contrast change processing, saturation change processing, and RGB value change processing.
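Two of the preprocessing operations listed in claim 4, brightness change and RGB value change, can be sketched directly on a pixel array (the scaling factor and channel order below are illustrative choices, not specified by the patent):

```python
import numpy as np

def change_brightness(img, factor):
    """Brightness change: scale all pixel intensities, clipped to [0, 255]."""
    return np.clip(img.astype(float) * factor, 0, 255).astype(np.uint8)

def swap_rgb_channels(img, order=(2, 1, 0)):
    """RGB value change: permute the colour channels (here RGB -> BGR)."""
    return img[..., list(order)]

# A 2x2 RGB test image: strong red, no green, some blue.
img = np.zeros((2, 2, 3), dtype=np.uint8)
img[..., 0] = 200          # red channel
img[..., 2] = 50           # blue channel

brighter = change_brightness(img, 1.5)   # red 200 -> 255 (clipped), blue 50 -> 75
swapped = swap_rgb_channels(img)         # red and blue channels exchanged
```

Each such operation yields an extra labelled training image from one downloaded image, which is the point of the claimed preprocessing: it enlarges the sample data without extra search-engine queries.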
- 5. The method according to claim 1, characterized in that, before extracting the feature information of the image to be classified, the method further comprises: processing the image to be classified to obtain an image to be classified of a specific size.
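The fixed-size preprocessing of claim 5 can be sketched as a nearest-neighbour resize (a simple stand-in; the patent does not specify the resampling method or the target size):

```python
import numpy as np

def resize_nearest(img, out_h, out_w):
    """Resize to a fixed size by nearest-neighbour sampling of rows/columns."""
    in_h, in_w = img.shape[:2]
    rows = np.arange(out_h) * in_h // out_h   # source row for each output row
    cols = np.arange(out_w) * in_w // out_w   # source column for each output column
    return img[rows][:, cols]

img = np.arange(16).reshape(4, 4)   # a 4x4 single-channel test image
small = resize_nearest(img, 2, 2)   # normalised to the model's expected size
```

Normalising every input to one size is what lets a single fixed-architecture classifier accept arbitrary images to be classified.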
- 6. An apparatus for outputting information, characterized in that the apparatus comprises: an acquiring unit for obtaining an image to be classified; an extraction unit for extracting feature information of the image to be classified; and an output unit for importing the feature information into a pre-established image classification model to obtain industry label information of the image to be classified, and outputting the obtained industry label information, wherein the image classification model is used to characterize a correspondence between feature information of an image and industry label information, and the image classification model is a deep neural network model comprising a residual neural network.
- 7. The apparatus according to claim 6, characterized in that the deep neural network model further comprises at least one pooling layer and at least one fully connected layer, and a dropout layer is added to the deep neural network model during training, wherein the output of the last pooling layer is the input of the dropout layer, and the output of the dropout layer is the input of the last fully connected layer.
- 8. The apparatus according to claim 6, characterized in that the deep neural network model is trained in the following manner: obtaining sample data, wherein the sample data includes sample images and industry label information corresponding to the sample images; extracting feature information of the sample images; and using a deep neural network algorithm, with the feature information of the sample images in the sample data as input and the industry label information corresponding to the sample images as output, training to obtain the deep neural network model.
- 9. The apparatus according to claim 8, characterized in that the sample data is obtained in the following manner: obtaining at least one image from a search engine using at least one keyword preset by a user; and preprocessing the at least one image and setting industry label information for each preprocessed image, wherein the industry label information belongs to a preset industry label information set, the industry label information set is obtained by performing cluster analysis on the at least one keyword, and the preprocessing includes at least one of the following: rotation processing, aspect-ratio change processing, brightness change processing, contrast change processing, saturation change processing, and RGB value change processing.
- 10. The apparatus according to claim 6, characterized in that the apparatus further comprises: a processing unit for processing the image to be classified to obtain an image to be classified of a specific size.
- 11. A terminal, comprising: one or more processors; and a storage device for storing one or more programs, wherein when the one or more programs are executed by the one or more processors, the one or more processors are caused to implement the method according to any one of claims 1-5.
- 12. A computer-readable storage medium on which a computer program is stored, characterized in that the computer program, when executed by a processor, implements the method according to any one of claims 1-5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710984693.6A CN107742128A (en) | 2017-10-20 | 2017-10-20 | Method and apparatus for output information |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710984693.6A CN107742128A (en) | 2017-10-20 | 2017-10-20 | Method and apparatus for output information |
Publications (1)
Publication Number | Publication Date |
---|---|
CN107742128A true CN107742128A (en) | 2018-02-27 |
Family
ID=61237918
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710984693.6A Pending CN107742128A (en) | 2017-10-20 | 2017-10-20 | Method and apparatus for output information |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107742128A (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105512676A (en) * | 2015-11-30 | 2016-04-20 | 华南理工大学 | Food recognition method at intelligent terminal |
CN106650813A (en) * | 2016-12-27 | 2017-05-10 | 华南理工大学 | Image understanding method based on depth residual error network and LSTM |
CN106921749A (en) * | 2017-03-31 | 2017-07-04 | 北京京东尚科信息技术有限公司 | For the method and apparatus of pushed information |
CN107239802A (en) * | 2017-06-28 | 2017-10-10 | 广东工业大学 | A kind of image classification method and device |
- 2017-10-20: CN CN201710984693.6A patent/CN107742128A/en active Pending
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110276364A (en) * | 2018-03-15 | 2019-09-24 | 阿里巴巴集团控股有限公司 | Training method, data classification method, device and the electronic equipment of disaggregated model |
CN110276364B (en) * | 2018-03-15 | 2023-08-08 | 阿里巴巴集团控股有限公司 | Classification model training method, data classification device and electronic equipment |
CN110737824A (en) * | 2018-07-03 | 2020-01-31 | 百度在线网络技术(北京)有限公司 | Content query method and device |
CN109101916A (en) * | 2018-08-01 | 2018-12-28 | 甘肃未来云数据科技有限公司 | The acquisition methods and device of video actions based on mark band |
CN109101916B (en) * | 2018-08-01 | 2022-07-05 | 甘肃未来云数据科技有限公司 | Video action acquisition method and device based on identification band |
CN109035243A (en) * | 2018-08-10 | 2018-12-18 | 北京百度网讯科技有限公司 | Method and apparatus for exporting battery pole piece burr information |
CN109064464A (en) * | 2018-08-10 | 2018-12-21 | 北京百度网讯科技有限公司 | Method and apparatus for detecting battery pole piece burr |
CN109064464B (en) * | 2018-08-10 | 2022-02-11 | 北京百度网讯科技有限公司 | Method and device for detecting burrs of battery pole piece |
CN109800769A (en) * | 2018-12-20 | 2019-05-24 | 平安科技(深圳)有限公司 | Product classification control method, device, computer equipment and storage medium |
CN110298386A (en) * | 2019-06-10 | 2019-10-01 | 成都积微物联集团股份有限公司 | A kind of label automation definition method of image content-based |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107742128A (en) | Method and apparatus for output information | |
CN105740402B (en) | The acquisition methods and device of the semantic label of digital picture | |
CN107908789A (en) | Method and apparatus for generating information | |
CN107491547A (en) | Searching method and device based on artificial intelligence | |
CN106874467A (en) | Method and apparatus for providing Search Results | |
CN107273503A (en) | Method and apparatus for generating the parallel text of same language | |
CN107491534A (en) | Information processing method and device | |
CN108038469A (en) | Method and apparatus for detecting human body | |
CN107679211A (en) | Method and apparatus for pushed information | |
CN107168952A (en) | Information generating method and device based on artificial intelligence | |
CN107832468A (en) | Demand recognition methods and device | |
CN107731229A (en) | Method and apparatus for identifying voice | |
CN107577807A (en) | Method and apparatus for pushed information | |
CN107590255A (en) | Information-pushing method and device | |
CN107295095A (en) | The method and apparatus for pushing and showing advertisement | |
CN109934242A (en) | Image identification method and device | |
CN110163218A (en) | Desensitization process method and device based on image recognition | |
CN107644106B (en) | Method, terminal device and storage medium for automatically mining service middleman | |
CN107590484A (en) | Method and apparatus for information to be presented | |
CN107908742A (en) | Method and apparatus for output information | |
CN109033282A (en) | A kind of Web page text extracting method and device based on extraction template | |
CN108280200A (en) | Method and apparatus for pushed information | |
CN107729928A (en) | Information acquisition method and device | |
CN109389182A (en) | Method and apparatus for generating information | |
CN106951495A (en) | Method and apparatus for information to be presented |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication |
Application publication date: 20180227 |