CN107993191A - Image processing method and device - Google Patents

Image processing method and device

Info

Publication number
CN107993191A
CN107993191A (Application CN201711243948.XA)
Authority
CN
China
Prior art keywords
image data
target
information
destination image
special effect
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201711243948.XA
Other languages
Chinese (zh)
Other versions
CN107993191B (en)
Inventor
李科慧
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN201711243948.XA priority Critical patent/CN107993191B/en
Publication of CN107993191A publication Critical patent/CN107993191A/en
Application granted granted Critical
Publication of CN107993191B publication Critical patent/CN107993191B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/04Context-preserving transformations, e.g. by using an importance map

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The embodiment of the invention discloses an image processing method and device. The method includes: obtaining target image data, and inputting the target image data into a convolutional neural network model; obtaining, in the convolutional neural network model, a target label information set that matches the target image data, where the label information in the target label information set is used to mark the feature categories of the target image data; extracting, from an image special-effect library, target special-effect processing information corresponding to the target label information set; performing special-effect processing on the target image data according to the target special-effect processing information to obtain target special-effect image data; and displaying the target special-effect image data. The present invention avoids the tedious steps of applying image special effects manually, thereby improving the efficiency of image data processing.

Description

Image processing method and device
Technical field
The present invention relates to the field of computer technology, and more particularly to an image processing method and device.
Background technology
With the continuous development of image technology and the emergence of various new image applications, users take photos and record videos more and more frequently. At the same time, the tone, saturation, and brightness of images and videos captured by terminal devices have improved significantly, and simple image display and video playback can no longer satisfy users' growing demands for consumption, entertainment, and security. Higher requirements are therefore placed on the speed and quality of subsequent image processing.
Existing subsequent processing of images and videos is mainly completed by adding special-effect material or adjusting image parameters. For adding special-effect material, a user can select an existing sticker and attach it to any region of an image or video, doodle by hand directly on the image or video, or add background music to a video. For adjusting image parameters, a user can adjust parameters according to the target object and background in an image or video to highlight its content, for example applying beautification or fine-tuning to faces, or sharpening the edges of buildings to emphasize their overall outlines.
As can be seen from the above, manually adding special-effect material or manually adjusting image parameters is inefficient, making it difficult to perform subsequent processing of images and videos promptly and efficiently.
Summary of the invention
Embodiments of the present invention provide an image processing method and device that can improve the efficiency of processing images and videos.
One aspect of the present invention provides an image processing method, including:
obtaining target image data, and inputting the target image data into a convolutional neural network model;
obtaining, in the convolutional neural network model, a target label information set that matches the target image data, where the label information in the target label information set is used to mark the feature categories of the target image data; and
extracting, from an image special-effect library, target special-effect processing information corresponding to the target label information set, performing special-effect processing on the target image data according to the target special-effect processing information to obtain target special-effect image data, and displaying the target special-effect image data.
Wherein obtaining, in the convolutional neural network model, the target label information set that matches the target image data includes:
identifying the target-object contours in the target image data, and dividing the target image data into at least one piece of unit target image data according to the target-object contours; and
inputting the unit target image data into the convolutional neural network model, obtaining, in the convolutional neural network model, first label information that matches the unit target image data, and adding the first label information to the target label information set.
Wherein inputting the unit target image data into the convolutional neural network model and obtaining, in the convolutional neural network model, the first label information that matches the unit target image data includes:
inputting the unit target image data into the convolutional neural network model;
extracting a target feature of the unit target image data through convolution and pooling operations;
comparing the target feature with the feature sets in the convolutional neural network model for similarity; and
obtaining, among the feature sets, the label information corresponding to the feature with the greatest similarity as the first label information matching the unit target image data.
Wherein extracting, from the image special-effect library, the target special-effect processing information corresponding to the target label information set, performing special-effect processing on the target image data according to the target special-effect processing information to obtain the target special-effect image data, and displaying the target special-effect image data includes:
extracting, from the image special-effect library, the target special-effect processing information corresponding to the first label information in the target label information set;
finding the first image region where the unit target image data is located, and adding the multimedia material in the target special-effect processing information corresponding to the first label information to the first image region; and
determining the target image data to which the multimedia material has been added as the target special-effect image data, and displaying the target special-effect image data.
Wherein the method further includes:
obtaining auxiliary information corresponding to the target image data; and
obtaining second label information that matches the auxiliary information, and adding the second label information to the target label information set;
where the auxiliary information includes at least one of an environmental parameter, a device status parameter, and an image remark keyword;
and, before the step of determining the target image data to which the multimedia material has been added as the target special-effect image data and displaying the target special-effect image data, the method further includes:
extracting, from the image special-effect library, the target special-effect processing information corresponding to the second label information in the target label information set; and
identifying a second image region in the target image data associated with the auxiliary information, and adding the multimedia material in the target special-effect processing information corresponding to the second label information to the second image region.
Wherein determining the target image data to which the multimedia material has been added as the target special-effect image data and displaying the target special-effect image data includes:
if the target special-effect processing information further includes an image adjustment parameter, adjusting the image parameters of the target image data according to the image adjustment parameter, determining the target image data after material addition and parameter adjustment as the target special-effect image data, and displaying the target special-effect image data.
Wherein the method further includes:
if a special-effect switching instruction is received, randomly selecting special-effect processing information from the image special-effect library as random special-effect processing information, performing special-effect update processing on the target special-effect image data according to the random special-effect processing information, and obtaining updated target special-effect image data.
Wherein the method further includes:
obtaining sample image data and sample label information corresponding to the sample image data, where the sample label information is used to mark the feature categories of the sample image data; and
building the convolutional neural network model according to the mapping relationship between the sample image data and the sample label information.
Another aspect of the present invention provides an image processing device, including:
a first input module, configured to obtain target image data and input the target image data into a convolutional neural network model;
a target label obtaining module, configured to obtain, in the convolutional neural network model, a target label information set that matches the target image data, where the label information in the target label information set is used to mark the feature categories of the target image data; and
a special-effect processing module, configured to extract, from an image special-effect library, target special-effect processing information corresponding to the target label information set, perform special-effect processing on the target image data according to the target special-effect processing information to obtain target special-effect image data, and display the target special-effect image data.
Wherein the target label obtaining module includes:
an image segmentation unit, configured to identify the target-object contours in the target image data and divide the target image data into at least one piece of unit target image data according to the target-object contours; and
a first label obtaining unit, configured to input the unit target image data into the convolutional neural network model, obtain, in the convolutional neural network model, first label information that matches the unit target image data, and add the first label information to the target label information set.
Wherein the first label obtaining unit includes:
a second input subunit, configured to input the unit target image data into the convolutional neural network model;
a feature extraction subunit, configured to extract a target feature of the unit target image data through convolution and pooling operations;
a comparison subunit, configured to compare the target feature with the feature sets in the convolutional neural network model for similarity; and
a determination subunit, configured to obtain, among the feature sets, the label information corresponding to the feature with the greatest similarity as the first label information matching the unit target image data.
Wherein the special-effect processing module includes:
a first extraction unit, configured to extract, from the image special-effect library, the target special-effect processing information corresponding to the first label information in the target label information set;
a first adding unit, configured to find the first image region where the unit target image data is located, and add the multimedia material in the target special-effect processing information corresponding to the first label information to the first image region; and
a display unit, configured to determine the target image data to which the multimedia material has been added as the target special-effect image data, and display the target special-effect image data.
Wherein the device further includes:
an auxiliary information obtaining module, configured to obtain auxiliary information corresponding to the target image data; and
a second label obtaining module, configured to obtain second label information that matches the auxiliary information and add the second label information to the target label information set;
where the auxiliary information includes at least one of an environmental parameter, a device status parameter, and an image remark keyword;
and the special-effect processing module further includes:
a second extraction unit, configured to extract, from the image special-effect library, the target special-effect processing information corresponding to the second label information in the target label information set; and
a second adding unit, configured to identify a second image region in the target image data associated with the auxiliary information, and add the multimedia material in the target special-effect processing information corresponding to the second label information to the second image region.
Wherein the display unit is specifically configured to: if the target special-effect processing information further includes an image adjustment parameter, adjust the image parameters of the target image data according to the image adjustment parameter, determine the target image data after material addition and parameter adjustment as the target special-effect image data, and display the target special-effect image data.
Wherein the device further includes:
a random selection module, configured to: if a special-effect switching instruction is received, randomly select special-effect processing information from the image special-effect library as random special-effect processing information, perform special-effect update processing on the target special-effect image data according to the random special-effect processing information, and obtain updated target special-effect image data.
Wherein the device further includes:
a sample label obtaining module, configured to obtain sample image data and sample label information corresponding to the sample image data, where the sample label information is used to mark the feature categories of the sample image data; and
a building module, configured to build the convolutional neural network model according to the mapping relationship between the sample image data and the sample label information.
Another aspect of the present invention provides an image processing device, including a processor and a memory;
the processor is connected to the memory, where the memory is used to store program code and the processor is used to call the program code to perform the method in the foregoing aspect of the embodiments of the present invention.
Another aspect of the embodiments of the present invention provides a computer storage medium storing a computer program. The computer program includes program instructions that, when executed by a processor, perform the method in the foregoing aspect of the embodiments of the present invention.
In the embodiments of the present invention, target image data is obtained and input into a convolutional neural network model; using the classification function of the convolutional neural network, a target label information set that matches the target image data is obtained in the convolutional neural network model, where the label information in the target label information set is used to mark the feature categories of the target image data; target special-effect processing information corresponding to the target label information set is extracted from an image special-effect library, special-effect processing is performed on the target image data according to the target special-effect processing information to obtain target special-effect image data, and the target special-effect image data is displayed. Thus, throughout the special-effect processing of the target image data, the feature categories of the target image data are recognized automatically and special effects are applied automatically, without any manual involvement, which avoids the tedious steps of applying image special effects manually and improves the efficiency of image data processing.
Brief description of the drawings
To describe the technical solutions in the embodiments of the present invention or in the prior art more clearly, the accompanying drawings required in the description of the embodiments or the prior art are briefly introduced below. Apparently, the drawings described below show only some embodiments of the present invention, and those of ordinary skill in the art may derive other drawings from them without creative effort.
Fig. 1 is a schematic diagram of a scenario of an image processing method according to an embodiment of the present invention;
Fig. 2 is a schematic flowchart of an image processing method according to an embodiment of the present invention;
Fig. 2a and Fig. 2b are schematic interface diagrams of an image processing method according to an embodiment of the present invention;
Fig. 3 is a schematic flowchart of obtaining a target label information set according to an embodiment of the present invention;
Fig. 4 is a schematic flowchart of another image processing method according to an embodiment of the present invention;
Fig. 4a and Fig. 4b are schematic interface diagrams of another image processing method according to an embodiment of the present invention;
Fig. 5 is a schematic flowchart of another image processing method according to an embodiment of the present invention;
Fig. 5a and Fig. 5b are schematic interface diagrams of another image processing method according to an embodiment of the present invention;
Fig. 6 is a schematic flowchart of another image processing method according to an embodiment of the present invention;
Fig. 7 is a schematic structural diagram of an image processing device according to an embodiment of the present invention;
Fig. 8 is a schematic structural diagram of a first label obtaining unit according to an embodiment of the present invention;
Fig. 9 is a schematic structural diagram of another image processing device according to an embodiment of the present invention.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings in the embodiments of the present invention. Apparently, the described embodiments are only some rather than all of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
Refer to Fig. 1, a schematic diagram of a scenario of an image processing method according to an embodiment of the present invention. As shown in Fig. 1, a user may open an application 100a that stores images or videos on a smartphone (for example, a photo application or a video application) and select a picture 200a from the application 100a as the target image data requiring special-effect processing; alternatively, the user may open a camera application and take a photo or record a video in real time, using the captured photo or a frame of the captured video as the target image data. The picture 200a is input into a convolutional neural network model that has been built in advance, and the classification function of the convolutional neural network model is used to obtain the label information corresponding to the picture 200a, namely the label information "handbag" and the label information "luxury goods", which are added to the target label information set. From an image special-effect library stored locally, the target special-effect processing information corresponding to the label information "handbag" in the target label information set is found; this target special-effect processing information is a picture tag containing the words "so pretty" (a picture-type multimedia material) to be added to the target image data. Similarly, the target special-effect processing information corresponding to the label information "luxury goods" in the target label information set is found; this target special-effect processing information is a picture tag containing the words "$88888" to be added to the target image data. Therefore, according to the target special-effect processing information corresponding to the target image data, the picture tag containing "so pretty" and the picture tag containing "$88888" are added to the picture 200a. After the special-effect processing is completed, the target special-effect image data is displayed on the smartphone screen for the user to preview. If the user is satisfied with the target special-effect image data, the user can click the "confirm" button to save it to a local folder on the smartphone or upload it directly to a social networking site (for example, WeChat Moments or Qzone); if the user is not satisfied, the user can click the "cancel" button to delete it, after which special-effect processing information is randomly selected from the image special-effect library as new target special-effect processing information, a special-effect update is performed on the target image data, and the updated target special-effect image data is displayed on the smartphone screen.
The specific process of performing special-effect processing on an image is described in the embodiments corresponding to Fig. 2 to Fig. 6 below.
Further, refer to Fig. 2, a schematic flowchart of an image processing method according to an embodiment of the present invention. As shown in Fig. 2, the method may include the following steps.
Step S101: obtain target image data, and input the target image data into a convolutional neural network model.
Specifically, the terminal device may determine the target image data from a picture, or a video frame of a video, selected by the user from a photo album application or a video application, and then input the determined target image data into the convolutional neural network model. The convolutional neural network model may include an input layer, convolution layers, pooling layers, a fully connected layer, and an output layer, so the target image data is first input to the input layer of the convolutional neural network model; the convolutional neural network model is a feed-forward neural network that can detect and recognize images through its built-in model classifier. The terminal device may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a mobile internet device (MID), a point-of-sale (POS) machine, a wearable device (such as a smart watch or smart band), or any other terminal device capable of storing image or video data, and the target image data may be a picture or any video frame of a video.
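Purely for illustration, a model of this shape can be sketched as follows (a minimal sketch assuming PyTorch, which the patent does not name; the layer sizes, channel counts, and number of label categories are arbitrary assumptions):

```python
import torch
import torch.nn as nn

class EffectCNN(nn.Module):
    """CNN with input, convolution, pooling, fully connected, and output layers."""
    def __init__(self, num_label_categories: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),  # convolution layer
            nn.ReLU(),
            nn.MaxPool2d(2),                             # pooling layer
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 56 * 56, num_label_categories),  # fully connected output layer
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

# The target image data is resized to the fixed input-layer size and fed in.
model = EffectCNN()
target_image = torch.rand(1, 3, 224, 224)  # stand-in for a decoded photo or video frame
logits = model(target_image)
```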
Step S102: obtain, in the convolutional neural network model, a target label information set that matches the target image data, where the label information in the target label information set is used to mark the feature categories of the target image data.
Specifically, the target image data is input to the input layer of the convolutional neural network model. Because the convolutional neural network model has been trained with sample image data and the sample label information corresponding to the sample image data, it possesses an image classification function. The feature extraction and classification functions of the convolutional neural network model are therefore used to obtain the label information that matches the target image data, and that label information is added to the target label information set. Because of the diversity of the target objects in the target image data (for example, the target image data may contain the sky and cars as well as people and buildings), the target label information set that matches the target image data may contain one piece of label information or several. Refer to Fig. 4a, a schematic interface diagram of an image processing method according to an embodiment of the present invention. Taking Fig. 4a as an example, the target image data in Fig. 4a includes both building features and animal features, so the label information obtained from the convolutional neural network model that matches the target image data in Fig. 4a is the label information "building" and the label information "animal", and both are added to the target label information set. If the target label information set contains multiple pieces of label information, the image region where the feature corresponding to each piece of label information is located must also be determined. In Fig. 4a, the image region where the feature "puppy" corresponding to the label information "animal" is located is image region 300a, and the image region where the feature "house" corresponding to the label information "building" is located is image region 500a.
The label information in the target label information set is used to mark the feature categories of the target image data. A feature category may represent the color features and texture features of the object corresponding to an image or image region, the shape features that represent the essential attributes of the image, or the spatial-relationship features that represent the positional relationships of multiple target objects in the image. Label information may be a number or any other character information with a distinguishing meaning. For example, if the label information is the value 1, the corresponding feature category is "person"; if the label information is the value 2, the corresponding feature category is "building"; in other words, there is a mapping relationship between label information and feature categories.
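A minimal sketch of such a mapping relationship (the numeric values and category names are illustrative assumptions, not taken from the disclosure):

```python
# Mapping between label information (numeric values) and feature categories.
FEATURE_CATEGORIES = {
    1: "person",
    2: "building",
    3: "animal",
}

def category_of(label_value: int) -> str:
    return FEATURE_CATEGORIES.get(label_value, "unknown")

print(category_of(2))  # -> "building"
```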
Step S103: extract, from an image special-effect library, target special-effect processing information corresponding to the target label information set, perform special-effect processing on the target image data according to the target special-effect processing information to obtain target special-effect image data, and display the target special-effect image data.
Specifically, after the image region where the feature corresponding to each piece of label information in the target label information set is determined, the target special-effect processing information corresponding to the target label information set is extracted from the image special-effect library. The target special-effect processing information may be multimedia material to be added to the target image data (for example, when the target image data is a wedding-scene picture, the target special-effect processing information is a heart picture to be added to the target image data; when the target image data is a rain-scene picture, the target special-effect processing information is a lightning picture to be added to the target image data; when the target image data is a video frame, the target special-effect processing information is background music to be added to the target image data). The target special-effect processing information may also be used to adjust the image parameters of the target image data, which may include brightness or saturation. Adjusting the image parameters of the target image data may mean dimming or brightening its saturation, or raising or lowering its brightness (for example, if the target objects in the target image data are recognized to include the sun, the target special-effect processing information may include an image adjustment parameter for brightening the saturation of the target image data; if the target objects are recognized to include rain, the target special-effect processing information may include an image adjustment parameter for lowering the brightness of the target image data). Then, according to the extracted target special-effect processing information and the image region where the feature corresponding to each piece of label information in the target label information set is located, special-effect processing is performed in the corresponding image region. After the special-effect processing is completed, the target special-effect image data is obtained and displayed on the screen of the terminal device for the user to preview; if the user is satisfied with the target special-effect image data, it can be saved directly to the photo application or video application.
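A minimal sketch of the image-parameter adjustment just described, assuming the Pillow library (the label names and adjustment factors are assumptions):

```python
from PIL import Image, ImageEnhance

def apply_parameter_adjustment(image: Image.Image, labels: set) -> Image.Image:
    # Brighten saturation when the sun is detected; lower brightness for rain.
    if "sun" in labels:
        image = ImageEnhance.Color(image).enhance(1.3)       # raise saturation
    if "rain" in labels:
        image = ImageEnhance.Brightness(image).enhance(0.8)  # lower brightness
    return image
```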
The image special-effect library may be stored in a local file of the terminal device, in which case the target special-effect processing information is found and obtained directly from the local file; it may also be stored on a cloud server, in which case the cloud server is accessed over the network to obtain the target special-effect processing information.
Refer to Fig. 2a and Fig. 2b, schematic interface diagrams of an image processing method according to an embodiment of the present invention. As shown in Fig. 2a, the target image data selected by the user is displayed on the screen of the terminal device and input into the convolutional neural network model. The label information obtained from the convolutional neural network model that matches the target image data is "vehicle", and the label information "vehicle" is added to the target label information set; the image region where the feature "car" corresponding to the label information "vehicle" is located is determined to be image region 600a in Fig. 2a. The target special-effect processing information corresponding to the label information "vehicle" in the target label information set is found in the image special-effect library; it is an exhaust-gas picture to be added to the target image data. After the terminal device extracts the exhaust-gas picture from the local file or the cloud server, the picture is added to image region 600a in the target image data, producing the target special-effect image data with the exhaust-gas picture added, as shown in Fig. 2b.
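A minimal sketch of adding a multimedia material to a located image region, as in the exhaust-gas example above, assuming Pillow (the file names and region coordinates are assumptions):

```python
from PIL import Image

def add_material(target_path: str, material_path: str,
                 region_topleft: tuple) -> Image.Image:
    target = Image.open(target_path).convert("RGBA")
    material = Image.open(material_path).convert("RGBA")
    # Paste the material into the located image region, using its alpha
    # channel as the mask so transparent pixels are skipped.
    target.paste(material, region_topleft, material)
    return target

effect_image = add_material("car.png", "exhaust.png", (600, 400))
```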
Further, refer to Fig. 3, a schematic flowchart of obtaining a target label information set according to an embodiment of the present invention. As shown in Fig. 3, steps S201 to S205 describe step S102 in the embodiment corresponding to Fig. 2 in detail; that is, steps S201 to S205 form a specific process for obtaining a target label information set according to an embodiment of the present invention, and may include the following steps.
Step S201: identify the target-object contours in the target image data, and divide the target image data into at least one piece of unit target image data according to the target-object contours.
Specifically, if the target image data is a color image, the color features of the target image data may be extracted by computing its color histogram, and the target-object contours in the target image data may then be identified; a color histogram is a statistical chart of the proportions of different colors in the whole target image data. Alternatively, the texture features of the target image data may be extracted by computing four key features of the gray-level co-occurrence matrix (energy, inertia, entropy, and correlation), and the target-object contours identified from them. The target image data is divided into one or more pieces of unit target image data according to the identified target-object contours, and the image region where each piece of unit target image data is located is recorded so that special-effect processing can subsequently be performed in the corresponding image region; each piece of unit target image data is an image region of the target image data with distinctive properties. Refer to Fig. 4b, a schematic interface diagram of another image processing method according to an embodiment of the present invention. Taking Fig. 4b as an example, the target objects in the target image data include both an animal and a building, so the target image data is divided into two pieces of unit target image data: one contains the animal feature image, and the other contains the building feature image.
Optionally, the target image data may be divided into one or more pieces of unit target image data by directly invoking an image segmentation algorithm (for example, a threshold-based, region-based, or edge-based segmentation algorithm), while recording the image region where each piece of unit target image data is located, as in the sketch below.
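A minimal sketch of such a segmentation step, assuming OpenCV with Otsu thresholding (one of the algorithm families named above; region- or edge-based algorithms would fit equally well):

```python
import cv2

def split_into_units(image_path: str):
    image = cv2.imread(image_path)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    units = []
    for contour in contours:
        x, y, w, h = cv2.boundingRect(contour)   # record the image region
        units.append(((x, y, w, h), image[y:y + h, x:x + w]))
    return units  # each entry: (image region, unit target image data)
```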
For ease of understanding, the embodiments of the present invention take a single piece of unit target image data as an example when further describing the subsequent steps S202 to S205.
Step S202: input the unit target image data into the convolutional neural network model.
Specifically, to improve the accuracy of image recognition, the unit target image data obtained by segmentation may be resized to a fixed size and then input to the input layer of the convolutional neural network model. The pieces of unit target image data may be input into the convolutional neural network model in random order, or sequentially in the order in which they were obtained. It should be understood that the convolutional neural network has been built in advance, and the parameter size of the input layer equals the size of the resized unit target image data.
Step S203: extract the target feature of the unit target image data through convolution and pooling operations.
Specifically, after the unit target image data is input to the input layer of the convolutional neural network, it passes to the convolution layers. A small block of the unit target image data is first selected at random as a sample, and some features are learned from this small sample; this sample is then slid as a window across all pixel regions of the unit target image data. In other words, the features learned from the sample are convolved with the unit target image data, yielding the most significant features at different positions of the unit target image data. After the convolution operation, the features of the unit target image data have been extracted, but the number of features extracted by convolution alone is large. To reduce the amount of computation, a pooling operation is also needed: the features extracted from the unit target image data by convolution are passed to a pooling layer, which performs aggregate statistics on them. The magnitude of these aggregated features is far lower than that of the features extracted by convolution, and the classification effect is also improved. Common pooling methods include average pooling and max pooling: average pooling computes an average feature in a feature set to represent that set, while max pooling extracts the maximum feature in a feature set to represent that set. Through convolution and pooling operations, the most significant target features of the unit target image data can be extracted while keeping their number small. Note that the convolutional neural network may have one or multiple convolution layers, and likewise one or multiple pooling layers.
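A minimal sketch of the convolution and pooling operations just described, assuming PyTorch (the tensor sizes and kernel counts are assumptions):

```python
import torch
import torch.nn.functional as F

unit_image = torch.rand(1, 3, 112, 112)      # a resized unit target image
kernel = torch.rand(8, 3, 5, 5)              # learned "sample" windows

feature_maps = F.conv2d(unit_image, kernel)  # convolution operation
max_pooled = F.max_pool2d(feature_maps, 2)   # max pooling: keep the strongest feature
avg_pooled = F.avg_pool2d(feature_maps, 2)   # average pooling: mean feature
print(feature_maps.shape, max_pooled.shape)  # pooling shrinks the feature count
```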
Step S204: compare the target feature with the feature sets in the convolutional neural network model for similarity.
Specifically, the output layer of the trained convolutional neural network model is a classifier. The number of nodes of the classifier is the same as the number of categories of the sample label information, and also consistent with the number of feature categories of the sample image data; each category node of the classifier contains the feature set extracted from the sample image data corresponding to that sample label information category, so there are as many feature sets as there are sample label information categories. After the target feature of the unit target image data is extracted, it is compared for similarity with the feature sets in the output layer of the convolutional neural network model. The similarity may be measured by distance: the smaller the distance between two features, the greater their similarity. The distance metric may be the Euclidean distance, the Mahalanobis distance, or the Hamming distance.
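A minimal sketch of this similarity comparison using the Euclidean distance (the Mahalanobis or Hamming metrics named above could be substituted; representing each feature set by an array is an assumption):

```python
import numpy as np

def most_similar(target_feature: np.ndarray, feature_sets: dict) -> str:
    # Smaller distance means greater similarity; return the label information
    # whose feature set lies closest to the extracted target feature.
    return min(feature_sets,
               key=lambda label: np.linalg.norm(feature_sets[label] - target_feature))
```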
Step S205: among the feature sets, obtain the label information corresponding to the feature with the greatest similarity as the first label information that matches the unit target image data, and add the first label information to the target label information set.
Specifically, after the target feature is compared with the feature sets in the convolutional neural network model, the sample label information corresponding to the feature with the greatest similarity is obtained, determined as the first label information matching the unit target image data, and added to the target label information set. In other words, the target label information set may contain the first label information, which can subsequently be used to find the corresponding target special-effect processing information and perform special-effect processing.
For example, the output layer of the convolutional neural network model has three feature sets: feature set A was extracted from the sample image data corresponding to the sample label information "person"; feature set B from the sample image data corresponding to the sample label information "building"; and feature set C from the sample image data corresponding to the sample label information "animal". The target feature extracted from the unit target image data is feature "d". Feature "d" is measured against the features in feature sets A, B, and C by distance. If the distance between feature "d" and the features in feature set B is the smallest, the first label information corresponding to the unit target image data is confirmed to be the sample label information corresponding to feature set B; that is, the first label information matching the unit target image data is "building", and the first label information "building" is added to the target label information set.
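Applying most_similar from the previous sketch to this example (the feature values are made up for illustration):

```python
import numpy as np

# Made-up stand-ins for feature sets A, B, and C.
feature_sets = {
    "person":   np.array([0.9, 0.1, 0.0]),  # feature set A
    "building": np.array([0.1, 0.8, 0.2]),  # feature set B
    "animal":   np.array([0.0, 0.2, 0.9]),  # feature set C
}
d = np.array([0.2, 0.7, 0.3])               # target feature "d"
print(most_similar(d, feature_sets))        # -> "building"
```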
In the embodiments of the present invention, target image data is obtained and input into a convolutional neural network model; using the classification function of the convolutional neural network model, a target label information set that matches the target image data is obtained in the convolutional neural network model, where the label information in the target label information set is used to mark the feature categories of the target image data; target special-effect processing information corresponding to the target label information set is extracted from an image special-effect library, special-effect processing is performed on the target image data according to the target special-effect processing information to obtain target special-effect image data, and the target special-effect image data is displayed. Thus, throughout the special-effect processing of the target image data, the feature categories of the target image data are recognized automatically and special effects are applied automatically, without manual involvement, which avoids the tedious steps of applying image special effects manually and improves the efficiency of image data processing. Moreover, multiple feature categories in the target image data are recognized at the same time, and the target special-effect processing information corresponding to each piece of label information in the target label information set is executed separately, which enriches the special-effect processing modes and improves the special-effect processing results.
Further, refer to Fig. 4, a schematic flowchart of another image processing method according to an embodiment of the present invention. As shown in Fig. 4, the method may include the following steps.
Step S301: obtain target image data, and input the target image data into a convolutional neural network model.
Step S302: obtain, in the convolutional neural network model, a target label information set that matches the target image data, where the label information in the target label information set is used to mark the feature categories of the target image data.
The specific implementation of steps S301 and S302 can be found in the description of steps S101 and S102 in the embodiment corresponding to Fig. 2 above, and will not be repeated here.
Step S303: extract, from the image special-effect library, the target special-effect processing information corresponding to the first label information in the target label information set.
Specifically, according to the first label information in the target label information set, the target special-effect processing information corresponding to the first label information is extracted from the image special-effect library. The target special-effect processing information may include text or pictures to be added to the target image data, or a purchase link to be added to a target object in the target image data. If the target label information set contains more than one piece of first label information, the target special-effect processing information corresponding to each piece of first label information is extracted from the image special-effect library separately. The process of obtaining the first label information is described in steps S201 to S205 in Fig. 3 above.
Step S304: find the first image region where the unit target image data is located, and add the multimedia material in the target special-effect processing information corresponding to the first label information to the first image region.
Specifically, because the unit target image data is obtained by segmenting along the contours of the target objects in the target image data, and so that the subsequent special-effect processing better fits the image content of the target image data, the first image region where the unit target image data is located is found first, and the multimedia material in the target special-effect processing information extracted from the image special-effect library for the first label information is then added to that first image region. If the target image data contains multiple pieces of unit target image data, the first image region where each piece of unit target image data is located is found separately, and the multimedia material in the target special-effect processing information corresponding to each piece of first label information matching a piece of unit target image data is added to the corresponding first image region in turn. Optionally, if an association relationship is detected between the first label information corresponding to two or more pieces of unit target image data (association relationships may be preset; for example, label information A may be preset to be associated with label information B, and also with label information C), and the pixel distance between the target objects in these pieces of unit target image data is recognized to be less than a distance threshold, then the target special-effect processing information jointly mapped by the first label information corresponding to these pieces of unit target image data may be extracted, and the multimedia material in that target special-effect processing information added to a first image region containing these pieces of unit target image data (that is, a first image region containing the target objects in these pieces of unit target image data). For example, two pieces of unit target image data may be divided from the target image data, where the first label information corresponding to one is "woman" and the first label information corresponding to the other is "man" (the first label information "woman" and the first label information "man" have an association relationship), and the pixel distance between the target objects (the figures) in the two pieces of unit target image data is less than the distance threshold. The target special-effect processing information jointly mapped by the first label information "woman" and the first label information "man" can then be found in the image special-effect library (the multimedia material in that target special-effect processing information is a heart picture), the image region jointly occupied by the two pieces of unit target image data is determined as the first image region (that is, the first image region contains both the woman's image and the man's image), and the heart picture in the target special-effect processing information is added to that first image region, for example to the area between the man's image and the woman's image, as in the sketch below.
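A minimal sketch of this association rule (the association table, the distance threshold, and the representation of a unit as a centre point plus label are all assumptions):

```python
ASSOCIATED_LABELS = {("woman", "man"), ("man", "woman")}
DISTANCE_THRESHOLD = 150  # pixels

def co_mapped_position(unit_a, unit_b):
    (xa, ya), label_a = unit_a
    (xb, yb), label_b = unit_b
    distance = ((xa - xb) ** 2 + (ya - yb) ** 2) ** 0.5
    if (label_a, label_b) in ASSOCIATED_LABELS and distance < DISTANCE_THRESHOLD:
        # The first image region spans both objects; the co-mapped material
        # (e.g. a heart picture) goes between them.
        return ((xa + xb) // 2, (ya + yb) // 2)
    return None

print(co_mapped_position(((120, 200), "woman"), ((220, 210), "man")))  # (170, 205)
```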
Step S305: determine the target image data to which the multimedia material has been added as the target special-effect image data, and display the target special-effect image data.
Specifically, the target image data to which the multimedia material has been added is determined as the target special-effect image data and displayed on the screen of the terminal device. The user can click the "save" button to save the target special-effect image data directly to a local folder, or click the "delete" button to delete it.
Refer again to Fig. 4a and Fig. 4b. By identifying the target-object contours in the target image data in Fig. 4a, the target image data is divided into two pieces of unit target image data, one containing the house feature image and the other containing the puppy feature image, and the first image regions where the two pieces of unit target image data are located are found to be image region 500a and image region 300a respectively. The two pieces of unit target image data are input into the convolutional neural network model separately, and the feature extraction and image classification functions of the convolutional neural network model are used to obtain "building" as the first label information matching the unit target image data containing the house feature image, and "animal" as the first label information matching the unit target image data containing the puppy feature image. From the image special-effect library, the target special-effect processing information corresponding to the first label information "building" is extracted, namely a glitter picture to be added to the target image data, and the target special-effect processing information corresponding to the first label information "animal" is obtained, namely a footprint picture to be added to the target image data. The glitter picture is added to the first image region 500a and the footprint picture to the first image region 300a, producing the target special-effect image data shown in Fig. 4b, which is displayed on the screen of the terminal device.
In the embodiments of the present invention, target image data is obtained and input into a convolutional neural network model; using the classification function of the convolutional neural network model, a target label information set that matches the target image data is obtained in the convolutional neural network model, where the label information in the target label information set is used to mark the feature categories of the target image data; target special-effect processing information corresponding to the target label information set is extracted from an image special-effect library, special-effect processing is performed on the target image data according to the target special-effect processing information to obtain target special-effect image data, and the target special-effect image data is displayed. Thus, throughout the special-effect processing of the target image data, the feature categories of the target image data are recognized automatically and special effects are applied automatically, without manual involvement, which avoids the tedious steps of applying image special effects manually and improves the efficiency of image data processing. Moreover, multiple feature categories in the target image data are recognized at the same time, and the target special-effect processing information corresponding to each piece of label information in the target label information set is executed separately, which enriches the special-effect processing modes and improves the special-effect processing results.
Further, refer to Fig. 5, a schematic flowchart of another image processing method according to an embodiment of the present invention. As shown in Fig. 5, the method may include the following steps.
Step S401: obtain target image data, and input the target image data into a convolutional neural network model.
Step S402: obtain, in the convolutional neural network model, a target label information set that matches the target image data, where the label information in the target label information set is used to mark the feature categories of the target image data.
The specific implementation of steps S401 and S402 can be found in the description of steps S101 and S102 in the embodiment corresponding to Fig. 2 above, and will not be repeated here.
Step S403: extract, from the image special-effect library, the target special-effect processing information corresponding to the first label information in the target label information set.
Step S404: find the first image region where the unit target image data is located, and add the multimedia material in the target special-effect processing information corresponding to the first label information to the first image region.
The specific implementation of steps S403 and S404 can be found in the description of steps S303 and S304 in the embodiment corresponding to Fig. 4 above, and the process of obtaining the first label information is described in steps S201 to S205 in Fig. 3; neither will be repeated here.
Step S405: obtain the auxiliary information corresponding to the target image data.
Specifically, the terminal device obtains auxiliary information corresponding to the target image data; the auxiliary information includes at least one of an environmental parameter, a device status parameter, and an image remark keyword. The environmental parameter includes one or more of time, position, and weather; the device status parameter includes one or more of temperature, acceleration, and speed; the image remark keyword is obtained by extracting keywords from remark information entered by the user, using a keyword extraction algorithm (for example, the term frequency-inverse document frequency algorithm, a topic model algorithm, or a text ranking algorithm). When the environmental parameter is the time, the EXIF (Exchangeable Image File) information of the target image data is read, and the time corresponding to the target image data is obtained from the EXIF information. When the environmental parameter is the position and the target image data was captured by the user in real time with a camera application, the position of the terminal device when the target image data was taken can be obtained through GPS (Global Positioning System) and then identified to determine the position corresponding to the target image data; if the target image data was extracted from the local photo album application or a video application, the position corresponding to the target image data can also be obtained from the EXIF information. When the environmental parameter is the weather, the weather corresponding to the target image data is obtained by querying the weather at the time corresponding to the target image data. When the device status parameter is the temperature, the temperature corresponding to the target image data is obtained through a thermometer. When the device parameter is the acceleration, the change of acceleration is sensed by a G-sensor (accelerometer) to obtain the acceleration corresponding to the target image data; similarly, when the device parameter is the speed, the change of speed is sensed by a speed sensor to obtain the speed corresponding to the target image data.
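A minimal sketch, assuming Pillow, of reading the shooting time from the EXIF information of the target image data:

```python
from PIL import Image
from PIL.ExifTags import TAGS

def exif_time(image_path: str):
    exif = Image.open(image_path).getexif()
    for tag_id, value in exif.items():
        if TAGS.get(tag_id) == "DateTime":
            return value  # e.g. "2017:11:30 23:00:00"
    return None
```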
Step S406: obtain second label information matching the auxiliary information, and add the second label information to the target label information set;
Specifically, after the auxiliary information corresponding to the target image data is obtained, second label information matching the auxiliary information is obtained and added to the target label information set. It should be noted that the target label information set then contains both the first label information obtained from the convolutional neural network and the second label information matching the auxiliary information. After the time in the environmental parameter is obtained, the time range containing that time is recognized, and the second label information corresponding to that range is added to the target label information set; for example, if the time obtained from the EXIF information is 23:00, the range containing 23:00 is recognized as late night, so the second label information corresponding to late night is "night", and "night" is added to the target label information set. After the position in the environmental parameter is obtained, the second label information corresponding to the position is recognized and added to the set; for example, if GPS positioning identifies the position corresponding to the target image data as the Imperial Palace in Beijing, the corresponding second label information "building" is added to the set. After the weather in the environmental parameter is obtained, the second label information corresponding to that weather is added; for example, if a weather application on the terminal device reports the weather corresponding to the target image data as "sunny", the corresponding second label information "sunlight" is added to the set. After the temperature in the device status parameter is obtained, the second label information corresponding to the temperature is recognized and added to the set; the second label information corresponding to the speed, the acceleration, and the image remark keyword can be added to the target label information set in the same way. A rule-based mapping of this kind is sketched below.
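A minimal sketch of such a mapping follows, assuming the auxiliary information has already been collected into a plain dictionary; the rules and label vocabulary simply mirror the examples above and are not exhaustive.

```python
# Sketch: derive second label information from auxiliary information.
# The thresholds and label strings mirror the examples in the text.
def second_labels(aux):
    labels = []
    hour = aux.get("hour")
    if hour is not None and (hour >= 22 or hour < 5):  # late-night range
        labels.append("night")
    if aux.get("position") == "Imperial Palace, Beijing":
        labels.append("building")
    if aux.get("weather") == "sunny":
        labels.append("sunlight")
    return labels

# e.g. second_labels({"hour": 23, "weather": "sunny"}) -> ["night", "sunlight"]
```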
Step S407: in the image special effect library, extract target special effect processing information corresponding to the second label information in the target label information set;
Specifically, for the second label information in the target label information set, the corresponding target special effect processing information is extracted from the image special effect library; the target special effect processing information may include text to be added to the target image data, a picture to be added, or a purchase link for a target object in the target image data, as in the library sketched below. If the target label information set contains more than one piece of second label information, the target special effect processing information corresponding to each piece is extracted from the image special effect library separately.
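A sketch of such a lookup follows, with the image special effect library modelled as a plain dictionary; the entries and file names are illustrative assumptions only.

```python
# Sketch: the image special effect library as a label-keyed dictionary.
EFFECT_LIBRARY = {
    "building":   {"material": "flash.png"},
    "rain scene": {"material": "lightning.png"},
    "sunlight":   {"material": "lens_flare.png"},
}

def effects_for(labels):
    """Extract the target special effect processing info for each label."""
    return [EFFECT_LIBRARY[label] for label in labels if label in EFFECT_LIBRARY]
```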
Step S408: recognize, in the target image data, a second image region associated with the auxiliary information, and add the multimedia material in the target special effect processing information corresponding to the second label information to the second image region;
Specifically, since the second label information is obtained from the auxiliary information corresponding to the target image data, in order to make the subsequent special effect processing fit the image content of the target image data more closely, the second image region associated with the auxiliary information is first recognized in the target image data, and then the multimedia material in the target special effect processing information extracted from the image special effect library for the second label information is added to that second image region. If the target label information set contains multiple pieces of second label information, the second image region associated with the auxiliary information matching each piece is recognized separately, the target special effect processing information corresponding to each piece is extracted, and the multimedia material in each piece's target special effect processing information is added in turn to its associated second image region.
Step S409: if the target special effect processing information further includes an image adjustment parameter, adjust the image parameters of the target image data according to the image adjustment parameter, determine the target image data with the multimedia material added and the parameters adjusted as the target special effect image data, and display the target special effect image data.
Specifically, after the multimedia material in the target special effect processing information corresponding to the first label information has been added to the first image region of the target image data, and the multimedia material in the target special effect processing information corresponding to the second label information has been added to the second image region, if the target special effect processing information further includes an image adjustment parameter, the image parameters of the target image data are adjusted; the image parameters may include brightness or saturation (for example, if the target object recognized in the target image data includes a face, the target special effect processing information may include an image adjustment parameter for lightening the target image data), as sketched below. The target image data with the multimedia material corresponding to the first label information, the multimedia material corresponding to the second label information, and the parameter adjustment applied is determined as the target special effect image data and displayed on the screen of the terminal device.
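The following is a minimal sketch of applying such image adjustment parameters, assuming they are simple multiplicative factors; PIL's ImageEnhance is one possible realisation, not the one prescribed here.

```python
# Sketch: adjust brightness and saturation of the target image data.
from PIL import Image, ImageEnhance

def apply_adjustments(img: Image.Image, brightness=1.0, saturation=1.0):
    img = ImageEnhance.Brightness(img).enhance(brightness)  # >1.0 lightens
    img = ImageEnhance.Color(img).enhance(saturation)       # >1.0 saturates
    return img
```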
Further, refer to Fig. 5a and Fig. 5b, interface diagrams of another image processing method provided by an embodiment of the present invention. The target image data in Fig. 5a is input into the convolutional neural network model; the first target label information obtained from the model for the target image data is "building", and the first image region is determined as the image region containing the building feature image, i.e. the first image region in Fig. 5b. The auxiliary information corresponding to the target image data, obtained from the weather application in the terminal device, is "heavy rain"; the second target label information corresponding to the auxiliary information "heavy rain" is recognized as "rain scene", and the second image region associated with "heavy rain" in the target image data is determined as the image region containing the sky feature image, i.e. the second image region in Fig. 5b. In the image special effect library, the target special effect processing information corresponding to the first label information "building" is found to be a flash picture to be added to the target image data, and the target special effect processing information corresponding to the second label information "rain scene" is found to be a lightning picture to be added. The flash picture is added to the first image region in Fig. 5b and the lightning picture to the second image region, yielding the target special effect image data shown in Fig. 5b, which is displayed on the screen of the terminal device.
In the embodiment of the present invention, target image data is obtained and input into a convolutional neural network model; using the classification capability of the model, a target label information set matching the target image data is obtained, the label information in the set marking the feature categories of the target image data; in the image special effect library, target special effect processing information corresponding to the target label information set is extracted, special effect processing is applied to the target image data according to that information, and the resulting target special effect image data is displayed. Thus, throughout the special effect processing of the target image data, the feature categories of the target image data are recognized and the special effects applied automatically, without manual involvement, avoiding the tedious steps of applying image special effects by hand and improving the efficiency of image data processing. Moreover, by recognizing multiple feature categories in the target image data at the same time and executing the target special effect processing information corresponding to each piece of label information in the target label information set, the special effect processing modes are enriched and the special effect processing result is improved.
Further, refer to Fig. 6, a flow diagram of another image processing method provided by an embodiment of the present invention. As shown in Fig. 6, the image processing method includes the following steps:
Step S501: obtain sample image data and sample label information corresponding to the sample image data, the sample label information being used to mark the feature categories of the sample image data;
Specifically, the terminal device may download image data from an image database as sample image data and set corresponding sample label information for each piece of sample image data according to its image content, or may obtain image data that already carries sample label information for each piece. The sample label information marks the feature categories of the sample image data and may be a number or any other character with a distinguishing meaning. For example, when the sample label information is "person" or the value 1, the corresponding sample image data contains feature categories such as man, woman, child, or elderly person; when it is "animal" or the value 2, the corresponding sample image data contains feature categories such as cat, mouse, or dog; when it is "building" or the value 3, the corresponding sample image data contains feature categories such as the Great Wall, the Forbidden City, or the Tower of London. A minimal pairing of samples and labels is sketched below.
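An illustrative pairing of sample image data with sample label information, assuming hypothetical file names; the numeric codes mirror the examples above.

```python
# Sketch: sample label vocabulary and labelled training samples.
SAMPLE_LABELS = {"person": 1, "animal": 2, "building": 3}
TRAINING_SAMPLES = [
    ("A1.jpg", "person"),   ("A2.jpg", "person"),   ("A3.jpg", "person"),
    ("B1.jpg", "animal"),   ("B2.jpg", "animal"),   ("B3.jpg", "animal"),
    ("C1.jpg", "building"), ("C2.jpg", "building"), ("C3.jpg", "building"),
]
```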
Step S502: build the convolutional neural network model according to the mapping relations between the sample image data and the sample label information;
Specifically, every piece of sample image data carries corresponding sample label information, and pieces with identical sample label information form one class. Taking classes as units, the classes of sample image data are input into the input layer of the convolutional neural network model to build the model; the parameter size of the input layer equals the size of the input sample image data. After sample image data is fed to the input layer, a small patch of the sample image data is first selected at random as a sample, and some features are learned from this small sample; this sample is then slid as a window across all pixel regions of the sample image data, that is, the features learned from the sample are convolved with the sample image data, yielding the most salient features at different positions of the sample image data. After the convolution operation, the features of the sample image data have been extracted, but the number of features extracted by convolution alone is large; to reduce the amount of computation, a pooling operation is also needed, i.e. aggregate statistics are computed over the features extracted by convolution. The order of magnitude of these pooled statistics is far lower than that of the convolved features, while the classification result is also improved. Common pooling methods include average pooling and maximum pooling: average pooling computes one average feature of a feature set to represent that set, while maximum pooling extracts the largest feature of a feature set to represent it. Through convolution and pooling, the most salient sample features of the sample image data can be extracted, and the number of such features is small.
The output layer of the convolutional neural network is a classifier. Because the input sample image data carries sample label information and the model is built with identical sample label information treated as one class, the number of classification nodes of the classifier equals the number of sample label information classes, which in turn equals the number of feature categories of the sample image data; in other words, however many classes of sample label information are input, that is how many classification nodes the output layer of the built model has. For example, suppose the sample image data labeled "person" is A1, A2, A3, the sample image data labeled "animal" is B1, B2, B3, and the sample image data labeled "building" is C1, C2, C3. From the labeled sample image data A1–A3, B1–B3, C1–C3, the convolutional neural network model built through convolution and pooling operations has a classifier with 3 nodes at its output layer, corresponding to the sample label information "person", "animal", and "building"; the 3 nodes also correspond to 3 feature sets T1, T2, T3, where the features in T1 are extracted from A1, A2, A3, the features in T2 from B1, B2, B3, and the features in T3 from C1, C2, C3. A model of this shape is sketched below.
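A minimal PyTorch sketch of a model with this shape follows; the layer sizes, kernel sizes, and 224×224 input are assumptions, and only the structural point matters: convolution and pooling layers extract features, and the classifier has one output node per sample label class (three here, matching the example).

```python
# Sketch: CNN whose classifier has one node per sample label class.
import torch.nn as nn

class LabelCNN(nn.Module):
    def __init__(self, num_label_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                       # maximum pooling aggregation
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AvgPool2d(2),                       # average pooling aggregation
        )
        self.classifier = nn.Linear(32 * 56 * 56, num_label_classes)

    def forward(self, x):                          # x: (N, 3, 224, 224)
        f = self.features(x)
        return self.classifier(f.flatten(1))       # one score per label class
```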
Step S503: obtain target image data, and input the target image data into the convolutional neural network model;
Step S504: obtain, in the convolutional neural network model, a target label information set matching the target image data, where the label information in the target label information set is used to mark the feature categories of the target image data;
Step S505: in the image special effect library, extract target special effect processing information corresponding to the target label information set, perform special effect processing on the target image data according to the target special effect processing information, obtain target special effect image data, and display the target special effect image data;
The specific implementation of steps S503–S505 may refer to the description of steps S101–S103 in the embodiment corresponding to Fig. 2 above, and is not repeated here.
Step S506: if a special effect switching instruction is received, randomly select special effect processing information from the image special effect library as random special effect processing information, and perform special effect update processing on the target special effect image data according to the random special effect processing information to obtain updated target special effect image data;
Specifically, the user previews the generated target special effect image data on the screen; if unsatisfied with it, the user taps a "random" button, generating a special effect switching instruction. When the terminal device receives the special effect switching instruction, it randomly selects target special effect processing information from the image special effect library as random target special effect processing information (a selection of this kind is sketched below), performs special effect processing on the target special effect image data accordingly, obtains the updated target special effect image data, and displays it on the screen of the terminal device. The detailed process of generating the updated target special effect image data may refer to steps S303–S305 in Fig. 4 or steps S403–S409 in Fig. 5, and is not repeated here.
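A sketch of the random selection step, assuming the image special effect library is the label-keyed dictionary sketched earlier:

```python
# Sketch: pick one entry of the special effect library at random.
import random

def random_effect(effect_library):
    label = random.choice(list(effect_library))
    return effect_library[label]  # random target special effect processing info
```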
After the target special effect image data is generated, if the user directly taps a "save" button, it can be concluded that the target label information set obtained from the convolutional neural network model for the target image data accurately recognizes the feature categories of the target image data (meeting the user's expectation). The target image data and the label information in its corresponding target label information set can therefore be fed back into the convolutional neural network model as sample image data and sample label information, to keep optimizing the model's parameters so that more accurate target label information sets are generated for subsequent target image data, improving the image recognition accuracy of the convolutional neural network model.
In the embodiment of the present invention, target image data is obtained and input into a convolutional neural network model; using the classification capability of the model, a target label information set matching the target image data is obtained, the label information in the set marking the feature categories of the target image data; in the image special effect library, target special effect processing information corresponding to the target label information set is extracted, special effect processing is applied to the target image data according to that information, and the resulting target special effect image data is displayed. Thus, throughout the special effect processing of the target image data, the feature categories of the target image data are recognized and the special effects applied automatically, without manual involvement, avoiding the tedious steps of applying image special effects by hand and improving the efficiency of image data processing. Moreover, by recognizing multiple feature categories in the target image data at the same time and executing the target special effect processing information corresponding to each piece of label information in the target label information set, the special effect processing modes are enriched and the special effect processing result is improved.
Further, refer to Fig. 7, a structural diagram of an image processing apparatus provided by an embodiment of the present invention. As shown in Fig. 7, the image processing apparatus 1 can be applied to the terminal device in the embodiment corresponding to Fig. 2 above, and may include: a first input module 10, a target label acquisition module 20, and a special effect processing module 30;
the first input module 10 is configured to obtain target image data and input the target image data into the convolutional neural network model;
the target label acquisition module 20 is configured to obtain, in the convolutional neural network model, a target label information set matching the target image data, where the label information in the target label information set is used to mark the feature categories of the target image data;
the special effect processing module 30 is configured to, in the image special effect library, extract target special effect processing information corresponding to the target label information set, perform special effect processing on the target image data according to the target special effect processing information, obtain target special effect image data, and display the target special effect image data.
The specific function implementations of the first input module 10, the target label acquisition module 20, and the special effect processing module 30 may refer to steps S101–S103 in the embodiment corresponding to Fig. 2 above, and are not repeated here.
Referring again to Fig. 7, the image processing apparatus 1 may further include: a sample label acquisition module 40, a building module 50, an auxiliary information acquisition module 60, a second label acquisition module 70, and a random selection module 80;
the sample label acquisition module 40 is configured to obtain sample image data and sample label information corresponding to the sample image data, the sample label information being used to mark the feature categories of the sample image data;
the building module 50 is configured to build the convolutional neural network model according to the mapping relations between the sample image data and the sample label information;
the auxiliary information acquisition module 60 is configured to obtain auxiliary information corresponding to the target image data;
the second label acquisition module 70 is configured to obtain second label information matching the auxiliary information and add the second label information to the target label information set;
the random selection module 80 is configured to, if a special effect switching instruction is received, randomly select target special effect processing information from the image special effect library as random target special effect processing information, and perform special effect update processing on the target special effect image data according to the random target special effect processing information to obtain updated target special effect image data.
The specific function implementations of the sample label acquisition module 40 and the building module 50 may refer to steps S501–S502 in the embodiment corresponding to Fig. 6 above; those of the auxiliary information acquisition module 60 and the second label acquisition module 70 may refer to steps S404–S405 in the embodiment corresponding to Fig. 5 above; that of the random selection module 80 may refer to step S506 in the embodiment corresponding to Fig. 6 above; none is repeated here.
Further, as shown in Fig. 7, the target label acquisition module 20 includes: an image segmentation unit 201 and a first label acquisition unit 202;
the image segmentation unit 201 is configured to recognize the target object contours in the target image data and divide the target image data into at least one piece of unit target image data according to the target object contours;
the first label acquisition unit 202 is configured to input the unit target image data into the convolutional neural network model, obtain in the convolutional neural network model first label information matching the unit target image data, and add the first label information to the target label information set.
The specific function implementations of the image segmentation unit 201 and the first label acquisition unit 202 may refer to steps S201–S202 in the embodiment corresponding to Fig. 3 above, and are not repeated here.
Further, as shown in Fig. 7, the special effect processing module 30 includes: a first extraction unit 301, a first adding unit 302, a display unit 303, a second extraction unit 304, and a second adding unit 305;
the first extraction unit 301 is configured to, in the image special effect library, extract target special effect processing information corresponding to the first label information in the target label information set;
the first adding unit 302 is configured to locate the first image region where the unit target image data is located, and add the multimedia material in the target special effect processing information corresponding to the first label information to the first image region;
the display unit 303 is configured to determine the target image data with the multimedia material added as the target special effect image data, and display the target special effect image data;
the display unit 303 is specifically configured to, if the target special effect processing information further includes an image adjustment parameter, adjust the image parameters of the target image data according to the image adjustment parameter, determine the target image data with the multimedia material added and the parameters adjusted as the target special effect image data, and display the target special effect image data.
the second extraction unit 304 is configured to, in the image special effect library, extract target special effect processing information corresponding to the second label information in the target label information set;
the second adding unit 305 is configured to recognize, in the target image data, the second image region associated with the auxiliary information, and add the multimedia material in the target special effect processing information corresponding to the second label information to the second image region.
The specific function implementations of the first extraction unit 301, the first adding unit 302, the display unit 303, the second extraction unit 304, and the second adding unit 305 may refer to steps S303–S305 in the embodiment corresponding to Fig. 4 above and steps S406–S409 in the embodiment corresponding to Fig. 5 above, and are not repeated here.
Further, refer to Fig. 8, a structural diagram of a first label acquisition unit provided by an embodiment of the present invention. As shown in Fig. 8, the first label acquisition unit 202 includes: a second input subunit 2021, a feature extraction subunit 2022, a comparison subunit 2023, and a determination subunit 2024;
the second input subunit 2021 is configured to input the unit target image data into the convolutional neural network model;
Specifically, to improve the accuracy of image recognition, the second input subunit 2021 may adjust the segmented unit target image data to a fixed size (as sketched below) and then input the resized unit target image data into the input layer of the convolutional neural network model. The second input subunit 2021 may input the segmented pieces of unit target image data into the convolutional neural network model in random order, or in the order in which they were obtained. It should be understood that the convolutional neural network has been built in advance, and the parameter size of its input layer equals the size of the resized unit target image data.
the feature extraction subunit 2022 is configured to extract the target features of the unit target image data through convolution and pooling operations;
Specifically, after the second input subunit 2021 feeds the unit target image data into the input layer of the convolutional neural network, it enters the convolutional layer. The feature extraction subunit 2022 first selects a small patch of the unit target image data at random as a sample and learns some features from this small sample; it then slides this sample as a window across all pixel regions of the unit target image data, that is, the features learned from the sample are convolved with the unit target image data, yielding the most salient features at different positions of the unit target image data. After the convolution operation, the feature extraction subunit 2022 has extracted the features of the unit target image data, but the number of features extracted by convolution alone is large; to reduce the amount of computation, a pooling operation is also needed, i.e. the features extracted by convolution from the unit target image data are passed to the pooling layer, where aggregate statistics are computed over them. The order of magnitude of these pooled statistics is far lower than that of the convolved features, while the classification result is also improved. Common pooling methods include average pooling and maximum pooling: average pooling computes one average feature of a feature set to represent that set, while maximum pooling extracts the largest feature of a feature set to represent it (both contrasted in the numeric sketch below). Through convolution and pooling, the most salient target features of the unit target image data can be extracted, and the number of such target features is small. It is worth noting that the convolutional neural network may have one or multiple convolutional layers, and likewise one or multiple pooling layers.
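The two pooling operations can be contrasted on a toy feature map; the following NumPy sketch uses an assumed 4×4 map with 2×2 pooling windows.

```python
# Sketch: maximum vs. average pooling over 2x2 blocks of a 4x4 feature map.
import numpy as np

fmap = np.arange(16, dtype=float).reshape(4, 4)
blocks = fmap.reshape(2, 2, 2, 2).swapaxes(1, 2).reshape(-1, 4)  # four 2x2 blocks
max_pooled = blocks.max(axis=1).reshape(2, 2)   # [[ 5.,  7.], [13., 15.]]
avg_pooled = blocks.mean(axis=1).reshape(2, 2)  # [[ 2.5,  4.5], [10.5, 12.5]]
```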
the comparison subunit 2023 is configured to compare the target features with the feature sets in the convolutional neural network model for similarity;
the determination subunit 2024 is configured to obtain, in the feature sets, the label information corresponding to the feature with the greatest similarity as the first label information matching the unit target image data.
The specific function implementations of the second input subunit 2021, the feature extraction subunit 2022, the comparison subunit 2023, and the determination subunit 2024 may refer to steps S202–S205 in the embodiment corresponding to Fig. 3 above, and are not repeated here.
In the embodiment of the present invention, target image data is obtained and input into a convolutional neural network model; using the classification capability of the model, a target label information set matching the target image data is obtained, the label information in the set marking the feature categories of the target image data; in the image special effect library, target special effect processing information corresponding to the target label information set is extracted, special effect processing is applied to the target image data according to that information, and the resulting target special effect image data is displayed. Thus, throughout the special effect processing of the target image data, the feature categories of the target image data are recognized and the special effects applied automatically, without manual involvement, avoiding the tedious steps of applying image special effects by hand and improving the efficiency of image data processing. Moreover, by recognizing multiple feature categories in the target image data at the same time and executing the target special effect processing information corresponding to each piece of label information in the target label information set, the special effect processing modes are enriched and the special effect processing result is improved.
Refer to Fig. 9, a structural diagram of another image processing apparatus provided by an embodiment of the present invention. As shown in Fig. 9, the image processing apparatus 1000 can be applied to the terminal device in the embodiment corresponding to Fig. 2 above, and may include: a processor 1001, a network interface 1004, and a memory 1005; in addition, the image processing apparatus 1000 may further include a user interface 1003 and at least one communication bus 1002. The communication bus 1002 is used to implement connection and communication among these components. The user interface 1003 may include a display (Display) and a keyboard (Keyboard), and may optionally include a standard wired interface and a wireless interface. The network interface 1004 may optionally include a standard wired interface and a wireless interface (such as a WI-FI interface). The memory 1005 may be a high-speed RAM memory or a non-volatile memory, for example at least one disk memory; it may optionally also be at least one storage device located remotely from the aforementioned processor 1001. As shown in Fig. 9, the memory 1005, as a computer storage medium, may include an operating system, a network communication module, a user interface module, and a device control application program.
In the image processing apparatus 1000 shown in Fig. 9, the network interface 1004 can provide the network communication function, the user interface 1003 mainly provides an input interface for the user, and the processor 1001 can be used to invoke the device control application program stored in the memory 1005, to implement:
obtaining target image data, and inputting the target image data into the convolutional neural network model;
obtaining, in the convolutional neural network model, a target label information set matching the target image data, where the label information in the target label information set is used to mark the feature categories of the target image data;
in the image special effect library, extracting target special effect processing information corresponding to the target label information set, performing special effect processing on the target image data according to the target special effect processing information, obtaining target special effect image data, and displaying the target special effect image data.
In one embodiment, when obtaining, in the convolutional neural network model, the target label information set matching the target image data, the processor 1001 specifically performs the following steps:
recognizing the target object contours in the target image data, and dividing the target image data into at least one piece of unit target image data according to the target object contours;
inputting the unit target image data into the convolutional neural network model, obtaining in the convolutional neural network model first label information matching the unit target image data, and adding the first label information to the target label information set.
In one embodiment, when inputting the unit target image data into the convolutional neural network model and obtaining in the convolutional neural network model the first label information matching the unit target image data, the processor 1001 specifically performs the following steps:
inputting the unit target image data into the convolutional neural network model;
Specifically, to improve the accuracy of image recognition, the segmented unit target image data may be adjusted to a fixed size, and the resized unit target image data is then input into the input layer of the convolutional neural network model. The segmented pieces of unit target image data may be input into the convolutional neural network model in random order, or in the order in which they were obtained. It should be understood that the convolutional neural network has been built in advance, and the parameter size of the input layer equals the size of the resized unit target image data.
extracting the target features of the unit target image data through convolution and pooling operations;
Specifically, after the unit target image data is fed into the input layer of the convolutional neural network, it enters the convolutional layer. A small patch of the unit target image data is first selected at random as a sample, and some features are learned from this small sample; this sample is then slid as a window across all pixel regions of the unit target image data, that is, the features learned from the sample are convolved with the unit target image data, yielding the most salient features at different positions of the unit target image data. After the convolution operation, the features of the unit target image data have been extracted, but the number of features extracted by convolution alone is large; to reduce the amount of computation, a pooling operation is also needed, i.e. the features extracted from the unit target image data by convolution are passed to the pooling layer, where aggregate statistics are computed over them. The order of magnitude of these pooled statistics is far lower than that of the convolved features, while the classification result is also improved. Common pooling methods include average pooling and maximum pooling: average pooling computes one average feature of a feature set to represent that set, while maximum pooling extracts the largest feature of a feature set to represent it. Through convolution and pooling, the most salient target features of the unit target image data can be extracted, and the number of such target features is small. It is worth noting that the convolutional neural network may have one or multiple convolutional layers, and likewise one or multiple pooling layers.
comparing the target features with the feature sets in the convolutional neural network model for similarity;
Specifically, the output layer of the trained convolutional neural network model is a classifier; the number of its nodes equals the number of sample label information classes and is consistent with the number of feature categories of the sample image data, and each classification node of the classifier contains the feature set extracted from the sample image data of the corresponding sample label information class, so there are as many feature sets as there are sample label information classes. After the target features of the unit target image data have been extracted, they are compared for similarity with the feature sets at the output layer of the convolutional neural network model. The similarity may be measured by distance: the smaller the distance between two features, the greater their similarity. The distance metric may be the Euclidean distance, the Mahalanobis distance, or the Hamming distance.
obtaining, in the feature sets, the label information corresponding to the feature with the greatest similarity as the first label information matching the unit target image data. A nearest-feature lookup of this kind is sketched below.
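A sketch of this comparison follows, assuming each label class holds a set of feature vectors and using the Euclidean distance, where a smaller distance means a greater similarity; the data layout is an assumption.

```python
# Sketch: pick the label whose feature set contains the closest feature.
import numpy as np

def first_label(target_feature, feature_sets):
    """feature_sets: {label: ndarray of shape (n_features, dim)}"""
    best_label, best_dist = None, float("inf")
    for label, feats in feature_sets.items():
        dist = np.linalg.norm(feats - target_feature, axis=1).min()
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label
```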
In one embodiment, when, in the image special effect library, extracting the target special effect processing information corresponding to the target label information set, performing special effect processing on the target image data according to the target special effect processing information, obtaining the target special effect image data, and displaying the target special effect image data, the processor 1001 specifically performs the following steps:
in the image special effect library, extracting target special effect processing information corresponding to the first label information in the target label information set;
locating the first image region where the unit target image data is located, and adding the multimedia material in the target special effect processing information corresponding to the first label information to the first image region;
determining the target image data with the multimedia material added as the target special effect image data, and displaying the target special effect image data.
In one embodiment, the processor 1001 further performs the following steps:
obtaining auxiliary information corresponding to the target image data;
obtaining second label information matching the auxiliary information, and adding the second label information to the target label information set;
the auxiliary information including at least one of an environmental parameter, a device status parameter, and an image remark keyword;
then before the step of determining the target image data with the multimedia material added as the target special effect image data and displaying the target special effect image data, the processor 1001 further performs the following steps:
in the image special effect library, extracting target special effect processing information corresponding to the second label information in the target label information set;
recognizing, in the target image data, the second image region associated with the auxiliary information, and adding the multimedia material in the target special effect processing information corresponding to the second label information to the second image region.
In one embodiment, when determining the target image data with the multimedia material added as the target special effect image data and displaying the target special effect image data, the processor 1001 specifically performs the following steps:
if the target special effect processing information further includes an image adjustment parameter, adjusting the image parameters of the target image data according to the image adjustment parameter, determining the target image data with the multimedia material added and the parameters adjusted as the target special effect image data, and displaying the target special effect image data.
In one embodiment, the processor 1001 further performs the following steps:
if a special effect switching instruction is received, randomly selecting target special effect processing information from the image special effect library as random target special effect processing information, and performing special effect update processing on the target special effect image data according to the random target special effect processing information to obtain updated target special effect image data.
In one embodiment, the processor 1001 further performs the following steps:
obtaining sample image data and sample label information corresponding to the sample image data, the sample label information being used to mark the feature categories of the sample image data;
building the convolutional neural network model according to the mapping relations between the sample image data and the sample label information.
In the embodiment of the present invention, target image data is obtained and input into a convolutional neural network model; using the classification capability of the model, a target label information set matching the target image data is obtained, the label information in the set marking the feature categories of the target image data; in the image special effect library, target special effect processing information corresponding to the target label information set is extracted, special effect processing is applied to the target image data according to that information, and the resulting target special effect image data is displayed. Thus, throughout the special effect processing of the target image data, the feature categories of the target image data are recognized and the special effects applied automatically, without manual involvement, avoiding the tedious steps of applying image special effects by hand and improving the efficiency of image data processing. Moreover, by recognizing multiple feature categories in the target image data at the same time and executing the target special effect processing information corresponding to each piece of label information in the target label information set, the special effect processing modes are enriched and the special effect processing result is improved.
It should be understood that the image processing apparatus 1000 described in the embodiment of the present invention can carry out the description of the image processing method in any of the embodiments corresponding to Figs. 2 to 6 above, and can also carry out the description of the image processing apparatus 1 in the embodiments corresponding to Fig. 7 or Fig. 8 above, which is not repeated here. The description of the beneficial effects achieved by the same method is likewise not repeated.
In addition, it should be pointed out that an embodiment of the present invention further provides a computer storage medium, the computer storage medium storing the computer program executed by the aforementioned image processing apparatus 1, the computer program including program instructions. When the processor executes the program instructions, it can carry out the description of the image processing method in any of the embodiments corresponding to Figs. 2 to 6 above, which is therefore not repeated here; the description of the beneficial effects achieved by the same method is likewise not repeated. For technical details not disclosed in the computer storage medium embodiment of the present invention, refer to the description of the method embodiments of the present invention.
Those of ordinary skill in the art will appreciate that all or part of the flows in the methods of the above embodiments can be implemented by instructing relevant hardware through a computer program; the program can be stored in a computer-readable storage medium and, when executed, may include the flows of the embodiments of the methods above. The storage medium may be a magnetic disk, an optical disc, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), or the like.
The above disclosure is only the preferred embodiments of the present invention and certainly cannot be taken to limit the scope of the claims of the present invention; equivalent variations made according to the claims of the present invention therefore still fall within the scope of the present invention.

Claims (15)

  1. An image processing method, characterized by comprising:
    obtaining target image data, and inputting the target image data into a convolutional neural network model;
    obtaining, in the convolutional neural network model, a target label information set matching the target image data, where the label information in the target label information set is used to mark the feature categories of the target image data;
    in an image special effect library, extracting target special effect processing information corresponding to the target label information set, performing special effect processing on the target image data according to the target special effect processing information, obtaining target special effect image data, and displaying the target special effect image data.
  2. The method according to claim 1, characterized in that obtaining, in the convolutional neural network model, the target label information set matching the target image data comprises:
    recognizing the target object contours in the target image data, and dividing the target image data into at least one piece of unit target image data according to the target object contours;
    inputting the unit target image data into the convolutional neural network model, obtaining in the convolutional neural network model first label information matching the unit target image data, and adding the first label information to the target label information set.
  3. The method according to claim 2, characterized in that inputting the unit target image data into the convolutional neural network model and obtaining in the convolutional neural network model the first label information matching the unit target image data comprises:
    inputting the unit target image data into the convolutional neural network model;
    extracting the target features of the unit target image data through convolution and pooling operations;
    comparing the target features with the feature sets in the convolutional neural network model for similarity;
    obtaining, in the feature sets, the label information corresponding to the feature with the greatest similarity as the first label information matching the unit target image data.
  4. The method according to claim 2 or 3, characterized in that, in the image special effect library, extracting the target special effect processing information corresponding to the target label information set, performing special effect processing on the target image data according to the target special effect processing information, obtaining the target special effect image data, and displaying the target special effect image data comprises:
    in the image special effect library, extracting target special effect processing information corresponding to the first label information in the target label information set;
    locating the first image region where the unit target image data is located, and adding the multimedia material in the target special effect processing information corresponding to the first label information to the first image region;
    determining the target image data with the multimedia material added as the target special effect image data, and displaying the target special effect image data.
  5. The method according to claim 4, characterized by further comprising:
    obtaining auxiliary information corresponding to the target image data;
    obtaining second label information matching the auxiliary information, and adding the second label information to the target label information set;
    the auxiliary information including at least one of an environmental parameter, a device status parameter, and an image remark keyword;
    then before the step of determining the target image data with the multimedia material added as the target special effect image data and displaying the target special effect image data, further comprising:
    in the image special effect library, extracting target special effect processing information corresponding to the second label information in the target label information set;
    recognizing, in the target image data, the second image region associated with the auxiliary information, and adding the multimedia material in the target special effect processing information corresponding to the second label information to the second image region.
  6. The method according to claim 4, characterized in that determining the target image data with the multimedia material added as the target special effect image data and displaying the target special effect image data comprises:
    if the target special effect processing information further includes an image adjustment parameter, adjusting the image parameters of the target image data according to the image adjustment parameter, determining the target image data with the multimedia material added and the parameters adjusted as the target special effect image data, and displaying the target special effect image data.
  7. The method according to claim 1, characterized by further comprising:
    if a special effect switching instruction is received, randomly selecting special effect processing information from the image special effect library as random special effect processing information, and performing special effect update processing on the target special effect image data according to the random special effect processing information to obtain updated target special effect image data.
  8. The method according to claim 1, characterized by further comprising:
    obtaining sample image data and sample label information corresponding to the sample image data, the sample label information being used to mark the feature categories of the sample image data;
    building the convolutional neural network model according to the mapping relations between the sample image data and the sample label information.
  9. An image processing apparatus, characterized by comprising:
    a first input module, configured to obtain target image data and input the target image data into a convolutional neural network model;
    a target label acquisition module, configured to obtain, in the convolutional neural network model, a target label information set matching the target image data, where the label information in the target label information set is used to mark the feature categories of the target image data;
    a special effect processing module, configured to, in an image special effect library, extract target special effect processing information corresponding to the target label information set, perform special effect processing on the target image data according to the target special effect processing information, obtain target special effect image data, and display the target special effect image data.
  10. The apparatus according to claim 9, wherein the target label acquisition module comprises:
    an image segmentation unit, configured to identify a target object contour in the destination image data, and divide the destination image data into at least one piece of unit destination image data according to the target object contour;
    a first label acquisition unit, configured to input the unit destination image data into the convolutional neural network model, obtain, in the convolutional neural network model, first label information that matches the unit destination image data, and add the first label information to the target label information set.
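The image segmentation unit of claim 10 could be realized with classical contour detection; the OpenCV sketch below is one such realization, with illustrative thresholds, and is not asserted to be the patented segmentation.

```python
import cv2

def split_by_contours(image, min_area: int = 500):
    """Return one crop ("unit destination image data") per detected object contour."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    units = []
    for contour in contours:
        x, y, w, h = cv2.boundingRect(contour)
        if w * h >= min_area:                      # ignore tiny fragments
            units.append(image[y:y + h, x:x + w])  # one crop per object
    return units
```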
  11. The apparatus according to claim 10, wherein the first label acquisition unit comprises:
    a second input subunit, configured to input the unit destination image data into the convolutional neural network model;
    a feature extraction subunit, configured to extract a target feature of the unit destination image data through convolution and pooling operations;
    a comparison subunit, configured to compare the target feature against a feature set in the convolutional neural network model for similarity;
    a determination subunit, configured to obtain, from the feature set, the label information corresponding to the feature with the highest similarity as the first label information that matches the unit destination image data.
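Claim 11's comparison and determination subunits amount to a nearest-neighbour lookup over stored feature vectors. A sketch using cosine similarity; the similarity measure and the dict layout are our assumptions, since the claims do not fix either.

```python
import numpy as np

def best_label(target_feature: np.ndarray, feature_set: dict) -> str:
    """feature_set maps label -> stored feature vector; return the label
    whose feature is most similar to the extracted target feature."""
    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    return max(feature_set, key=lambda label: cosine(target_feature, feature_set[label]))
```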
  12. The apparatus according to claim 11, wherein the special effect processing module comprises:
    a first extraction unit, configured to extract, from the image special effect library, target special effect processing information corresponding to the first label information in the target label information set;
    a first adding unit, configured to locate the first image region where the unit destination image data is located, and add the multimedia material in the target special effect processing information corresponding to the first label information to the first image region;
    a display unit, configured to determine the destination image data to which the multimedia material has been added as the target special effect image data, and display the target special effect image data.
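The first adding unit of claim 12 pastes the multimedia material into the located region. A Pillow sketch with invented region coordinates and alpha handling, offered only as one way this could look:

```python
from PIL import Image

def add_material(base: Image.Image, material: Image.Image, region: tuple) -> Image.Image:
    """region = (left, top) of the first image region within the base image."""
    result = base.copy()
    mask = material if material.mode == "RGBA" else None  # honor sticker transparency
    result.paste(material, region, mask)
    return result

# e.g. add_material(photo, sticker, (40, 120)) overlays the sticker on the region.
```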
  13. The apparatus according to claim 12, further comprising:
    an auxiliary information acquisition module, configured to obtain auxiliary information corresponding to the destination image data;
    a second label acquisition module, configured to obtain second label information that matches the auxiliary information, and add the second label information to the target label information set;
    wherein the auxiliary information comprises at least one of an environmental parameter, a device status parameter, and an image remark keyword;
    and the special effect processing module further comprises:
    a second extraction unit, configured to extract, from the image special effect library, target special effect processing information corresponding to the second label information in the target label information set;
    a second adding unit, configured to identify, in the destination image data, a second image region associated with the auxiliary information, and add the multimedia material in the target special effect processing information corresponding to the second label information to the second image region.
  14. An image processing apparatus, comprising a processor and a memory;
    the processor being connected to the memory, wherein the memory is configured to store program code, and the processor is configured to call the program code to perform the method according to any one of claims 1 to 8.
  15. A computer storage medium, the computer storage medium storing a computer program, the computer program comprising program instructions that, when executed by a processor, perform the method according to any one of claims 1 to 8.
CN201711243948.XA 2017-11-30 2017-11-30 Image processing method and device Active CN107993191B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711243948.XA CN107993191B (en) 2017-11-30 2017-11-30 Image processing method and device

Publications (2)

Publication Number Publication Date
CN107993191A true CN107993191A (en) 2018-05-04
CN107993191B CN107993191B (en) 2023-03-21

Family

ID=62034835

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711243948.XA Active CN107993191B (en) 2017-11-30 2017-11-30 Image processing method and device

Country Status (1)

Country Link
CN (1) CN107993191B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2008059006A (en) * 2006-08-29 2008-03-13 Dainippon Printing Co Ltd Image synthesizing apparatus, program, recording medium
CN103810504A (en) * 2014-01-14 2014-05-21 三星电子(中国)研发中心 Image processing method and device
CN105049959A (en) * 2015-07-08 2015-11-11 腾讯科技(深圳)有限公司 Multimedia file playing method and device
CN107025457A (en) * 2017-03-29 2017-08-08 腾讯科技(深圳)有限公司 A kind of image processing method and device
CN107220667A (en) * 2017-05-24 2017-09-29 北京小米移动软件有限公司 Image classification method, device and computer-readable recording medium

Cited By (44)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108805838B (en) * 2018-06-05 2021-03-02 Oppo广东移动通信有限公司 Image processing method, mobile terminal and computer readable storage medium
CN108765278A (en) * 2018-06-05 2018-11-06 Oppo广东移动通信有限公司 A kind of image processing method, mobile terminal and computer readable storage medium
CN108805838A (en) * 2018-06-05 2018-11-13 Oppo广东移动通信有限公司 A kind of image processing method, mobile terminal and computer readable storage medium
CN108765278B (en) * 2018-06-05 2023-04-07 Oppo广东移动通信有限公司 Image processing method, mobile terminal and computer readable storage medium
CN108764370B (en) * 2018-06-08 2021-03-12 Oppo广东移动通信有限公司 Image processing method, image processing device, computer-readable storage medium and computer equipment
CN108764370A (en) * 2018-06-08 2018-11-06 Oppo广东移动通信有限公司 Image processing method, device, computer readable storage medium and computer equipment
CN108846351A (en) * 2018-06-08 2018-11-20 Oppo广东移动通信有限公司 Image processing method, device, electronic equipment and computer readable storage medium
WO2019242329A1 (en) * 2018-06-20 2019-12-26 北京七鑫易维信息技术有限公司 Convolutional neural network training method and device
CN109102484A (en) * 2018-08-03 2018-12-28 北京字节跳动网络技术有限公司 Method and apparatus for handling image
CN109102484B (en) * 2018-08-03 2021-08-10 北京字节跳动网络技术有限公司 Method and apparatus for processing image
CN109525872A (en) * 2018-09-10 2019-03-26 杭州联驱科技有限公司 Display screen prebrowsing system and method for previewing
CN109525872B (en) * 2018-09-10 2022-01-04 杭州芯讯科技有限公司 Display screen preview system and preview method
CN109325443A (en) * 2018-09-19 2019-02-12 南京航空航天大学 A kind of face character recognition methods based on the study of more example multi-tag depth migrations
CN109151318A (en) * 2018-09-28 2019-01-04 成都西纬科技有限公司 A kind of image processing method, device and computer storage medium
CN109389660A (en) * 2018-09-28 2019-02-26 百度在线网络技术(北京)有限公司 Image generating method and device
CN111107259A (en) * 2018-10-25 2020-05-05 阿里巴巴集团控股有限公司 Image acquisition method and device and electronic equipment
CN109167936A (en) * 2018-10-29 2019-01-08 Oppo广东移动通信有限公司 A kind of image processing method, terminal and storage medium
CN109710255A (en) * 2018-12-24 2019-05-03 网易(杭州)网络有限公司 Effect processing method, special effect processing device, electronic equipment and storage medium
CN110035227A (en) * 2019-03-25 2019-07-19 维沃移动通信有限公司 Special effect display methods and terminal device
CN110163810B (en) * 2019-04-08 2023-04-25 腾讯科技(深圳)有限公司 Image processing method, device and terminal
CN110163810A (en) * 2019-04-08 2019-08-23 腾讯科技(深圳)有限公司 A kind of image processing method, device and terminal
CN110008922A (en) * 2019-04-12 2019-07-12 腾讯科技(深圳)有限公司 Image processing method, unit, medium for terminal device
CN110377768B (en) * 2019-06-10 2022-03-08 万翼科技有限公司 Intelligent graph recognition system and method
CN110377768A (en) * 2019-06-10 2019-10-25 万翼科技有限公司 It is a kind of intelligently to know drawing system and method
CN112116690A (en) * 2019-06-19 2020-12-22 腾讯科技(深圳)有限公司 Video special effect generation method and device and terminal
CN110708594A (en) * 2019-09-26 2020-01-17 三星电子(中国)研发中心 Content image generation method and system
US11810336B2 (en) 2019-10-14 2023-11-07 Beijing Bytedance Network Technology Co., Ltd. Object display method and apparatus, electronic device, and computer readable storage medium
CN110807728A (en) * 2019-10-14 2020-02-18 北京字节跳动网络技术有限公司 Object display method and device, electronic equipment and computer-readable storage medium
CN113139893B (en) * 2020-01-20 2023-10-03 北京达佳互联信息技术有限公司 Image translation model construction method and device and image translation method and device
CN113139893A (en) * 2020-01-20 2021-07-20 北京达佳互联信息技术有限公司 Image translation model construction method and device and image translation method and device
CN111368127A (en) * 2020-03-06 2020-07-03 腾讯科技(深圳)有限公司 Image processing method, image processing device, computer equipment and storage medium
CN111368127B (en) * 2020-03-06 2023-03-24 腾讯科技(深圳)有限公司 Image processing method, image processing device, computer equipment and storage medium
CN111667562B (en) * 2020-05-07 2023-07-28 深圳思为科技有限公司 Picture material-based dynamic effect interface generation method and device
CN111667562A (en) * 2020-05-07 2020-09-15 深圳思为科技有限公司 Dynamic effect interface generation method and device based on picture materials
CN111626258A (en) * 2020-06-03 2020-09-04 上海商汤智能科技有限公司 Sign-in information display method and device, computer equipment and storage medium
CN111626258B (en) * 2020-06-03 2024-04-16 上海商汤智能科技有限公司 Sign-in information display method and device, computer equipment and storage medium
CN112927349A (en) * 2021-02-22 2021-06-08 北京市商汤科技开发有限公司 Three-dimensional virtual special effect generation method and device, computer equipment and storage medium
CN112927349B (en) * 2021-02-22 2024-03-26 北京市商汤科技开发有限公司 Three-dimensional virtual special effect generation method and device, computer equipment and storage medium
CN112884909A (en) * 2021-02-23 2021-06-01 浙江商汤科技开发有限公司 AR special effect display method and device, computer equipment and storage medium
CN113473017A (en) * 2021-07-01 2021-10-01 北京字跳网络技术有限公司 Image processing method, device, equipment and storage medium
CN113473019A (en) * 2021-07-01 2021-10-01 北京字跳网络技术有限公司 Image processing method, device, equipment and storage medium
CN113873168A (en) * 2021-10-27 2021-12-31 维沃移动通信有限公司 Shooting method, shooting device, electronic equipment and medium
CN114416260A (en) * 2022-01-20 2022-04-29 北京字跳网络技术有限公司 Image processing method, image processing device, electronic equipment and storage medium
CN114416260B (en) * 2022-01-20 2024-06-04 北京字跳网络技术有限公司 Image processing method, device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN107993191B (en) 2023-03-21

Similar Documents

Publication Publication Date Title
CN107993191A (en) A kind of image processing method and device
US12001475B2 (en) Mobile image search system
US10956784B2 (en) Neural network-based image manipulation
CN105354248B (en) The recognition methods of distributed image low-level image feature and system based on gray scale
US20220383053A1 (en) Ephemeral content management
CN106599925A (en) Plant leaf identification system and method based on deep learning
CN109173263A (en) A kind of image processing method and device
WO2022121485A1 (en) Image multi-tag classification method and apparatus, computer device, and storage medium
CN102741840B (en) For the method and apparatus to individual scene modeling
CN110097616B (en) Combined drawing method and device, terminal equipment and readable storage medium
CN105117399B (en) Image searching method and device
CN112328823A (en) Training method and device for multi-label classification model, electronic equipment and storage medium
WO2018152822A1 (en) Method and device for generating album, and mobile terminal
CN104951554A (en) Method for matching landscape with verses according with artistic conception of landscape
CN107729946A (en) Picture classification method, device, terminal and storage medium
CN108984555A (en) User Status is excavated and information recommendation method, device and equipment
JP2018169972A (en) Object detection device, detection model generator, program, and method capable of learning based on search result
US11200650B1 (en) Dynamic image re-timing
CN112069338A (en) Picture processing method and device, electronic equipment and storage medium
WO2019244276A1 (en) Search system, search method, and program
CN109409423A (en) A kind of image-recognizing method, device, terminal and readable storage medium storing program for executing
CN112069342A (en) Image classification method and device, electronic equipment and storage medium
CN112069335A (en) Image classification method and device, electronic equipment and storage medium
CN110633377A (en) Picture cleaning method and device
CN106469437B (en) Image processing method and image processing apparatus

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant