CN107993191B - Image processing method and device

Publication number: CN107993191B (granted publication of CN107993191A)
Application number: CN201711243948.XA
Inventor: 李科慧
Assignee: Tencent Technology Shenzhen Co Ltd
Original language: Chinese (zh)
Legal status: Active
Classification: G06T3/04

Abstract

The embodiment of the invention discloses an image processing method and device, wherein the method comprises the following steps: acquiring target image data and inputting the target image data into a convolutional neural network model; acquiring a target label information set matched with the target image data from the convolutional neural network model, wherein label information in the target label information set is used for marking the feature category of the target image data; and extracting target special effect processing information corresponding to the target label information set from an image special effect library, performing special effect processing on the target image data according to the target special effect processing information to obtain target special effect image data, and displaying the target special effect image data. By adopting the invention, the complicated steps of manually applying image special effects can be avoided, thereby improving the efficiency of image data processing.

Description

Image processing method and device
Technical Field
The present invention relates to the field of computer technologies, and in particular, to an image processing method and apparatus.
Background
With the continuous development of image technology and the emergence of various new image applications, users photograph and record video with growing frequency. At the same time, the hue, saturation and brightness of images and videos shot on terminal devices have improved markedly, and plain image display and video playback can no longer satisfy users' growing consumption, entertainment and security needs; higher requirements are therefore placed on the speed and quality of subsequent image processing.
Existing follow-up processing of images/videos is mainly accomplished by adding special effect materials or adjusting image parameters. To add special effect materials, a user can select an existing sticker and paste it onto any area of the image/video, draw graffiti in the image/video by hand, add background music to the image/video, and the like. To adjust image parameters, the user can tune parameters for the target object and the background in the image/video so as to highlight its content, for example beautifying or finely adjusting the facial features of a face in the image/video, or sharpening the edges of a building in the image/video to emphasize its overall contour.
Therefore, processing images/videos by manually adding special effect materials or manually adjusting image parameters is inefficient, and it is difficult to post-process images/videos promptly and efficiently.
Disclosure of Invention
The embodiment of the invention provides an image processing method and device, which can improve the processing efficiency of images/videos.
One aspect of the present invention provides an image processing method, including:
acquiring target image data, and inputting the target image data into a convolutional neural network model;
acquiring a target label information set matched with the target image data from the convolutional neural network model, wherein label information in the target label information set is used for marking the feature category of the target image data;
and extracting target special effect processing information corresponding to the target label information set from an image special effect library, carrying out special effect processing on the target image data according to the target special effect processing information to obtain target special effect image data, and displaying the target special effect image data.
Wherein the obtaining of the target label information set matched with the target image data in the convolutional neural network model comprises:
identifying a target object contour in the target image data, and dividing the target image data into at least one unit target image data according to the target object contour;
inputting the unit target image data into the convolutional neural network model, acquiring first tag information matched with the unit target image data in the convolutional neural network model, and adding the first tag information to the target tag information set.
Wherein the inputting the unit target image data into the convolutional neural network model, and acquiring first tag information matched with the unit target image data in the convolutional neural network model, includes:
inputting the unit target image data into the convolutional neural network model;
extracting the target features of the unit target image data through convolution operation and pooling operation;
comparing the similarity of the target feature with a feature set in the convolutional neural network model;
and acquiring label information corresponding to the features with the maximum similarity in the feature set as first label information matched with the unit target image data.
Extracting target special effect processing information corresponding to the target label information set from the image special effect library, performing special effect processing on the target image data according to the target special effect processing information to obtain target special effect image data, and displaying the target special effect image data, wherein the method comprises the following steps:
extracting target special effect processing information corresponding to the first label information in the target label information set from the image special effect library;
searching a first image area where the unit target image data is located, and adding a multimedia material in the target special effect processing information corresponding to the first label information to the first image area;
and determining the target image data added with the multimedia material as the target special effect image data, and displaying the target special effect image data.
Wherein, still include:
acquiring auxiliary information corresponding to the target image data;
acquiring second label information matched with the auxiliary information, and adding the second label information to the target label information set;
the auxiliary information comprises at least one of an environment parameter, an equipment state parameter and an image remark information keyword;
before the step of determining the target image data to which the multimedia material is added as the target special effect image data and displaying the target special effect image data, the method further includes:
extracting target special effect processing information corresponding to the second label information in the target label information set from the image special effect library;
and identifying a second image area in the target image data, which is associated with the auxiliary information, and adding the multimedia material in the target special effect processing information corresponding to the second label information to the second image area.
Determining target image data to which the multimedia material is added as the target special effect image data, and displaying the target special effect image data, wherein the determining comprises:
and if the target special effect processing information further comprises image adjustment parameters, performing image parameter adjustment on the target image data according to the image adjustment parameters, determining the target image data added with the multimedia material and subjected to parameter adjustment as the target special effect image data, and displaying the target special effect image data.
Wherein, still include:
if a special effect switching instruction is obtained, randomly selecting special effect processing information in the image special effect library as random special effect processing information, and performing special effect updating processing on the target special effect image data according to the random special effect processing information to obtain updated target special effect image data.
Wherein, still include:
acquiring sample image data and sample label information corresponding to the sample image data, wherein the sample label information is used for marking the feature category of the sample image data;
and constructing the convolutional neural network model according to the mapping relation between the sample image data and the sample label information.
Another aspect of the present invention provides an image processing apparatus comprising:
the first input module is used for acquiring target image data and inputting the target image data into a convolutional neural network model;
a target tag obtaining module, configured to obtain a target tag information set matched with the target image data in the convolutional neural network model, where tag information in the target tag information set is used to mark a feature category of the target image data;
and the special effect processing module is used for extracting target special effect processing information corresponding to the target label information set in an image special effect library, carrying out special effect processing on the target image data according to the target special effect processing information to obtain target special effect image data, and displaying the target special effect image data.
Wherein the target tag obtaining module comprises:
an image segmentation unit, configured to identify a target object contour in the target image data, and segment the target image data into at least one unit target image data according to the target object contour;
a first tag obtaining unit, configured to input the unit target image data into the convolutional neural network model, obtain first tag information that matches the unit target image data in the convolutional neural network model, and add the first tag information to the target tag information set.
Wherein the first tag obtaining unit includes:
a second input subunit, configured to input the unit target image data into the convolutional neural network model;
a feature extraction subunit, configured to extract a target feature of the unit target image data through convolution operation and pooling operation;
the comparison subunit is used for carrying out similarity comparison on the target feature and a feature set in the convolutional neural network model;
and the determining subunit is configured to acquire, in the feature set, tag information corresponding to a feature with the largest similarity as first tag information matched with the unit target image data.
Wherein, the special effect processing module comprises:
a first extracting unit configured to extract, in the image special effect library, target special effect processing information corresponding to the first tag information in the target tag information set;
the first adding unit is used for searching a first image area where the unit target image data is located and adding a multimedia material in the target special effect processing information corresponding to the first label information to the first image area;
and the display unit is used for determining the target image data added with the multimedia material as the target special effect image data and displaying the target special effect image data.
Wherein, still include:
the auxiliary information acquisition module is used for acquiring auxiliary information corresponding to the target image data;
the second tag acquisition module is used for acquiring second tag information matched with the auxiliary information and adding the second tag information to the target tag information set;
the auxiliary information comprises at least one of an environment parameter, an equipment state parameter and an image remark information keyword;
the special effect processing module further comprises:
a second extracting unit, configured to extract, in the image special effect library, target special effect processing information corresponding to the second tag information in the target tag information set;
and the second adding unit is used for identifying a second image area in the target image data, which is associated with the auxiliary information, and adding the multimedia material in the target special effect processing information corresponding to the second label information to the second image area.
The display unit is specifically configured to, if the target special effect processing information further includes an image adjustment parameter, perform image parameter adjustment on the target image data according to the image adjustment parameter, determine the target image data to which the multimedia material is added and which is subjected to parameter adjustment as the target special effect image data, and display the target special effect image data.
Wherein, still include:
and the random selection module is used for randomly selecting special effect processing information in the image special effect library as random special effect processing information if a special effect switching instruction is obtained, and performing special effect updating processing on the target special effect image data according to the random special effect processing information to obtain updated target special effect image data.
Wherein, still include:
the sample label acquiring module is used for acquiring sample image data and sample label information corresponding to the sample image data, wherein the sample label information is used for marking the feature category of the sample image data;
and the construction module is used for constructing the convolutional neural network model according to the mapping relation between the sample image data and the sample label information.
Another aspect of the present invention provides an image processing apparatus comprising: a processor and a memory;
the processor is connected to a memory, wherein the memory is used for storing program codes, and the processor is used for calling the program codes to execute the method according to an aspect of the embodiment of the invention.
Another aspect of the embodiments of the present invention provides a computer storage medium storing a computer program, the computer program comprising program instructions that, when executed by a processor, perform a method as in one aspect of the embodiments of the present invention.
The embodiment of the invention obtains target image data and inputs the target image data into a convolutional neural network model; acquiring a target label information set matched with the target image data in the convolutional neural network model by utilizing a classification function of a convolutional neural network, wherein label information in the target label information set is used for marking the feature category of the target image data; and extracting target special effect processing information corresponding to the target label information set from an image special effect library, carrying out special effect processing on the target image data according to the target special effect processing information to obtain target special effect image data, and displaying the target special effect image data. Therefore, in the whole process of carrying out special effect processing on the target image data, the characteristic type of the target image data can be automatically identified without manual participation, and the special effect processing is automatically carried out on the target image data, so that the complicated steps caused by manual image special effect processing can be avoided, and the efficiency of image data processing is improved.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts.
Fig. 1 is a schematic scene diagram of an image processing method according to an embodiment of the present invention;
FIG. 2 is a flowchart illustrating an image processing method according to an embodiment of the present invention;
fig. 2a and fig. 2b are schematic interface diagrams of an image processing method according to an embodiment of the present invention;
fig. 3 is a schematic flowchart of acquiring a target tag information set according to an embodiment of the present invention;
FIG. 4 is a flow chart of another image processing method according to an embodiment of the present invention;
fig. 4a and 4b are schematic interface diagrams of another image processing method provided by the embodiment of the invention;
FIG. 5 is a flowchart illustrating another exemplary image processing method according to the present invention;
FIGS. 5a and 5b are schematic interface diagrams of another image processing method according to an embodiment of the present invention;
FIG. 6 is a flowchart illustrating another exemplary image processing method according to an embodiment of the present invention;
FIG. 7 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present invention;
fig. 8 is a schematic structural diagram of a first tag obtaining unit according to an embodiment of the present invention;
fig. 9 is a schematic structural diagram of another image processing apparatus according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be obtained by a person skilled in the art without inventive step based on the embodiments of the present invention, are within the scope of protection of the present invention.
Fig. 1 is a scene schematic diagram of an image processing method according to an embodiment of the present invention. As shown in fig. 1, a user may open an application 100a (e.g., a photo application, a video application, etc.) storing images or videos on a smartphone and select a picture 200a from the application 100a as the target image data requiring special effect processing; alternatively, the user may open a camera application, take a photo or record a video in real time, and use the photo or a video frame of the video as the target image data. The picture 200a is input into the constructed convolutional neural network model, and the classification function of the model is used to acquire the label information corresponding to the picture 200a, namely the label information "ladies' handbag" and the label information "luxury goods", both of which are added to the target label information set. Target special effect processing information corresponding to the label information "ladies' handbag" is searched for in the locally stored image special effect library; this target special effect processing information is a picture sticker (i.e., a multimedia material of the picture type) containing the word "beautiful", to be added to the target image data. Similarly, the target special effect processing information corresponding to the label information "luxury goods" is a picture sticker containing the text "$88888". Accordingly, a picture sticker containing the word "beautiful" and a picture sticker containing the text "$88888" are added to the picture 200a. After the special effect processing is finished, the resulting target special effect image data is displayed on the screen of the smartphone for the user to preview. If the user is satisfied with the target special effect image data, clicking a "confirm" button saves it to a local folder of the smartphone or uploads it directly to a social network (such as WeChat Moments, Qzone, etc.); if the user is not satisfied, clicking a "cancel" button deletes it, special effect processing information is randomly selected from the image special effect library as new target special effect processing information to update the special effect of the target image data, and the updated target special effect image data is displayed on the screen of the smartphone.
The specific flow of performing special effect processing on an image may refer to the following embodiments corresponding to fig. 2 to 6.
Further, please refer to fig. 2, which is a flowchart illustrating an image processing method according to an embodiment of the present invention. As shown in fig. 2, the method may include:
step S101, acquiring target image data, and inputting the target image data into a convolutional neural network model;
Specifically, the terminal device may take a picture, or a video frame of a video, selected by a user from an album application or a video application as the target image data, and then input the target image data into a convolutional neural network model. The convolutional neural network model may include an input layer, a convolutional layer, a pooling layer, a fully-connected layer, and an output layer, so the target image data is first fed into the input layer; the convolutional neural network model is a feed-forward neural network that detects and identifies images by building a pattern classifier. The terminal device may include a mobile phone, a tablet computer, a notebook computer, a handheld computer, a mobile internet device (MID), a point of sale (POS) machine, a wearable device (e.g., a smart watch, a smart bracelet, etc.), or another terminal device with a function of storing image data or video data, and the target image data may be a picture or any video frame in a video.
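A minimal sketch of the layer sequence described above may help, assuming PyTorch; the layer sizes, the 224x224 input and the class count NUM_CLASSES are illustrative assumptions rather than values given in this embodiment:

```python
import torch
import torch.nn as nn

NUM_CLASSES = 10  # hypothetical number of feature categories

class TagClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),   # convolutional layer
            nn.ReLU(),
            nn.MaxPool2d(2),                              # pooling layer
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 56 * 56, NUM_CLASSES)  # fully-connected layer

    def forward(self, x):                      # x: (N, 3, 224, 224) image batch
        x = self.features(x)
        return self.classifier(x.flatten(1))   # output layer: one score per label

model = TagClassifier()
scores = model(torch.randn(1, 3, 224, 224))    # scores over the feature categories
```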
Step S102, a target label information set matched with the target image data is obtained in the convolutional neural network model, and label information in the target label information set is used for marking the feature category of the target image data;
Specifically, the target image data is input to the input layer of the convolutional neural network model, which has been trained on sample image data and the sample label information corresponding to that sample image data; that is, the model already has an image classification function. Label information matched with the target image data is therefore obtained using the feature extraction and classification functions of the convolutional neural network model, and the label information is added to the target label information set. Because of the diversity of target objects in the target image data (for example, the sky, a car, a person and a building may all be present), the target label information set matched with the target image data may contain one or more pieces of label information. Referring to fig. 4a, an interface schematic diagram of an image processing method according to an embodiment of the present invention: since the target image data in fig. 4a contains features of both a building and an animal, the label information obtained from the convolutional neural network model and matched with the target image data is the label information "building" and the label information "animal", and both are added to the target label information set. If there are multiple pieces of label information in the target label information set, the image area where the feature corresponding to each piece of label information is located must also be determined; in fig. 4a, the image area of the feature "puppy" corresponding to the label information "animal" is the image area 300a, and the image area of the feature "house" corresponding to the label information "building" is the image area 500a.
The label information in the target label information set is used to mark a feature category of the target image data, where the feature category may be a color feature or texture feature characterizing the object corresponding to an image or image area, a shape feature characterizing an essential attribute of the image, or a spatial relationship feature characterizing the positional relationship between multiple target objects in the image. The label information may be a number or other character information with a distinguishing, identifying meaning. For example, if the label information is the value 1, the corresponding feature category is "person"; if the label information is the value 2, the corresponding feature category is "building", and so on; in other words, there is a mapping relationship between label information and feature categories.
Step S103, extracting target special effect processing information corresponding to the target label information set from an image special effect library, performing special effect processing on the target image data according to the target special effect processing information to obtain target special effect image data, and displaying the target special effect image data.
Specifically, after determining an image area where a target tag information set and a feature corresponding to each tag information in the set are located, target special effect processing information corresponding to the target tag information set is extracted from an image special effect library, where the target special effect processing information may be multimedia material for adding to target image data (for example, when the target image data is a wedding scene picture, the target special effect processing information is a love picture for adding to the target image data, when the target image data is a rain scene picture, the target special effect processing information is a lightning picture for adding to the target image data, and when the target image data is a video frame in a video, the target special effect processing information is background music for adding to the target image data). The process of adjusting the image parameters of the target image data may be: the saturation of the target image data may be dimmed or the brightness of the target image data may be dimmed or dimmed (e.g., if the target object in the target image data is identified to include the sun, the target special effects processing information may include an image adjustment parameter for dimming the saturation of the target image data, and if the target object in the target image data is identified to include rain, the target special effects processing information may include an image adjustment parameter for decreasing the brightness of the target image data). And then, according to the extracted target special effect processing information and an image area where the feature corresponding to each tag information in the target tag information set is located, carrying out special effect processing on the corresponding image area, obtaining target special effect image data after the special effect processing is finished, and displaying the target special effect image data on a screen of the terminal equipment so that a user can preview the target special effect image data, wherein if the user is satisfied with the target special effect image data, the target special effect image data can be directly stored in a photo application or a video application.
The image special effect library may be stored in a local file of the terminal device, in which case the target special effect processing information is searched for and obtained directly from the local file; it may also be stored on a cloud server, in which case the cloud server is accessed over the network to obtain the target special effect processing information.
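A hedged sketch of such a library lookup follows; the tag names, file names and the cloud-fallback hook are illustrative assumptions, not details given in this embodiment:

```python
# label information -> target special effect processing information
EFFECT_LIBRARY = {
    "vehicle":  {"material": "exhaust.png"},
    "building": {"material": "flash.png"},
    "animal":   {"material": "footprint.png"},
    "sunshine": {"adjust": {"saturation": 1.2}},
}

def lookup_effect(tag, fetch_remote=None):
    """Try the local library first; fall back to a cloud lookup if one is supplied."""
    info = EFFECT_LIBRARY.get(tag)
    if info is None and fetch_remote is not None:
        info = fetch_remote(tag)  # e.g. an HTTP request to the cloud server
    return info
```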
Fig. 2a and fig. 2b are schematic diagrams of an interface of an image processing method according to an embodiment of the present invention. As shown in fig. 2a, target image data selected by a user is displayed on the screen of a terminal device and input into the convolutional neural network model; the tag information "vehicle" matched with the target image data is acquired from the convolutional neural network model and added to the target tag information set, and the image area where the feature "automobile" corresponding to the tag information "vehicle" is located is determined to be the image area 600a in fig. 2a. The target special effect processing information corresponding to the tag information "vehicle", found in the image special effect library, is an exhaust picture to be added to the target image data; after the terminal device extracts the exhaust picture from a local file or a cloud server, it adds the exhaust picture to the image area 600a in the target image data, obtaining the target special effect image data with the exhaust picture added, as shown in fig. 2b.
Further, please refer to fig. 3, which is a schematic flowchart illustrating a process of acquiring a target tag information set according to an embodiment of the present invention. As shown in fig. 3, the steps of step S201 to step S205 are specifically described for step S102 in the embodiment corresponding to fig. 2, that is, the steps of step S201 to step S205 are a specific flow for acquiring a target tag information set according to the embodiment of the present invention, and specifically may include the following steps:
step S201, identifying a target object contour in the target image data, and dividing the target image data into at least one unit target image data according to the target object contour;
Specifically, if the target image data is a color picture, the color features of the target image data can be extracted by computing a color histogram of the target image data (a statistical graph of the proportions of different colors in the whole image), so as to identify the contours of target objects in the target image data. Texture features of the target image data can also be extracted by calculating four key features (energy, inertia, entropy and correlation) of the gray level co-occurrence matrix, again to identify target object contours. The target image data is then divided into one or more unit target image data according to the identified target object contours, and the image area where each unit target image data is located is recorded at the same time so that special effect processing can later be applied to the corresponding area; each unit target image data is an image area of the target image data with distinctive properties. Referring to fig. 4b, an interface schematic diagram of another image processing method according to an embodiment of the present invention: if the target objects in the target image data include both an animal and a building, the target image data is divided into two unit target image data, one containing the animal feature image and the other containing the building feature image.
Alternatively, the target image data may be divided into one or more unit target image data by directly calling an image segmentation algorithm (for example, a threshold segmentation algorithm, a region segmentation algorithm, or an edge segmentation algorithm), with the image region where each unit target image data is located recorded in the same way.
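As a rough illustration of this segmentation step — an assumed OpenCV 4 sketch, not the algorithm claimed by this embodiment — the unit target image data and their image areas could be obtained as follows:

```python
import cv2

def split_into_units(bgr_image, min_area=500):
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    units = []
    for contour in contours:
        x, y, w, h = cv2.boundingRect(contour)
        if w * h >= min_area:                          # ignore tiny fragments
            units.append(((x, y, w, h), bgr_image[y:y+h, x:x+w]))
    return units                                       # [(image area, unit image), ...]
```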
For ease of understanding, the embodiment of the present invention takes a single unit target image data as an example in the following steps S202 to S205.
Step S202, inputting the unit target image data into the convolutional neural network model;
specifically, in order to improve the accuracy of image recognition, the segmented unit target image data may be resized to a fixed size, and then the resized unit target image data may be input to the input layer in the convolutional neural network model. The segmented unit target image data can be randomly input into the convolutional neural network model, and can also be sequentially input into the convolutional neural network model according to the sequence of acquiring the unit target image data. It is understood that the above-described convolutional neural network has been constructed in advance, and the parameter size of the input layer is equal to the size of the resized unit target image data.
Step S203, extracting the target characteristics of the unit target image data through convolution operation and pooling operation;
Specifically, after the unit target image data is input to the input layer of the convolutional neural network, it enters the convolutional layer. A small block of the unit target image data is first selected randomly as a sample, and some features are learned from this small sample; the sample is then slid as a window across all pixel regions of the unit target image data in sequence, that is, the features learned from the sample are convolved with the unit target image data, thereby obtaining the most significant features at different positions of the unit target image data. After the convolution operation is completed, features of the unit target image data have been extracted, but the number of features extracted by convolution alone is large. To reduce the amount of computation, a pooling operation is needed: the features extracted from the unit target image data by the convolution operation are transmitted to the pooling layer, and aggregate statistics are computed over them. The order of magnitude of these statistical features is far lower than that of the features extracted by the convolution operation, while the classification effect is improved at the same time. Commonly used pooling methods mainly include the average pooling method and the max pooling method: average pooling computes one average feature of a feature set to represent that set, while max pooling extracts the maximum feature of a feature set to represent it. Through the convolution operation and the pooling operation, the most salient target features of the unit target image data can be extracted, and the number of those target features is small. It should be noted that the convolutional neural network may have one convolutional layer or several, and one pooling layer or several.
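A small NumPy illustration of the pooling step described above, with the window size and method as illustrative assumptions:

```python
import numpy as np

def pool2d(feature_map, size=2, method="max"):
    # Trim the map so both dimensions are multiples of the window size
    h, w = feature_map.shape
    h, w = h - h % size, w - w % size
    blocks = feature_map[:h, :w].reshape(h // size, size, w // size, size)
    if method == "max":
        return blocks.max(axis=(1, 3))   # max pooling: keep the largest feature
    return blocks.mean(axis=(1, 3))      # average pooling: one mean feature per block
```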
Step S204, carrying out similarity comparison on the target features and the feature set in the convolutional neural network model;
Specifically, the output layer of the trained convolutional neural network model is a classifier. The number of nodes of the classifier is the same as the number of classes of sample label information, which in turn matches the number of feature categories of the sample image data; each classification node of the classifier comprises a feature set extracted from the sample image data corresponding to one type of sample label information, so the number of feature sets equals the number of sample label information classes. After the target features of the unit target image data are extracted, they are compared for similarity with the feature sets of the output layer in the convolutional neural network model. Similarity can be measured by distance: the shorter the distance between two features, the greater their similarity. The distance measure may be the Euclidean distance, the Mahalanobis distance, or the Hamming distance.
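The comparison can be sketched as a nearest-neighbour search. The sketch below assumes the Euclidean distance (Mahalanobis or Hamming would be drop-in replacements) and NumPy arrays of features; both are assumptions for illustration:

```python
import numpy as np

def best_matching_label(target_feature, feature_sets):
    """feature_sets maps label information -> array of sample features, one row each."""
    best_label, best_dist = None, float("inf")
    for label, feats in feature_sets.items():
        dist = np.linalg.norm(feats - target_feature, axis=1).min()  # Euclidean
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label
```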
Step S205, in the feature set, obtaining the tag information corresponding to the feature with the largest similarity as the first tag information matched with the unit target image data, and adding the first tag information to the target tag information set.
Specifically, after the target feature is compared with a feature set in the convolutional neural network model, sample tag information corresponding to a feature with the maximum similarity is obtained, the obtained sample tag information is determined as first tag information matched with the unit target image data, and the first tag information is added to the target tag information set.
For example, suppose the output layer in the convolutional neural network model has 3 feature sets, where feature set A was extracted from the sample image data corresponding to the sample label information "person", feature set B from the sample image data corresponding to the sample label information "building", and feature set C from the sample image data corresponding to the sample label information "animal". A target feature "d" is extracted from the unit target image data, and the distance from "d" to the features in feature set A, feature set B and feature set C is measured. If the distance between feature "d" and a feature in feature set B is the smallest, the first label information corresponding to the unit target image data is determined to be the sample label information corresponding to feature set B; that is, the first label information matched with the unit target image data is "building", and the first label information "building" is added to the target label information set.
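In terms of the nearest-neighbour sketch above, this worked example reads as follows, with hypothetical two-dimensional arrays standing in for the sets A, B and C:

```python
import numpy as np

feature_sets = {
    "person":   np.array([[0.9, 0.1], [0.8, 0.2]]),   # hypothetical set A
    "building": np.array([[0.1, 0.9], [0.2, 0.8]]),   # hypothetical set B
    "animal":   np.array([[0.5, 0.5]]),               # hypothetical set C
}
d = np.array([0.15, 0.85])                            # target feature "d"
best_matching_label(d, feature_sets)                  # -> "building"
```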
The embodiment of the invention obtains target image data and inputs the target image data into a convolutional neural network model; acquiring a target label information set matched with the target image data in a convolutional neural network model by utilizing a classification function of the convolutional neural network model, wherein label information in the target label information set is used for marking the characteristic category of the target image data; and extracting target special effect processing information corresponding to the target label information set from an image special effect library, carrying out special effect processing on the target image data according to the target special effect processing information to obtain target special effect image data, and displaying the target special effect image data. Therefore, in the whole process of carrying out special effect processing on the target image data, the characteristic type of the target image data can be automatically identified without manual participation, and the special effect processing is automatically carried out on the target image data, so that the complicated steps caused by manual image special effect processing can be avoided, and the efficiency of image data processing is improved. And simultaneously, a plurality of characteristic categories in the target image data are identified, and target special effect processing information corresponding to a plurality of label information in the target label information set is respectively executed, so that special effect processing modes can be enriched, and the special effect processing effect is improved.
Further, please refer to fig. 4, which is a flowchart illustrating another image processing method according to an embodiment of the present invention. As shown in fig. 4, the method may include the steps of:
step S301, acquiring target image data, and inputting the target image data into a convolutional neural network model;
step S302, a target label information set matched with the target image data is obtained from the convolutional neural network model, and label information in the target label information set is used for marking the feature category of the target image data;
for specific implementation processes of step S301 to step S302, reference may be made to the description of step S101 to step S102 in the embodiment corresponding to fig. 2, and details will not be further described here.
Step S303, extracting target special effect processing information corresponding to the first label information in the target label information set from the image special effect library;
Specifically, according to the first tag information in the target tag information set, target special effect processing information corresponding to the first tag information is extracted from the image special effect library; the target special effect processing information may include text and pictures to be added to the target image data, a purchase link to be added to a target object in the target image data, and the like. If there is more than one piece of first tag information in the target tag information set, the target special effect processing information corresponding to each piece of first tag information is extracted from the image special effect library separately. For the process of acquiring the first tag information, reference may be made to the description of steps S201 to S205 in fig. 3.
Step S304, searching a first image area where the unit target image data is located, and adding a multimedia material in the target special effect processing information corresponding to the first label information to the first image area;
Specifically, since the unit target image data was segmented along the contours of the target objects in the target image data, in order to make the subsequent special effect processing match the image content closely, the first image area where the unit target image data is located is searched for first, and then the multimedia material in the target special effect processing information corresponding to the first label information, as extracted from the image special effect library, is added to that first image area. If the target image data comprises multiple unit target image data, the first image area of each unit target image data is searched for separately, and the multimedia material in the target special effect processing information corresponding to the first label information matched with each unit target image data is added in turn to the corresponding first image area. Optionally, if it is detected that the first label information corresponding to two or more unit target image data have an association relationship (the association relationship may be preset; for example, label information A may be preset as associated with label information B, and also as associated with label information C), and the pixel distance between the target objects in those unit target image data is recognized to be smaller than a distance threshold, then the target special effect processing information jointly mapped by those pieces of first label information may be extracted, and the multimedia material in that target special effect processing information may be added to a first image area containing all of those unit target image data (that is, a first image area containing their target objects). For example, suppose two unit target image data are segmented from the target image data, the first label information corresponding to one being "woman" and the first label information corresponding to the other being "man" (the first label information "woman" and "man" having an association relationship), and the pixel distance between the target objects (i.e., the person images) in the two unit target image data is smaller than the distance threshold. The target special effect processing information jointly mapped by the first label information "woman" and the first label information "man" (its multimedia material being a love-heart picture) can then be found in the image special effect library, the image area jointly occupied by the two unit target image data can be determined as the first image area (i.e., a first image area containing both the woman image and the man image), and the love-heart picture in the target special effect processing information can be added to that first image area, for example between the man image and the woman image.
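A minimal sketch of the "add material to an image area" step, assuming Pillow and the (x, y, w, h) bounding boxes recorded during segmentation — an illustration, not this embodiment's implementation:

```python
from PIL import Image

def add_material(target_image, material_path, area):
    # area is the (x, y, w, h) bounding box recorded for the unit target image data
    x, y, w, h = area
    material = Image.open(material_path).convert("RGBA").resize((w, h))
    target_image.paste(material, (x, y), mask=material)  # alpha-composite the sticker
    return target_image
```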
Step S305, determining the target image data to which the multimedia material has been added as the target special effect image data, and displaying the target special effect image data.
Specifically, the target image data to which the multimedia material is added is determined as target special effect image data, the target special effect image data is displayed on a screen of the terminal device, and a user can directly save the target special effect image data into a local folder by clicking a save button or delete the target special effect image data by clicking a delete button.
Referring to fig. 4a and 4b together: by recognizing the contours of the target objects, the target image data in fig. 4a is divided into two unit target image data, one containing the house feature image and the other containing the puppy feature image, with the first image areas where the two unit target image data are located being the image area 500a and the image area 300a respectively. The two unit target image data are input into the convolutional neural network model separately; using the feature extraction and image classification functions of the model, the first label information matched with the unit target image data containing the house feature image is acquired as "building", and the first label information matched with the unit target image data containing the puppy feature image is acquired as "animal". The target special effect processing information corresponding to the first label information "building" is extracted from the image special effect library as a flash picture to be added to the target image data, and the target special effect processing information corresponding to the first label information "animal" as a footprint picture. Adding the flash picture to the first image area 500a and the footprint picture to the first image area 300a yields the target special effect image data shown in fig. 4b, which is displayed on the screen of the terminal device.
The embodiment of the invention obtains target image data and inputs the target image data into a convolutional neural network model; acquiring a target label information set matched with the target image data in a convolutional neural network model by utilizing a classification function of the convolutional neural network model, wherein label information in the target label information set is used for marking the characteristic category of the target image data; and extracting target special effect processing information corresponding to the target label information set from an image special effect library, carrying out special effect processing on the target image data according to the target special effect processing information to obtain target special effect image data, and displaying the target special effect image data. Therefore, in the whole process of processing the special effect of the target image data, the characteristic type of the target image data can be automatically identified without manual participation, the special effect processing is automatically carried out on the target image data, further, the complicated steps caused by manual image special effect processing can be avoided, and the efficiency of processing the image data is improved. And simultaneously, a plurality of characteristic categories in the target image data are identified, and target special effect processing information corresponding to a plurality of label information in the target label information set is respectively executed, so that special effect processing modes can be enriched, and the special effect processing effect is improved.
Further, please refer to fig. 5, which is a flowchart illustrating another image processing method according to an embodiment of the present invention. As shown in fig. 5, the method may include the steps of:
step S401, acquiring target image data, and inputting the target image data into a convolutional neural network model;
step S402, a target label information set matched with the target image data is obtained from the convolutional neural network model, and label information in the target label information set is used for marking the feature type of the target image data;
For the specific implementation process of steps S401 to S402, reference may be made to the description of steps S101 to S102 in the embodiment corresponding to fig. 2, and details will not be repeated here.
Step S403, extracting, in the image special effect library, target special effect processing information corresponding to the first tag information in the target tag information set;
step S404, searching a first image area where the unit target image data is located, and adding a multimedia material in the target special effect processing information corresponding to the first label information to the first image area;
the specific implementation manner of step S403 and step S404 may refer to the description of step S303 and step S304 in the embodiment corresponding to fig. 4, and the process of acquiring the first tag information may refer to the description of step S201 to step S205 in fig. 3, which will not be described again.
Step S405, acquiring auxiliary information corresponding to the target image data;
Specifically, the terminal device obtains auxiliary information corresponding to the target image data, where the auxiliary information includes at least one of an environmental parameter, a device state parameter, and an image remark information keyword. The environmental parameter comprises one or more of time, location and weather; the device state parameters include one or more of temperature, acceleration and velocity; the image remark information keyword is a keyword extracted, using a keyword extraction algorithm (for example, a term frequency-inverse document frequency algorithm, a topic model algorithm, a text ranking algorithm, and the like), from remark information entered by the user. When the environmental parameter is time, the EXIF (Exchangeable Image File Format) information of the target image data is read, and the time corresponding to the target image data is acquired from it. When the environmental parameter is location and the target image data was shot in real time by the user through a camera application, the position where the terminal device shot the target image data can be located through the Global Positioning System (GPS) and identified as the location corresponding to the target image data; if the target image data was extracted from a local album application or a video application, the location corresponding to the target image data may be acquired from the EXIF information. When the environmental parameter is weather, the weather corresponding to the target image data is acquired by querying the weather at the time corresponding to the target image data. When the device state parameter is temperature, the temperature corresponding to the target image data is acquired through a thermometer; when it is acceleration, the change in acceleration is sensed through an acceleration sensor (accelerometer) and the acceleration corresponding to the target image data is acquired.
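For the EXIF time case, a small sketch assuming a recent Pillow release (tag 0x9003 is DateTimeOriginal in the Exif sub-IFD, 0x0132 is DateTime in IFD0); the helper name is hypothetical:

```python
from PIL import Image

def exif_capture_time(path):
    exif = Image.open(path).getexif()
    sub = exif.get_ifd(0x8769)                   # Exif sub-IFD
    return sub.get(0x9003) or exif.get(0x0132)   # DateTimeOriginal, else DateTime
```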
Step S406, second label information matched with the auxiliary information is obtained, and the second label information is added to the target label information set;
Specifically, after the auxiliary information corresponding to the target image data is acquired, second label information matched with the auxiliary information is acquired and added to the target label information set. When the time in the environmental parameters is acquired, the time range that the time falls into is identified, the second label information corresponding to that time range is acquired, and the second label information is added to the target label information set; for example, if the time acquired from the EXIF information is 23:00, the time range it falls into is identified as late night, so the second label information "night" corresponding to late night is acquired and added to the target label information set. After the location in the environmental parameters is acquired, the second label information corresponding to that location is identified and added to the target label information set; for example, if the location corresponding to the target image data is identified through GPS positioning as Beijing·Sudoku, the corresponding second label information is identified as "building" and added to the target label information set. After the weather in the environmental parameters is acquired, the second label information corresponding to the weather is acquired and added to the target label information set; for example, if the weather application in the terminal device reports that the weather corresponding to the target image data is "clear", the second label information "sunshine" corresponding to the weather "clear" is acquired and added to the target label information set. After the temperature in the device state parameters is acquired, the second label information corresponding to the temperature is identified and added to the target label information set; likewise, second label information corresponding to the velocity, the acceleration and the image remark information keyword can each be added to the target label information set.
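A sketch of the time-range example above; the range boundaries and the "daylight" label are illustrative assumptions, with only the hour-23 → "night" mapping taken from the example:

```python
def second_tag_from_hour(hour):
    # Illustrative ranges: daytime hours get "daylight", the rest "night"
    if 6 <= hour < 18:
        return "daylight"
    return "night"

second_tag_from_hour(23)  # -> "night", to be added to the target label information set
```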
Step S407, extracting, from the image special effect library, target special effect processing information corresponding to the second tag information in the target tag information set;
Specifically, according to the second tag information in the target tag information set, target special effect processing information corresponding to the second tag information is extracted from the image special effect library. The target special effect processing information may include text and pictures to be added to the target image data, or a purchase link to be added to a target object in the target image data. If the target tag information set contains more than one piece of second tag information, target special effect processing information corresponding to each piece of second tag information is extracted from the image special effect library respectively.
Step S408, identifying a second image area in the target image data, which is associated with the auxiliary information, and adding a multimedia material in target special effect processing information corresponding to the second label information to the second image area;
Specifically, since the second tag information is derived from the auxiliary information corresponding to the target image data, the subsequent special effect processing should further adapt to the image content in the target image data. First, a second image region associated with the auxiliary information is identified in the target image data; then, the multimedia material in the target special effect processing information corresponding to the second tag information, extracted from the image special effect library, is added to the second image region. If the target tag information set contains a plurality of pieces of second tag information, the second image region associated with the auxiliary information matching each piece of second tag information is identified respectively, the target special effect processing information corresponding to each piece of second tag information is extracted, and the multimedia material in the target special effect processing information corresponding to each piece of second tag information is added in turn to the associated second image region.
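A minimal sketch of the material-adding step, assuming the Python Pillow library; the (x, y, w, h) region format and the function name are assumptions for illustration:

    from PIL import Image

    def add_material(target_image, material_path, region):
        # Paste a (possibly transparent) material picture into the image
        # region associated with a tag; region is an (x, y, w, h) box.
        x, y, w, h = region
        material = Image.open(material_path).convert("RGBA").resize((w, h))
        target_image.paste(material, (x, y), mask=material)  # alpha-composited
        return target_image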
Step S409, if the target special effect processing information further includes an image adjustment parameter, performing image parameter adjustment on the target image data according to the image adjustment parameter, determining the target image data to which the multimedia material has been added and which has been subjected to parameter adjustment as the target special effect image data, and displaying the target special effect image data.
Specifically, after the multimedia material in the target special effect processing information corresponding to the first tag information has been added to the first image area in the target image data, and the multimedia material in the target special effect processing information corresponding to the second tag information has been added to the second image area in the target image data, if the target special effect processing information further includes an image adjustment parameter, the image parameters of the target image data are adjusted, where the image parameters may include brightness or saturation (for example, if the target object recognized in the target image data includes a human face, the target special effect processing information may include an image adjustment parameter for brightening the target image data). The target image data to which the multimedia materials corresponding to the first tag information and the second tag information have been added, and whose parameters have been adjusted, is determined as the target special effect image data, and the target special effect image data is displayed on the screen of the terminal device.
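For example, a brightness and saturation adjustment of this kind could be sketched with Pillow's ImageEnhance module as follows (the parameter names and default factors are assumptions):

    from PIL import ImageEnhance

    def apply_adjustment(image, brightness=1.0, saturation=1.0):
        # Factors greater than 1.0 brighten or saturate the image,
        # factors below 1.0 do the opposite; 1.0 leaves it unchanged.
        image = ImageEnhance.Brightness(image).enhance(brightness)
        image = ImageEnhance.Color(image).enhance(saturation)
        return image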
Further, please refer to fig. 5a and fig. 5b, which are schematic interface diagrams of another image processing method according to an embodiment of the present invention. The target image data in fig. 5a is input into the convolutional neural network model, first label information matched with the target image data is acquired from the convolutional neural network model as "building", and the first image area is determined to be the image area containing the building feature image, that is, the first image area in fig. 5b. The auxiliary information corresponding to the target image data is acquired from the weather application in the terminal device as "rainstorm", the second label information corresponding to the auxiliary information "rainstorm" is identified as "rain scene", and the second image area associated with the auxiliary information "rainstorm" in the target image data is determined to be the image area containing the sky feature image, that is, the second image area in fig. 5b. In the image special effect library, the target special effect processing information corresponding to the first label information "building" is found to be a flash picture to be added to the target image data, and the target special effect processing information corresponding to the second label information "rain scene" is found to be a lightning picture to be added to the target image data. Adding the flash picture to the first image area in fig. 5b and the lightning picture to the second image area in fig. 5b yields the target special effect image data shown in fig. 5b, which is displayed on the screen of the terminal device.
According to the embodiment of the present invention, target image data is acquired and input into a convolutional neural network model; a target label information set matched with the target image data is acquired from the convolutional neural network model through its classification function, where the label information in the target label information set is used to mark the feature categories of the target image data; target special effect processing information corresponding to the target label information set is extracted from an image special effect library, special effect processing is performed on the target image data according to the target special effect processing information to obtain target special effect image data, and the target special effect image data is displayed. Thus, throughout the special effect processing of the target image data, the feature categories of the target image data are recognized automatically and the special effect processing is performed automatically, without manual participation, which avoids the tedious steps of manual image special effect processing and improves the efficiency of image data processing. Meanwhile, a plurality of feature categories in the target image data are recognized and the target special effect processing information corresponding to the plurality of pieces of label information in the target label information set is applied respectively, which enriches the special effect processing modes and improves the special effect processing effect.
Further, please refer to fig. 6, which is a flowchart illustrating another image processing method according to an embodiment of the present invention. As shown in fig. 6, the image processing method includes the steps of:
step S501, obtaining sample image data and sample label information corresponding to the sample image data, wherein the sample label information is used for marking the feature category of the sample image data;
Specifically, the terminal device may download image data from an image database as sample image data and set corresponding sample label information for each piece of sample image data according to its image content, or may obtain image data that has already been labeled with sample label information. The sample label information is used to mark the feature category of the sample image data, and may be a number or another character with distinguishing meaning. For example, when the sample label information is "person" or the numerical value 1, the corresponding sample image data contains feature categories such as men, women, children, or elderly people; when the sample label information is "animal" or the numerical value 2, the corresponding sample image data contains feature categories such as cats, mice, or dogs; when the sample label information is "building" or the numerical value 3, the corresponding sample image data contains feature categories such as the Great Wall, the Forbidden City, or the Tower of London.
Step S502, constructing the convolutional neural network model according to the mapping relation between the sample image data and the sample label information;
Specifically, the sample image data carries corresponding sample label information, and sample image data with the same sample label information belongs to one category. Taking the category as the unit, the multiple categories of sample image data are respectively input into the input layer of the convolutional neural network model to construct the convolutional neural network model, where the parameter size of the input layer is equal to the size of the input sample image data. After the sample image data is input to the input layer of the convolutional neural network, a small block of the sample image data is first selected at random as a sample, some features are learned from this small sample, and the sample is then slid as a window across all pixel regions of the sample image data in turn; that is, the features learned from the sample are convolved with the sample image data, so as to obtain the most significant features at different positions of the sample image data. After the convolution operation is completed, the features of the sample image data have been extracted, but the number of features extracted by the convolution operation alone is large; to reduce the amount of computation, a pooling operation is also needed, that is, aggregation statistics are performed on the features extracted by the convolution operation. The order of magnitude of the statistical features is far lower than that of the features extracted by the convolution operation, and the classification effect is improved at the same time. Commonly used pooling methods mainly include the average pooling operation and the maximum pooling operation. The average pooling operation computes an average feature in a feature set to represent that feature set; the maximum pooling operation selects the maximum feature in a feature set to represent that feature set. Through the convolution operation and the pooling operation, the most significant sample features of the sample image data can be extracted, and the number of these sample features is small.
The output layer of the convolutional neural network is a classifier. Since the input sample image data carries sample label information and the convolutional neural network model is constructed with each kind of sample label information as one category, the number of classification nodes of the classifier equals the number of categories of sample label information, which in turn equals the number of feature categories of the sample image data; in other words, the output layer of the convolutional neural network model is constructed with one classification node per category of input sample label information. For example, suppose the sample image data with sample label information "person" are A1, A2, A3, the sample image data with sample label information "animal" are B1, B2, B3, and the sample image data with sample label information "building" are C1, C2, C3. For the convolutional neural network model constructed, through the convolution and pooling operations, from the labeled sample image data A1, A2, A3, B1, B2, B3, C1, C2, C3, the classifier of the output layer has 3 nodes corresponding to the sample label information "person", "animal", and "building" respectively, and the 3 nodes also correspond to 3 feature sets T1, T2, T3: the features in feature set T1 are extracted from sample image data A1, A2, A3; the features in feature set T2 are extracted from sample image data B1, B2, B3; and the features in feature set T3 are extracted from sample image data C1, C2, C3.
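As a minimal sketch of such a model, assuming the PyTorch framework (the two-stage layout, 224x224 input size, and channel counts are illustrative assumptions; only the one-node-per-label-category classifier is dictated by the description above):

    import torch
    import torch.nn as nn

    class TagNet(nn.Module):
        # Two convolution + max-pooling stages followed by a classifier
        # whose node count equals the number of sample label categories
        # (3 here: "person", "animal", "building").
        def __init__(self, num_classes=3):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
            )
            self.classifier = nn.Linear(32 * 56 * 56, num_classes)

        def forward(self, x):            # x: (batch, 3, 224, 224)
            x = self.features(x)         # convolution and pooling
            x = torch.flatten(x, 1)
            return self.classifier(x)    # one output node per label class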
Step S503, acquiring target image data, and inputting the target image data into a convolutional neural network model;
step S504, a target label information set matched with the target image data is obtained from the convolutional neural network model, and label information in the target label information set is used for marking the feature type of the target image data;
step S505, extracting target special effect processing information corresponding to the target label information set from an image special effect library, performing special effect processing on the target image data according to the target special effect processing information to obtain target special effect image data, and displaying the target special effect image data;
for specific implementation processes of step S503 to step S505, reference may be made to the description of step S101 to step S103 in the embodiment corresponding to fig. 2, and details will not be further described here.
Step S506, if a special effect switching instruction is obtained, randomly selecting special effect processing information in the image special effect library as random special effect processing information, and performing special effect updating processing on the target special effect image data according to the random special effect processing information to obtain updated target special effect image data;
Specifically, the user previews the generated target special effect image data on the screen and, if not satisfied with it, clicks the "random" button to generate a special effect switching instruction. If the terminal device obtains the special effect switching instruction, it randomly selects special effect processing information from the image special effect library as random special effect processing information, performs special effect update processing on the target special effect image data according to the random special effect processing information to obtain updated target special effect image data, and displays the updated target special effect image data on the screen of the terminal device. For the specific process of generating the updated target special effect image data, reference may be made to step S303 to step S305 in fig. 4 or step S403 to step S409 in fig. 5, which will not be repeated here.
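The random selection itself can be as simple as the following sketch (the current argument, which keeps the effect currently shown from being drawn again, is our own assumption):

    import random

    def switch_effect(effect_library, current=None):
        # Draw a new effect from the image special effect library on a
        # "random" button press, avoiding the one currently shown so
        # the preview visibly changes whenever an alternative exists.
        candidates = [e for e in effect_library if e is not current]
        return random.choice(candidates or list(effect_library))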
After the target special effect image data is generated, if the user directly clicks the "save" button, it can be inferred that the target tag information set acquired from the convolutional neural network model for the target image data accurately identifies the feature categories of the target image data (that is, it meets the user's expectation). The target image data and the tag information in the corresponding target tag information set can therefore be fed back into the convolutional neural network model as sample image data and sample label information, so as to continuously optimize the parameters of the convolutional neural network model and generate a more accurate target tag information set for the next target image data, thereby improving the image recognition accuracy of the convolutional neural network model.
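A sketch of one such optimization step, assuming the PyTorch framework and a model like the one outlined earlier (the function name and the single-sample update are illustrative assumptions):

    import torch
    import torch.nn.functional as F

    def fine_tune_step(model, optimizer, image_tensor, label_index):
        # One gradient step on a user-confirmed (image, label) pair,
        # nudging the model parameters toward the accepted prediction.
        model.train()
        optimizer.zero_grad()
        logits = model(image_tensor.unsqueeze(0))   # add a batch dimension
        loss = F.cross_entropy(logits, torch.tensor([label_index]))
        loss.backward()
        optimizer.step()
        return loss.item()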
According to the embodiment of the present invention, target image data is acquired and input into a convolutional neural network model; a target label information set matched with the target image data is acquired from the convolutional neural network model through its classification function, where the label information in the target label information set is used to mark the feature categories of the target image data; target special effect processing information corresponding to the target label information set is extracted from an image special effect library, special effect processing is performed on the target image data according to the target special effect processing information to obtain target special effect image data, and the target special effect image data is displayed. Thus, throughout the special effect processing of the target image data, the feature categories of the target image data are recognized automatically and the special effect processing is performed automatically, without manual participation, which avoids the tedious steps of manual image special effect processing and improves the efficiency of image data processing. Meanwhile, a plurality of feature categories in the target image data are recognized and the target special effect processing information corresponding to the plurality of pieces of label information in the target label information set is applied respectively, which enriches the special effect processing modes and improves the special effect processing effect.
Further, please refer to fig. 7, which is a schematic structural diagram of an image processing apparatus according to an embodiment of the present invention, as shown in fig. 7, the image processing apparatus 1 may be applied to the terminal device in the embodiment corresponding to fig. 2, and the image processing apparatus 1 may include: the system comprises a first input module 10, a target label obtaining module 20 and a special effect processing module 30;
the first input module 10 is used for acquiring target image data and inputting the target image data into a convolutional neural network model;
a target tag obtaining module 20, configured to obtain a target tag information set matched with the target image data in the convolutional neural network model, where tag information in the target tag information set is used to mark a feature category of the target image data;
a special effect processing module 30, configured to extract target special effect processing information corresponding to the target label information set from an image special effect library, perform special effect processing on the target image data according to the target special effect processing information, obtain target special effect image data, and display the target special effect image data.
For specific functional implementation manners of the first input module 10, the target tag obtaining module 20, and the special effect processing module 30, reference may be made to steps S101 to S103 in the corresponding embodiment of fig. 2, which is not described herein again.
Referring to fig. 7 together, the image processing apparatus 1 may further include: the system comprises a sample label obtaining module 40, a construction module 50, an auxiliary information obtaining module 60, a second label obtaining module 70 and a random selecting module 80;
a sample label acquiring module 40, configured to acquire sample image data and sample label information corresponding to the sample image data, where the sample label information is used to mark a feature category of the sample image data;
a constructing module 50, configured to construct the convolutional neural network model according to a mapping relationship between the sample image data and the sample label information;
an auxiliary information obtaining module 60, configured to obtain auxiliary information corresponding to the target image data;
a second tag obtaining module 70, configured to obtain second tag information that matches the auxiliary information, and add the second tag information to the target tag information set;
and a random selection module 80, configured to, if a special effect switching instruction is obtained, randomly select target special effect processing information in the image special effect library as random target special effect processing information, and perform special effect update processing on the target special effect image data according to the random target special effect processing information to obtain updated target special effect image data.
The specific functional implementation manners of the sample label obtaining module 40 and the constructing module 50 may refer to steps S501 to S502 in the embodiment corresponding to fig. 6; the specific functional implementation manners of the auxiliary information obtaining module 60 and the second tag obtaining module 70 may refer to step S404 to step S405 in the embodiment corresponding to fig. 5; the specific function implementation manner of the random selection module 80 may refer to step S506 in the embodiment corresponding to fig. 6, which is not described herein again.
Further, as shown in fig. 7, the target tag obtaining module 20 includes: an image segmentation unit 201 and a first label acquisition unit 202;
an image dividing unit 201, configured to identify a target object contour in the target image data, and divide the target image data into at least one unit target image data according to the target object contour;
a first tag obtaining unit 202, configured to input the unit target image data into the convolutional neural network model, obtain first tag information that matches the unit target image data in the convolutional neural network model, and add the first tag information to the target tag information set.
For specific functional implementation manners of the image segmentation unit 201 and the first tag obtaining unit 202, reference may be made to S201 to S202 in the embodiment corresponding to fig. 3, which is not described herein again.
Further, as shown in fig. 7, the special effect processing module 30 includes: a first extracting unit 301, a first adding unit 302, a display unit 303, a second extracting unit 304, a second adding unit 305;
a first extracting unit 301, configured to extract, in the image special effects library, target special effects processing information corresponding to the first tag information in the target tag information set;
a first adding unit 302, configured to search a first image area where the unit target image data is located, and add a multimedia material in target special effect processing information corresponding to the first tag information to the first image area;
a display unit 303 configured to determine target image data to which the multimedia material has been added as the target special effect image data, and display the target special effect image data;
the display unit 303 is specifically configured to, if the target special effect processing information further includes an image adjustment parameter, perform image parameter adjustment on the target image data according to the image adjustment parameter, determine the target image data to which the multimedia material is added and to which the parameter is adjusted as the target special effect image data, and display the target special effect image data.
A second extracting unit 304, configured to extract, in the image special effects library, target special effects processing information corresponding to the second tag information in the target tag information set;
a second adding unit 305, configured to identify a second image area in the target image data, where the second image area is associated with the auxiliary information, and add a multimedia material in target special effects processing information corresponding to the second tag information to the second image area.
For specific functional implementation manners of the first extracting unit 301, the first adding unit 302, the displaying unit 303, the second extracting unit 304, and the second adding unit 305, reference may be made to steps S303 to S305 in the embodiment corresponding to fig. 4 and steps S406 to S409 in the embodiment corresponding to fig. 5, which is not described herein again.
Further, please refer to fig. 8, which is a schematic structural diagram of a first tag obtaining unit according to an embodiment of the present invention, and as shown in fig. 8, the first tag obtaining unit 202 includes: a second input subunit 2021, a feature extraction subunit 2022, a comparison subunit 2023, and a determination subunit 2024;
a second input subunit 2021, configured to input the unit target image data into the convolutional neural network model;
specifically, in order to improve the accuracy of image recognition, the second input subunit 2021 may adjust the divided unit target image data to a fixed size, and then input the adjusted unit target image data to an input layer in the convolutional neural network model. The second input subunit 2021 may randomly input the divided unit target image data into the convolutional neural network model, or sequentially input the unit target image data into the convolutional neural network model according to the order of acquiring the unit target image data. It is understood that the above-described convolutional neural network has been constructed in advance, and the parameter size of the input layer is equal to the size of the resized unit target image data.
A feature extraction subunit 2022, configured to extract a target feature of the unit target image data through convolution operation and pooling operation;
Specifically, after the second input subunit 2021 inputs the unit target image data into the input layer of the convolutional neural network, the data enters the convolutional layer. The feature extraction subunit 2022 first selects a small block of the unit target image data at random as a sample, learns some features from this small sample, and then slides the sample as a window across all pixel regions of the unit target image data in turn; that is, the features learned from the sample are convolved with the unit target image data, so as to obtain the most significant features at different positions of the unit target image data. After the convolution operation is completed, the feature extraction subunit 2022 has extracted the features of the unit target image data, but the number of features extracted by the convolution operation alone is large; to reduce the amount of computation, a pooling operation is also needed, that is, the features extracted from the unit target image data by the convolution operation are transmitted to the pooling layer, and aggregation statistics are performed on the extracted features. The order of magnitude of the statistical features is far lower than that of the features extracted by the convolution operation, and the classification effect is improved at the same time. Commonly used pooling methods mainly include the average pooling operation and the maximum pooling operation: the average pooling operation computes an average feature in a feature set to represent that feature set, and the maximum pooling operation selects the maximum feature in a feature set to represent that feature set. Through the convolution operation and the pooling operation, the most significant target features of the unit target image data can be extracted, and the number of these target features is small. It should be noted that the convolutional neural network may have one convolutional layer or multiple convolutional layers, and likewise one pooling layer or multiple pooling layers.
A comparison subunit 2023, configured to perform similarity comparison between the target feature and the feature set in the convolutional neural network model;
a determining subunit 2024, configured to acquire, in the feature set, tag information corresponding to a feature with the largest similarity as first tag information that matches the unit target image data.
For specific functional implementation manners of the second input subunit 2021, the feature extraction subunit 2022, the comparison subunit 2023, and the determination subunit 2024, reference may be made to S202-S205 in the embodiment corresponding to fig. 3 described above, which is not described herein again.
According to the embodiment of the present invention, target image data is acquired and input into a convolutional neural network model; a target label information set matched with the target image data is acquired from the convolutional neural network model through its classification function, where the label information in the target label information set is used to mark the feature categories of the target image data; target special effect processing information corresponding to the target label information set is extracted from an image special effect library, special effect processing is performed on the target image data according to the target special effect processing information to obtain target special effect image data, and the target special effect image data is displayed. Thus, throughout the special effect processing of the target image data, the feature categories of the target image data are recognized automatically and the special effect processing is performed automatically, without manual participation, which avoids the tedious steps of manual image special effect processing and improves the efficiency of image data processing. Meanwhile, a plurality of feature categories in the target image data are recognized and the target special effect processing information corresponding to the plurality of pieces of label information in the target label information set is applied respectively, which enriches the special effect processing modes and improves the special effect processing effect.
Fig. 9 is a schematic structural diagram of another image processing apparatus according to an embodiment of the present invention. As shown in fig. 9, the image processing apparatus 1000 may be applied to the terminal device in the embodiment corresponding to fig. 2, and the image processing apparatus 1000 may include: a processor 1001, a network interface 1004, and a memory 1005; the image processing apparatus 1000 may further include: a user interface 1003 and at least one communication bus 1002. The communication bus 1002 is used to implement connection and communication among these components. The user interface 1003 may include a display screen (Display) and a keyboard (Keyboard), and the optional user interface 1003 may further include a standard wired interface and a standard wireless interface. The network interface 1004 may optionally include a standard wired interface and a wireless interface (e.g., a WI-FI interface). The memory 1005 may be a high-speed RAM memory or a non-volatile memory (e.g., at least one disk memory). The memory 1005 may optionally also be at least one storage device located remotely from the processor 1001. As shown in fig. 9, the memory 1005, as a computer storage medium, may include an operating system, a network communication module, a user interface module, and a device control application program.
In the image processing apparatus 1000 shown in fig. 9, the network interface 1004 may provide a network communication function; the user interface 1003 is an interface for providing input for a user; and the processor 1001 may be configured to invoke the device control application program stored in the memory 1005 to implement:
acquiring target image data, and inputting the target image data into a convolutional neural network model;
acquiring a target label information set matched with the target image data from the convolutional neural network model, wherein label information in the target label information set is used for marking the characteristic category of the target image data;
and extracting target special effect processing information corresponding to the target label information set from an image special effect library, carrying out special effect processing on the target image data according to the target special effect processing information to obtain target special effect image data, and displaying the target special effect image data.
In one embodiment, when the processor 1001 obtains the target label information set matching the target image data in the convolutional neural network model, specifically performs the following steps:
identifying a target object contour in the target image data, and dividing the target image data into at least one unit of target image data according to the target object contour;
inputting the unit target image data into the convolutional neural network model, acquiring first label information matched with the unit target image data in the convolutional neural network model, and adding the first label information to the target label information set.
In one embodiment, when the processor 1001 performs the inputting of the unit target image data into the convolutional neural network model and acquires the first tag information matching with the unit target image data in the convolutional neural network model, specifically perform the following steps:
inputting the unit target image data into the convolutional neural network model;
specifically, in order to improve the accuracy of image recognition, the segmented unit target image data may be resized to a fixed size, and then the resized unit target image data may be input to the input layer in the convolutional neural network model. The segmented unit target image data can be randomly input into the convolutional neural network model or can be sequentially input into the convolutional neural network model according to the sequence of acquiring the unit target image data. It is understood that the above-described convolutional neural network has been constructed in advance, and the parameter size of the input layer is equal to the size of the resized unit target image data.
Extracting the target characteristics of the unit target image data through convolution operation and pooling operation;
Specifically, after the unit target image data is input to the input layer of the convolutional neural network, it enters the convolutional layer. A small block of the unit target image data is first selected at random as a sample, some features are learned from this small sample, and the sample is then slid as a window across all pixel regions of the unit target image data in turn; that is, the features learned from the sample are convolved with the unit target image data, so as to obtain the most significant features at different positions of the unit target image data. After the convolution operation is completed, the features of the unit target image data have been extracted, but the number of features extracted by the convolution operation alone is large; to reduce the amount of computation, a pooling operation is also needed, that is, the features extracted from the unit target image data by the convolution operation are transmitted to the pooling layer, and aggregation statistics are performed on the extracted features. The order of magnitude of the statistical features is far lower than that of the features extracted by the convolution operation, and the classification effect is improved at the same time. Commonly used pooling methods mainly include the average pooling operation and the maximum pooling operation: the average pooling operation computes an average feature in a feature set to represent that feature set, and the maximum pooling operation selects the maximum feature in a feature set to represent that feature set. Through the convolution operation and the pooling operation, the most significant target features of the unit target image data can be extracted, and the number of these target features is small. It should be noted that the convolutional neural network may have one convolutional layer or multiple convolutional layers, and likewise one pooling layer or multiple pooling layers.
Comparing the similarity of the target feature with a feature set in the convolutional neural network model;
Specifically, the output layer of the trained convolutional neural network model is a classifier. The number of nodes of the classifier is the same as the number of categories of sample label information, which is also consistent with the number of feature categories of the sample image data; each classification node of the classifier contains a feature set extracted from the sample image data corresponding to one category of sample label information, so the number of feature sets is the same as the number of categories of sample label information. After the target features of the unit target image data are extracted, the target features are compared for similarity with the feature sets of the output layer in the convolutional neural network model. The similarity can be measured by a distance metric: the shorter the distance between two features, the greater the similarity between them. The distance metric may be the Euclidean distance, the Mahalanobis distance, or the Hamming distance.
And acquiring label information corresponding to the feature with the maximum similarity in the feature set as first label information matched with the unit target image data.
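A sketch of this nearest-feature matching with the Euclidean distance, assuming NumPy (the dictionary layout of the labelled feature sets is an assumption):

    import numpy as np

    def match_label(target_feature, feature_sets):
        # Compare the target feature against every labelled feature set;
        # a shorter Euclidean distance means a greater similarity, so the
        # label of the nearest stored feature is returned.
        best_label, best_distance = None, float("inf")
        for label, features in feature_sets.items():
            for feature in features:
                distance = np.linalg.norm(target_feature - feature)
                if distance < best_distance:
                    best_label, best_distance = label, distance
        return best_label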
In one embodiment, when the processor 1001 performs the steps of extracting target special effect processing information corresponding to the target label information set from the image special effect library, performing special effect processing on the target image data according to the target special effect processing information to obtain target special effect image data, and displaying the target special effect image data, the following steps are specifically performed:
extracting target special effect processing information corresponding to the first label information in the target label information set from the image special effect library;
searching a first image area where the unit target image data is located, and adding a multimedia material in the target special effect processing information corresponding to the first label information to the first image area;
and determining the target image data added with the multimedia material as the target special effect image data, and displaying the target special effect image data.
In one embodiment, the processor 1001 further performs the steps of:
acquiring auxiliary information corresponding to the target image data;
acquiring second label information matched with the auxiliary information, and adding the second label information to the target label information set;
the auxiliary information comprises at least one of an environment parameter, an equipment state parameter and an image remark information keyword;
the processor 1001, before performing the step of determining the target image data to which the multimedia material has been added as the target special effect image data and displaying the target special effect image data, further performs the steps of:
extracting target special effect processing information corresponding to the second label information in the target label information set from the image special effect library;
and identifying a second image area in the target image data, which is associated with the auxiliary information, and adding the multimedia material in the target special effect processing information corresponding to the second label information to the second image area.
In one embodiment, when determining the target image data to which the multimedia material is added as the target special effect image data and displaying the target special effect image data, the processor 1001 specifically performs the following steps:
and if the target special effect processing information further comprises image adjustment parameters, performing image parameter adjustment on the target image data according to the image adjustment parameters, determining the target image data added with the multimedia material and subjected to parameter adjustment as the target special effect image data, and displaying the target special effect image data.
In one embodiment, the processor 1001 further performs the steps of:
if a special effect switching instruction is obtained, randomly selecting target special effect processing information in the image special effect library to serve as random target special effect processing information, and performing special effect updating processing on the target special effect image data according to the random target special effect processing information to obtain updated target special effect image data.
In one embodiment, the processor 1001 further performs the steps of:
acquiring sample image data and sample label information corresponding to the sample image data, wherein the sample label information is used for marking the characteristic category of the sample image data;
and constructing the convolutional neural network model according to the mapping relation between the sample image data and the sample label information.
According to the embodiment of the present invention, target image data is acquired and input into a convolutional neural network model; a target label information set matched with the target image data is acquired from the convolutional neural network model through its classification function, where the label information in the target label information set is used to mark the feature categories of the target image data; target special effect processing information corresponding to the target label information set is extracted from an image special effect library, special effect processing is performed on the target image data according to the target special effect processing information to obtain target special effect image data, and the target special effect image data is displayed. Thus, throughout the special effect processing of the target image data, the feature categories of the target image data are recognized automatically and the special effect processing is performed automatically, without manual participation, which avoids the tedious steps of manual image special effect processing and improves the efficiency of image data processing. Meanwhile, a plurality of feature categories in the target image data are recognized and the target special effect processing information corresponding to the plurality of pieces of label information in the target label information set is applied respectively, which enriches the special effect processing modes and improves the special effect processing effect.
It should be understood that the image processing apparatus 1000 described in the embodiment of the present invention may perform the description of the image processing method in the embodiment corresponding to any one of fig. 2 to fig. 6, and may also perform the description of the image processing apparatus 1 in the embodiment corresponding to fig. 7 or fig. 8, which is not repeated herein. In addition, the beneficial effects of the same method are not described in detail.
Further, here, it is to be noted that: an embodiment of the present invention further provides a computer storage medium, where the computer program executed by the image processing apparatus 1 mentioned above is stored in the computer storage medium, and the computer program includes program instructions, and when the processor executes the program instructions, the description of the image processing method in the embodiment corresponding to any one of fig. 2 to fig. 6 can be executed, and therefore, details will not be repeated here. In addition, the beneficial effects of the same method are not described in detail. For technical details not disclosed in the embodiments of the computer storage medium to which the present invention relates, reference is made to the description of the method embodiments of the present invention.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), or the like.
The above disclosure is only of preferred embodiments of the present invention and certainly cannot be taken to limit the scope of rights of the present invention; equivalent variations made according to the appended claims therefore still fall within the scope of the present invention.

Claims (14)

1. An image processing method, comprising:
acquiring target image data, and inputting the target image data into a convolutional neural network model;
identifying a target object contour in the target image data, and dividing the target image data into at least one unit target image data according to the target object contour; each unit target image data is an image area having a unique property in the target image data;
in the convolutional neural network model, acquiring first label information matched with the unit target image data, and adding the first label information to a target label information set; the first label information is used for marking the characteristic category of the unit target image data;
and extracting target special effect processing information corresponding to each piece of first label information in the target label information set in an image special effect library, respectively carrying out special effect processing on a first image area where each unit of target image data is located according to the target special effect processing information to obtain target special effect image data, and displaying the target special effect image data.
2. The method according to claim 1, wherein the obtaining, in the convolutional neural network model, first tag information that matches the unit target image data includes:
inputting the unit target image data into the convolutional neural network model;
extracting the target characteristics of the unit target image data through convolution operation and pooling operation;
comparing the similarity of the target feature with a feature set in the convolutional neural network model;
and acquiring label information corresponding to the feature with the maximum similarity in the feature set as first label information matched with the unit target image data.
3. The method according to claim 2, wherein the performing, according to the target special effect processing information, special effect processing on the first image area where each unit of target image data is located, respectively, to obtain target special effect image data, and displaying the target special effect image data includes:
searching a first image area where the unit target image data is located, and adding a multimedia material in target special effect processing information corresponding to the first label information to the first image area;
and determining the target image data added with the multimedia material as the target special effect image data, and displaying the target special effect image data.
4. The method of claim 3, further comprising:
acquiring auxiliary information corresponding to the target image data;
acquiring second label information matched with the auxiliary information, and adding the second label information to the target label information set;
the auxiliary information comprises at least one of an environment parameter, an equipment state parameter and an image remark information keyword;
before the step of determining the target image data to which the multimedia material is added as the target special effect image data and displaying the target special effect image data, the method further includes:
extracting target special effect processing information corresponding to the second label information in the target label information set from the image special effect library;
and identifying a second image area in the target image data, which is associated with the auxiliary information, and adding the multimedia material in the target special effect processing information corresponding to the second label information to the second image area.
5. The method according to claim 3, wherein the determining target image data to which the multimedia material has been added as the target special effects image data and displaying the target special effects image data comprises:
and if the target special effect processing information further comprises image adjustment parameters, performing image parameter adjustment on the target image data according to the image adjustment parameters, determining the target image data added with the multimedia material and subjected to parameter adjustment as the target special effect image data, and displaying the target special effect image data.
6. The method of claim 1, further comprising:
if a special effect switching instruction is obtained, randomly selecting special effect processing information in the image special effect library as random special effect processing information, and performing special effect updating processing on the target special effect image data according to the random special effect processing information to obtain updated target special effect image data.
7. The method of claim 1, further comprising:
acquiring sample image data and sample label information corresponding to the sample image data, wherein the sample label information is used for marking the characteristic category of the sample image data;
and constructing the convolutional neural network model according to the mapping relation between the sample image data and the sample label information.
8. An image processing apparatus characterized by comprising:
the first input module is used for acquiring target image data and inputting the target image data into a convolutional neural network model;
a target label obtaining module, configured to obtain a target label information set matched with the target image data in the convolutional neural network model, where label information in the target label information set is used to mark a feature type of the target image data;
and the special effect processing module is used for extracting target special effect processing information corresponding to the target label information set in an image special effect library, carrying out special effect processing on the target image data according to the target special effect processing information to obtain target special effect image data, and displaying the target special effect image data.
9. The apparatus of claim 8, wherein the target tag obtaining module comprises:
an image segmentation unit, configured to identify a target object contour in the target image data, and segment the target image data into at least one unit target image data according to the target object contour;
a first tag obtaining unit, configured to input the unit target image data into the convolutional neural network model, obtain first tag information that matches the unit target image data in the convolutional neural network model, and add the first tag information to the target tag information set.
10. The apparatus of claim 9, wherein the first tag obtaining unit comprises:
a second input subunit, configured to input the unit target image data into the convolutional neural network model;
a feature extraction subunit, configured to extract a target feature of the unit target image data through convolution operation and pooling operation;
the comparison subunit is used for carrying out similarity comparison on the target feature and a feature set in the convolutional neural network model;
and the determining subunit is configured to acquire, in the feature set, tag information corresponding to a feature with the largest similarity as first tag information matched with the unit target image data.
11. The apparatus of claim 10, wherein the special effects processing module comprises:
a first extracting unit configured to extract, in the image special effect library, target special effect processing information corresponding to the first tag information in the target tag information set;
the first adding unit is used for searching a first image area where the unit target image data is located and adding a multimedia material in the target special effect processing information corresponding to the first label information to the first image area;
and the display unit is used for determining the target image data added with the multimedia material as the target special effect image data and displaying the target special effect image data.
12. The apparatus of claim 11, further comprising:
the auxiliary information acquisition module is used for acquiring auxiliary information corresponding to the target image data;
the second tag acquisition module is used for acquiring second tag information matched with the auxiliary information and adding the second tag information to the target tag information set;
the auxiliary information comprises at least one of an environment parameter, an equipment state parameter and an image remark information keyword;
the special effect processing module further comprises:
a second extraction unit configured to extract, in the image special effect library, target special effect processing information corresponding to the second tag information in the target tag information set;
and the second adding unit is used for identifying a second image area in the target image data, which is associated with the auxiliary information, and adding the multimedia material in the target special effect processing information corresponding to the second label information to the second image area.
13. An image processing apparatus characterized by comprising: a processor and a memory;
the processor is coupled to a memory, wherein the memory is configured to store program code and the processor is configured to invoke the program code to perform the method of any of claims 1-7.
14. A computer storage medium, characterized in that the computer storage medium stores a computer program comprising program instructions which, when executed by a processor, perform the method according to any one of claims 1-7.
CN201711243948.XA 2017-11-30 2017-11-30 Image processing method and device Active CN107993191B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711243948.XA CN107993191B (en) 2017-11-30 2017-11-30 Image processing method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711243948.XA CN107993191B (en) 2017-11-30 2017-11-30 Image processing method and device

Publications (2)

Publication Number Publication Date
CN107993191A CN107993191A (en) 2018-05-04
CN107993191B true CN107993191B (en) 2023-03-21

Family

ID=62034835

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711243948.XA Active CN107993191B (en) 2017-11-30 2017-11-30 Image processing method and device

Country Status (1)

Country Link
CN (1) CN107993191B (en)

Families Citing this family (29)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108805838B (en) * 2018-06-05 2021-03-02 Oppo广东移动通信有限公司 Image processing method, mobile terminal and computer readable storage medium
CN108765278B (en) * 2018-06-05 2023-04-07 Oppo广东移动通信有限公司 Image processing method, mobile terminal and computer readable storage medium
CN108764370B (en) * 2018-06-08 2021-03-12 Oppo广东移动通信有限公司 Image processing method, image processing device, computer-readable storage medium and computer equipment
CN108846351A (en) * 2018-06-08 2018-11-20 Oppo广东移动通信有限公司 Image processing method, device, electronic equipment and computer readable storage medium
CN108765423B (en) * 2018-06-20 2020-07-28 北京七鑫易维信息技术有限公司 Convolutional neural network training method and device
CN109102484B (en) * 2018-08-03 2021-08-10 北京字节跳动网络技术有限公司 Method and apparatus for processing image
CN109525872B (en) * 2018-09-10 2022-01-04 杭州芯讯科技有限公司 Display screen preview system and preview method
CN109325443B (en) * 2018-09-19 2021-09-17 南京航空航天大学 Face attribute identification method based on multi-instance multi-label deep migration learning
CN109151318B (en) * 2018-09-28 2020-12-15 成都西纬科技有限公司 Image processing method and device and computer storage medium
CN109389660A (en) * 2018-09-28 2019-02-26 百度在线网络技术(北京)有限公司 Image generating method and device
CN111107259B (en) * 2018-10-25 2021-10-08 阿里巴巴集团控股有限公司 Image acquisition method and device and electronic equipment
CN109167936A (en) * 2018-10-29 2019-01-08 Oppo广东移动通信有限公司 A kind of image processing method, terminal and storage medium
CN109710255B (en) * 2018-12-24 2022-07-12 网易(杭州)网络有限公司 Special effect processing method, special effect processing device, electronic device and storage medium
CN110035227A (en) * 2019-03-25 2019-07-19 维沃移动通信有限公司 Special effect display methods and terminal device
CN110163810B (en) * 2019-04-08 2023-04-25 腾讯科技(深圳)有限公司 Image processing method, device and terminal
CN110008922B (en) * 2019-04-12 2023-04-18 腾讯科技(深圳)有限公司 Image processing method, device, apparatus, and medium for terminal device
CN110377768B (en) * 2019-06-10 2022-03-08 万翼科技有限公司 Intelligent graph recognition system and method
CN112116690B (en) * 2019-06-19 2023-07-07 腾讯科技(深圳)有限公司 Video special effect generation method, device and terminal
CN110708594B (en) * 2019-09-26 2022-03-29 三星电子(中国)研发中心 Content image generation method and system
CN110807728B (en) 2019-10-14 2022-12-13 北京字节跳动网络技术有限公司 Object display method and device, electronic equipment and computer-readable storage medium
CN113139893B (en) * 2020-01-20 2023-10-03 北京达佳互联信息技术有限公司 Image translation model construction method and device and image translation method and device
CN111368127B (en) * 2020-03-06 2023-03-24 腾讯科技(深圳)有限公司 Image processing method, image processing device, computer equipment and storage medium
CN111667562B (en) * 2020-05-07 2023-07-28 深圳思为科技有限公司 Picture material-based dynamic effect interface generation method and device
CN111626258B (en) * 2020-06-03 2024-04-16 上海商汤智能科技有限公司 Sign-in information display method and device, computer equipment and storage medium
CN112927349B (en) * 2021-02-22 2024-03-26 北京市商汤科技开发有限公司 Three-dimensional virtual special effect generation method and device, computer equipment and storage medium
CN113473017A (en) * 2021-07-01 2021-10-01 北京字跳网络技术有限公司 Image processing method, device, equipment and storage medium
CN113473019A (en) * 2021-07-01 2021-10-01 北京字跳网络技术有限公司 Image processing method, device, equipment and storage medium
CN113873168A (en) * 2021-10-27 2021-12-31 维沃移动通信有限公司 Shooting method, shooting device, electronic equipment and medium
CN114416260A (en) * 2022-01-20 2022-04-29 北京字跳网络技术有限公司 Image processing method, image processing device, electronic equipment and storage medium

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2008059006A (en) * 2006-08-29 2008-03-13 Dainippon Printing Co Ltd Image synthesizing apparatus, program, recording medium
CN103810504A (en) * 2014-01-14 2014-05-21 三星电子(中国)研发中心 Image processing method and device
CN105049959A (en) * 2015-07-08 2015-11-11 腾讯科技(深圳)有限公司 Multimedia file playing method and device
CN107025457A (en) * 2017-03-29 2017-08-08 腾讯科技(深圳)有限公司 A kind of image processing method and device
CN107220667A (en) * 2017-05-24 2017-09-29 北京小米移动软件有限公司 Image classification method, device and computer-readable recording medium

Also Published As

Publication number Publication date
CN107993191A (en) 2018-05-04

Similar Documents

Publication Publication Date Title
CN107993191B (en) Image processing method and device
CN109173263B (en) Image data processing method and device
US10956784B2 (en) Neural network-based image manipulation
US10109051B1 (en) Item recommendation based on feature match
CN107959883B (en) Video editing and pushing method and system and intelligent mobile terminal
CN105354248B Distributed image low-level feature recognition method and system based on gray scale
CN106599925A (en) Plant leaf identification system and method based on deep learning
CN106933867B (en) Image query method and device
CN106096542B (en) Image video scene recognition method based on distance prediction information
CN108121957A Method and device for pushing face-beautification material
CN105117399B (en) Image searching method and device
CN112328823A (en) Training method and device for multi-label classification model, electronic equipment and storage medium
CN110097616B (en) Combined drawing method and device, terminal equipment and readable storage medium
Dantone et al. Augmented faces
CN107315984B (en) Pedestrian retrieval method and device
CN108509567B (en) Method and device for building digital culture content library
CN113762309A (en) Object matching method, device and equipment
CN111191503A (en) Pedestrian attribute identification method and device, storage medium and terminal
CN112200844A (en) Method, device, electronic equipment and medium for generating image
US11200650B1 (en) Dynamic image re-timing
CN106557489B (en) Clothing searching method based on mobile terminal
EP3748460A1 (en) Search system, search method, and program
CN112069342A (en) Image classification method and device, electronic equipment and storage medium
CN112069335A (en) Image classification method and device, electronic equipment and storage medium
CN117132690A (en) Image generation method and related device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant