CN108764208B - Image processing method and device, storage medium and electronic equipment - Google Patents

Image processing method and device, storage medium and electronic equipment

Info

Publication number
CN108764208B
Authority
CN
China
Prior art keywords
image
label
detected
training
scene
Prior art date
Legal status
Expired - Fee Related
Application number
CN201810585679.3A
Other languages
Chinese (zh)
Other versions
CN108764208A (en)
Inventor
陈岩
Current Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201810585679.3A
Publication of CN108764208A
Priority to PCT/CN2019/089914
Application granted
Publication of CN108764208B
Status: Expired - Fee Related


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/20 Scenes; Scene-specific elements in augmented reality scenes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The application relates to an image processing method and apparatus, an electronic device, and a computer-readable storage medium. An image to be detected is acquired, scene recognition is performed on it according to a multi-label classification model, and the labels corresponding to the image are obtained; the multi-label classification model is trained on multi-label images containing multiple scene elements. The labels corresponding to the image to be detected are output as the scene recognition result. Because the multi-label classification model is a scene recognition model trained on multi-label images containing multiple scene elements, the labels corresponding to the multiple scenes in an image containing different scene elements can be output directly and accurately after scene recognition. This improves the accuracy of scene recognition for images containing different scene elements and, at the same time, its efficiency.

Description

Image processing method and device, storage medium and electronic equipment
Technical Field
The present application relates to the field of computer technologies, and in particular, to an image processing method and apparatus, a storage medium, and an electronic device.
Background
With the popularization of mobile terminals and the rapid development of the mobile internet, mobile terminals are used more and more widely. The photographing function has become one of the features users rely on most. During or after photographing, the mobile terminal can perform scene recognition on the image to provide an intelligent experience for the user.
Disclosure of Invention
The embodiment of the application provides an image processing method and device, a storage medium and electronic equipment, which can improve the accuracy of scene recognition on an image.
An image processing method comprising:
acquiring an image to be detected;
carrying out scene recognition on the image to be detected according to a multi-label classification model to obtain a label corresponding to the image to be detected, wherein the multi-label classification model is obtained according to a multi-label image containing various scene elements;
and outputting the label corresponding to the image to be detected as the scene recognition result.
An image processing apparatus, the apparatus comprising:
the image acquisition module is used for acquiring an image to be detected;
the scene recognition module is used for carrying out scene recognition on the image to be detected according to a multi-label classification model to obtain a label corresponding to the image to be detected, and the multi-label classification model is obtained according to a multi-label image containing various scene elements;
and the output module is used for outputting the label corresponding to the image to be detected as the scene recognition result.
A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the image processing method as described above.
An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor performing the steps of the image processing method as described above when executing the computer program.
With the image processing method and apparatus, storage medium and electronic device described above, the image to be detected is acquired, and scene recognition is performed on it according to the multi-label classification model to obtain the corresponding labels; the multi-label classification model is trained on multi-label images containing multiple scene elements. The labels corresponding to the image to be detected are output as the scene recognition result. Because the multi-label classification model is a scene recognition model trained on multi-label images containing multiple scene elements, the labels corresponding to the multiple scenes in an image to be detected containing different scene elements can be output directly and accurately after scene recognition. This improves both the accuracy and the efficiency of scene recognition for such images.
Drawings
In order to illustrate the embodiments of the present application or the technical solutions in the prior art more clearly, the drawings used in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application; for those skilled in the art, other drawings can be obtained from them without creative effort.
FIG. 1 is a diagram of the internal structure of an electronic device in one embodiment;
FIG. 2 is a flow diagram of a method of image processing in one embodiment;
FIG. 3A is a flow chart of a method of image processing in yet another embodiment;
FIG. 3B is a schematic diagram of an embodiment of a neural network;
FIG. 4 is a flowchart of a method for performing scene recognition on an image according to the multi-label classification model to obtain labels corresponding to the image in FIG. 2;
FIG. 5 is a flowchart of an image processing method in yet another embodiment;
FIG. 6 is a diagram showing a configuration of an image processing apparatus according to an embodiment;
FIG. 7 is a schematic diagram showing a configuration of an image processing apparatus according to still another embodiment;
FIG. 8 is a schematic diagram of the scene recognition module of FIG. 6;
FIG. 9 is a block diagram of a partial structure of a cellular phone related to an electronic device provided in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
Fig. 1 is a schematic diagram of the internal structure of an electronic device in one embodiment. As shown in fig. 1, the electronic device includes a processor, a memory and a network interface connected by a system bus. The processor provides computing and control capability and supports the operation of the whole electronic device. The memory stores data, programs and the like; at least one computer program is stored on it and can be executed by the processor to implement the image processing method, suitable for the electronic device, provided by the embodiments of the application. The memory may include a non-volatile storage medium, such as a magnetic disk, an optical disk or a Read-Only Memory (ROM), and a Random Access Memory (RAM). For example, in one embodiment, the memory includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program; the computer program can be executed by the processor to implement the image processing method provided in the following embodiments. The internal memory provides a cached execution environment for the operating system and computer programs in the non-volatile storage medium. The network interface may be an Ethernet card or a wireless network card, etc., for communicating with an external electronic device. The electronic device may be a mobile phone, a tablet computer, a personal digital assistant, a wearable device, etc.
In one embodiment, as shown in fig. 2, an image processing method is provided, which is described by taking its application to the electronic device in fig. 1 as an example, and includes:
step 220, obtaining an image to be detected.
A user takes a photo with an electronic device that has a photographing function, yielding the image to be detected. The image to be detected may be a preview picture or a picture that has been captured and stored on the electronic device. It refers to any image that needs scene recognition, covering both images containing only a single scene element and images containing multiple (two or more) scene elements. Scene elements in an image typically include scenery, beach, blue sky, green grass, snow scene, night scene, darkness, backlighting, sunrise/sunset, fireworks, spotlights, rooms, macro shots, text documents, portraits, babies, cats, dogs, food, etc. Of course, this list is not exhaustive; many other categories of scene elements exist.
Step 240, performing scene recognition on the image to be detected according to the multi-label classification model to obtain the labels corresponding to the image to be detected, where the multi-label classification model is obtained according to multi-label images containing multiple scene elements.
And after the image to be detected is obtained, carrying out scene recognition on the image to be detected. Specifically, scene recognition is performed on the image by adopting a pre-trained multi-label classification model, and labels corresponding to scenes contained in the image are obtained. The multi-label classification model is obtained according to a multi-label image containing a plurality of scene elements. That is, the multi-label classification model is a scene recognition model obtained after performing scene recognition training using an image including a plurality of scene elements. And carrying out scene recognition on the image to be detected through the multi-label classification model to obtain a label corresponding to a scene contained in the image to be detected. For example, a multi-label classification model is used for carrying out scene recognition on an image to be detected which simultaneously comprises a plurality of scene elements such as a beach, a blue sky and a portrait, and then labels of the image to be detected can be directly output as the beach, the blue sky and the portrait. The beach, the blue sky and the portrait are labels corresponding to scenes in the image to be detected.
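For illustration, the following is a minimal sketch of this inference step, assuming a PyTorch-style model whose output layer emits one logit per label; the label set and the function name recognize_scene are illustrative choices, not part of the patent.

```python
import torch

# Illustrative subset of the scene-element categories listed above.
LABELS = ["beach", "blue_sky", "portrait", "dog", "snow_scene"]

def recognize_scene(model: torch.nn.Module, image: torch.Tensor) -> dict:
    """Multi-label scene recognition on one image tensor of shape (1, C, H, W)."""
    model.eval()
    with torch.no_grad():
        logits = model(image)              # shape (1, len(LABELS))
        scores = torch.sigmoid(logits)[0]  # independent per-label confidences
    return {name: float(s) for name, s in zip(LABELS, scores)}
```

On the beach/blue-sky/portrait example above, such a model would ideally return high confidences for those three labels and low confidences for the rest.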
Step 260, outputting the labels corresponding to the image to be detected as the scene recognition result.
After scene recognition is performed on the image to be detected through the multi-label classification model, the labels corresponding to the scenes contained in the image constitute the scene recognition result, which is then output.
In the embodiment of the application, the image needing scene recognition is obtained, scene recognition is performed on it according to the multi-label classification model, and the labels corresponding to the image are obtained; the multi-label classification model is trained on multi-label images containing multiple scene elements. The labels corresponding to the image to be detected are output as the scene recognition result. Because the multi-label classification model is a scene recognition model trained on multi-label images containing multiple scene elements, the labels corresponding to the multiple scenes in an image containing different scene elements can be output directly and accurately after scene recognition. This improves both the accuracy and the efficiency of scene recognition for such images.
In one embodiment, as shown in fig. 3A, before acquiring the image to be detected, the method includes:
step 320, acquiring a multi-label image containing a plurality of scene elements.
An image containing multiple scene elements is acquired. It is called a multi-label image in this embodiment because, after scene recognition is performed on an image containing multiple scenes, each scene corresponds to one label, and all of those labels together constitute the labels of the image, hence a multi-label image.
Step 340, training a multi-label classification model by using the multi-label image containing the scene elements.
A number of multi-label image samples are obtained, and scene recognition is performed on them manually in advance to obtain the labels corresponding to each sample; these are called the standard labels. The images in the multi-label sample set are then used one by one for scene recognition training until the error between the trained recognition result and the standard labels becomes small enough. The multi-label classification model capable of scene recognition on multi-label images is obtained after this training.
In the embodiment of the application, because the multi-label classification model is a scene recognition model obtained by training on multi-label images containing multiple scene elements, the labels corresponding to the multiple scenes in an image can be output directly and accurately after scene recognition is performed on an image containing different scene elements. This improves both the accuracy and the efficiency of multi-label image recognition.
In one embodiment, the multi-label classification model is constructed based on a neural network model.
The specific training method of the multi-label classification model comprises the following steps: inputting a training image containing a background training target and a foreground training target into a neural network to obtain a first loss function reflecting the difference between a first prediction confidence and a first real confidence of each pixel point in a background area in the training image and a second loss function reflecting the difference between a second prediction confidence and a second real confidence of each pixel point in a foreground area in the training image; the first prediction confidence coefficient is the confidence coefficient that a certain pixel point in a background area in a training image predicted by adopting a neural network belongs to a background training target, and the first real confidence coefficient represents the confidence coefficient that a pixel point labeled in advance in the training image belongs to the background training target; the second prediction confidence coefficient is the confidence coefficient that a certain pixel point in a foreground region in the training image predicted by adopting the neural network belongs to the foreground training target, and the second real confidence coefficient represents the confidence coefficient that a pixel point labeled in advance in the training image belongs to the foreground training target;
weighting and summing the first loss function and the second loss function to obtain a target loss function;
and adjusting parameters of the neural network according to the target loss function, and training the neural network to finally obtain the multi-label classification model. The background training target of the training image has a corresponding label, and the foreground training target also has a label.
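In code, the weighted summation of the two confidence losses could look like the sketch below. Binary cross-entropy and the weight values alpha and beta are assumptions: the patent only states that each loss reflects the difference between a predicted and a true confidence and that the two losses are weighted and summed.

```python
import torch
import torch.nn.functional as F

def confidence_loss(pred_conf: torch.Tensor, true_conf: torch.Tensor) -> torch.Tensor:
    # Per-pixel difference between predicted and pre-labeled (true) confidence.
    # Binary cross-entropy is an assumed choice of distance measure.
    return F.binary_cross_entropy(pred_conf, true_conf)

def target_loss(first_loss: torch.Tensor, second_loss: torch.Tensor,
                alpha: float = 0.5, beta: float = 0.5) -> torch.Tensor:
    # Weighted summation of the background (first) and foreground (second) losses;
    # alpha and beta are hyperparameters the patent does not fix.
    return alpha * first_loss + beta * second_loss
```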
FIG. 3B is a block diagram of a neural network model in accordance with an embodiment. As shown in fig. 3B, an input layer of the neural network receives a training image with an image category label, performs feature extraction through a base network (e.g., a CNN), and outputs the extracted image features to a feature layer. The feature layer performs category detection on the background training target to obtain a first loss function, performs category detection on the foreground training target according to the image features to obtain a second loss function, and performs position detection on the foreground training target according to the foreground region to obtain a position loss function; the first loss function, the second loss function and the position loss function are then weighted and summed to obtain the target loss function. The neural network may be a convolutional neural network, comprising a data input layer, convolutional layers, activation layers, pooling layers and a fully connected layer. The data input layer preprocesses the raw image data; preprocessing may include de-averaging, normalization, dimensionality reduction and whitening. De-averaging centers each dimension of the input data on 0, pulling the center of the sample back to the origin of the coordinate system. Normalization scales the amplitudes to the same range. Whitening normalizes the amplitude on each feature axis of the data. The convolutional layers perform local correlation over a sliding window. The weights connecting each filter to the data window in a convolutional layer are fixed; each filter attends to one image feature, such as a vertical edge, horizontal edge, color or texture, and together the filters form a feature-extractor set for the whole image. A filter is a weight matrix, which is convolved with the data in different windows. The activation layer applies a nonlinear mapping to the output of the convolutional layer; the activation function may be the ReLU (Rectified Linear Unit). A pooling layer may be sandwiched between successive convolutional layers to compress the amount of data and parameters and reduce overfitting; it may use a max or mean operation to reduce the dimensionality of the data. The fully connected layer sits at the tail of the convolutional neural network, with all neurons between the two layers connected by weights. Some convolutional layers of the network are cascaded to a first confidence output node, some to a second confidence output node, and some to a position output node, so that the classification of the image background can be read from the first confidence output node, the classification of the foreground target from the second confidence output node, and the position of the foreground target from the position output node.
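A minimal sketch of the described architecture is given below, assuming PyTorch. The layer sizes, the numbers of categories, and the (x, y, w, h) position encoding are illustrative assumptions, since the patent specifies the structure only at the level of feature extraction plus three output nodes.

```python
import torch
import torch.nn as nn

class MultiLabelSceneNet(nn.Module):
    """Feature extraction (convolution + ReLU + pooling) feeding three heads:
    background-class confidences, foreground-class confidences, and a
    position estimate for the foreground target."""
    def __init__(self, num_bg: int = 10, num_fg: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.bg_head = nn.Linear(32, num_bg)   # first confidence output node
        self.fg_head = nn.Linear(32, num_fg)   # second confidence output node
        self.pos_head = nn.Linear(32, 4)       # position output node (x, y, w, h)

    def forward(self, x: torch.Tensor):
        feat = self.features(x).flatten(1)
        return (torch.sigmoid(self.bg_head(feat)),
                torch.sigmoid(self.fg_head(feat)),
                self.pos_head(feat))
```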
In particular, Artificial Neural Networks (ANNs) are also referred to as Neural Networks (NNs) or connection models. An ANN abstracts the neuron network of the human brain from the perspective of information processing, establishes a simple model, and forms different networks according to different connection modes. In engineering and academia it is often simply called a neural network or neural-like network. In short, an artificial neural network is a mathematical model that processes information using a structure similar to the brain's synaptic connections.
Neural networks are commonly used for classification, e.g., spam recognition, or distinguishing cats from dogs in images. A machine that automatically classifies input variables in this way is called a classifier. The input to a classifier is a vector of numerical values called a feature vector. Before a classifier can be used it must be trained; that is, the neural network must be trained first.
Training an artificial neural network relies on the back-propagation algorithm. A feature vector is first fed into the input layer and an output is computed through the network. When the output layer finds that this output does not match the correct class, the last layer of neurons adjusts its parameters; it also causes the second-to-last layer connected to it to adjust its parameters, and so on, layer by layer, backwards through the network. The adjusted network is then tested on the samples again, and if the output is still wrong the next round of backward adjustment follows, until the result output by the neural network agrees with the correct result as closely as possible.
In an embodiment of the application, the neural network model comprises an input layer, a hidden layer and an output layer. A feature vector is extracted from a multi-label image containing multiple scene elements and fed into the hidden layer to compute the value of the loss function, and the parameters of the neural network model are adjusted according to the loss function so that it converges, thereby training the neural network model into the multi-label classification model. The multi-label classification model can perform scene recognition on an input image, obtain the labels of every scene contained in the image, and output them as the scene recognition result. The target loss function is obtained by a weighted summation of the first loss function, corresponding to the background training target, and the second loss function, corresponding to the foreground training target; adjusting the parameters of the neural network according to this target loss function lets the trained multi-label classification model subsequently identify the labels of the background class and the foreground target at the same time, so more information is obtained and recognition efficiency is improved.
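The training procedure just described might be sketched as follows, reusing the target_loss and confidence_loss helpers from the earlier sketch; the optimizer, learning rate, and data-loader format are assumptions made for illustration.

```python
import torch

def train(model, loader, epochs: int = 10, lr: float = 1e-3):
    """Adjust the network parameters against the target loss until it converges."""
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(epochs):
        for image, bg_true, fg_true in loader:  # pre-labeled multi-label samples
            bg_pred, fg_pred, _pos = model(image)
            loss = target_loss(confidence_loss(bg_pred, bg_true),
                               confidence_loss(fg_pred, fg_true))
            optimizer.zero_grad()
            loss.backward()   # back-propagate the error
            optimizer.step()  # adjust parameters layer by layer
```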
In an embodiment, as shown in fig. 4, step 240, performing scene recognition on an image to be detected according to a multi-label classification model to obtain a label corresponding to the image to be detected, includes:
step 242, performing scene recognition on the image to be detected according to the multi-label classification model to obtain an initial label of the image to be detected and a confidence coefficient corresponding to the initial label;
step 244, determining whether the confidence of the initial label is greater than a preset threshold;
Step 246, if so, taking the initial labels whose confidence is greater than the preset threshold as the labels corresponding to the image to be detected.
With the multi-label classification model obtained through training, the output of image scene recognition may in practice still contain a certain error, which therefore needs to be reduced further. Generally, when scene recognition is performed with the trained multi-label classification model on an image to be detected containing multiple scene elements, several initial labels of the image are obtained together with the confidence corresponding to each. For example, after scene recognition on an image to be detected containing a beach, a blue sky and a portrait, the confidence that an initial label of the image is beach is 0.6, that it is blue sky 0.7, portrait 0.8, dog 0.4 and snow scene 0.3.
The initial labels of the recognition result are then screened; specifically, it is judged whether the confidence of each initial label is greater than a preset threshold. The preset threshold may be a confidence threshold derived from a large number of training samples during the earlier training of the multi-label classification model, chosen at the point where the loss function is small and the model's results are close to the ground truth. For example, if the confidence threshold obtained from a large number of training samples is 0.5, then in the above example every initial label whose confidence is greater than the preset threshold is taken as a label of the image. The labels obtained for the image to be detected are thus beach, blue sky and portrait, while the two interference items whose confidences fall below the threshold, dog and snow scene, are discarded.
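Using the confidences from the example above, this screening step reduces to a simple comparison against the preset threshold; a plain-Python sketch:

```python
def filter_labels(scores: dict, threshold: float = 0.5) -> list:
    """Keep only the initial labels whose confidence exceeds the preset threshold."""
    return [label for label, conf in scores.items() if conf > threshold]

scores = {"beach": 0.6, "blue_sky": 0.7, "portrait": 0.8,
          "dog": 0.4, "snow_scene": 0.3}
print(filter_labels(scores))  # ['beach', 'blue_sky', 'portrait']
```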
In the embodiment of the application, scene recognition is performed on the image to be detected according to the multi-label classification model to obtain the initial labels of the image and the confidence corresponding to each. Because an initial label obtained by scene recognition is not necessarily a real label of the image, the confidence of each initial label is used to screen them, and the initial labels whose confidence exceeds the threshold are retained as the scene recognition result of the image. This improves the accuracy of the scene recognition result to a certain extent.
In one embodiment, each initial label corresponds to a confidence in the range [0, 1].
Specifically, since the multi-label classification model is a scene recognition model trained on multi-label images containing multiple scene elements, the labels corresponding to the multiple scenes in an image to be detected containing different scene elements can be output directly and accurately after scene recognition. The recognition process for each label in the multi-label classification model is independent, so the probability of each recognized label can lie anywhere in [0, 1]. In the embodiment of the application, the recognition processes of different labels do not affect one another, so every scene contained in the image to be detected can be recognized comprehensively, without omission.
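A per-label sigmoid is the usual way to obtain such independent [0, 1] confidences; the patent does not name the activation, so the comparison below with softmax (which would couple the labels by forcing their scores to sum to 1) is an illustrative assumption.

```python
import torch

logits = torch.tensor([2.0, 1.5, -0.5])
print(torch.sigmoid(logits))         # each confidence lies in [0, 1] independently
print(torch.softmax(logits, dim=0))  # softmax would instead couple the labels
```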
In one embodiment, as shown in fig. 5, after outputting the label corresponding to the image to be detected as the result of scene recognition, the method includes:
step 520, acquiring position information of the image to be detected during shooting;
Step 540, correcting the scene recognition result according to the position information to obtain the corrected final scene recognition result.
Specifically, the electronic device generally records the location of each shot, typically recording address information through the Global Positioning System (GPS). The address information recorded by the electronic device is acquired, and the position information of the image to be detected is obtained from it. Corresponding scene types, and a weight for each scene type, are matched to different address information in advance; specifically, this matching can follow the results of a statistical analysis of a large number of image materials. For example, such analysis may show that when the address information reads "XXX steppe", the weight of "green grass" for "steppe" is 9, "snow scene" 7, "landscape" 4, "blue sky" 6 and "beach" -8, with weights ranging over [-10, 10]. A larger weight indicates a higher probability of that scene appearing in the image, and a smaller weight a lower probability. The scene recognition result can therefore be corrected according to the address information at shooting time and the probabilities of the scenes matched to it, giving the corrected final result. For example, if the address information of a picture is "XXX steppe", the scenes with higher weights for this "XXX steppe" are "green grass", "snow scene" and "blue sky", so these scenes are the more probable ones. The scene recognition result is corrected accordingly: if "green grass", "snow scene" or "blue sky" appears in the result, it can be kept in the final result; if a "beach" scene appears, it is filtered out according to the shooting address information, removing a scene label that is incorrect and inconsistent with reality.
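This correction step can be pictured as a weight lookup followed by filtering. The sketch below uses the steppe example above; the cutoff of 0 is a hypothetical rule the patent does not specify.

```python
# Scene weights in [-10, 10] per location keyword, as in the steppe example.
LOCATION_WEIGHTS = {
    "steppe": {"green_grass": 9, "snow_scene": 7, "landscape": 4,
               "blue_sky": 6, "beach": -8},
}

def correct_by_location(labels: list, location: str, cutoff: int = 0) -> list:
    """Drop recognized labels that are implausible at the shooting location."""
    weights = LOCATION_WEIGHTS.get(location, {})
    return [lab for lab in labels if weights.get(lab, cutoff) >= cutoff]

print(correct_by_location(["green_grass", "blue_sky", "beach"], "steppe"))
# ['green_grass', 'blue_sky'] -- 'beach' is filtered out
```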
In the embodiment of the application, the position information recorded when the image to be detected was shot is obtained, and the scene recognition result is corrected according to it to obtain the corrected final result. The scene types associated with the shooting address of the image can thus calibrate the scene recognition result, which ultimately improves the accuracy of scene detection.
In one embodiment, after outputting the label corresponding to the image to be detected as the result of scene recognition, the method further includes:
and performing image processing corresponding to the scene recognition result on the image to be detected according to the scene recognition result.
In the embodiment of the application, after scene recognition is performed on the image to be detected through the multi-label classification model, the labels corresponding to the image are obtained and output as the scene recognition result. The result can serve as the basis for image post-processing: the image to be detected can be processed in a targeted way according to it, greatly improving image quality. For example, if the scene type of the image is identified as a night scene, the image can be processed in a mode suited to night scenes, such as increasing brightness; if it is identified as backlit, a processing mode suited to backlighting can be used. And if the image is identified with multiple labels, for example beach, green grass and blue sky, the beach region can be processed in a way suited to beaches, the green-grass region in a way suited to green grass, and the blue-sky region in a way suited to blue sky, so that the whole image looks its best.
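Region-wise processing of such a multi-label image could be sketched as below; the enhancement operations and the boolean region masks are illustrative stand-ins for whatever scene-specific processing the device actually applies.

```python
import numpy as np

def enhance_night(region: np.ndarray) -> np.ndarray:
    return np.clip(region + 30.0, 0, 255)   # e.g., raise brightness

def enhance_beach(region: np.ndarray) -> np.ndarray:
    return np.clip(region * 1.1, 0, 255)    # e.g., boost intensity slightly

ENHANCERS = {"night_scene": enhance_night, "beach": enhance_beach}

def process_by_scene(image: np.ndarray, regions: dict) -> np.ndarray:
    """Apply, to each region (label -> boolean mask), the processing
    matched to its recognized scene label."""
    out = image.astype(np.float32)
    for label, mask in regions.items():
        if label in ENHANCERS:
            out[mask] = ENHANCERS[label](out[mask])
    return out.astype(np.uint8)
```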
In a specific embodiment, an image processing method is provided, which is described by taking the application of the method to the electronic device in fig. 1 as an example, and includes:
(1) acquiring a multi-label image containing multiple scene elements, and training a neural network model with it to obtain the multi-label classification model; that is, the multi-label classification model is built on a neural network framework;
(2) performing scene recognition on the image to be detected according to the multi-label classification model to obtain the initial labels of the image and the confidence corresponding to each initial label;
(3) judging whether the confidence of each initial label is greater than a preset threshold; if so, taking the initial labels whose confidence is greater than the preset threshold as the labels corresponding to the image, and outputting them as the scene recognition result;
(4) acquiring the position information of the image to be detected at shooting time, and correcting the scene recognition result according to the position information to obtain the corrected final scene recognition result;
(5) performing, on the image to be detected and according to the scene recognition result, the image processing corresponding to that result, to obtain the processed image.
In the embodiment of the application, because the multi-label classification model is a scene recognition model trained on multi-label images containing multiple scene elements, the labels corresponding to the multiple scenes in an image can be output directly and accurately after scene recognition is performed on an image to be detected containing different scene elements. This improves both the accuracy and the efficiency of scene recognition for such images. The scene recognition result is further corrected according to the position information recorded when the image was shot, yielding the corrected final result; the scene types associated with the shooting address can thus calibrate the recognition result and ultimately improve detection accuracy. Finally, the scene recognition result can serve as the basis for image post-processing: targeted image processing applied according to the result greatly improves image quality.
In one embodiment, as shown in fig. 6, there is provided an image processing apparatus 600, the apparatus comprising: an image acquisition module 610, a scene recognition module 620, and an output module 630. Wherein,
an image obtaining module 610, configured to obtain an image to be detected;
the scene recognition module 620 is configured to perform scene recognition on an image to be detected according to a multi-label classification model, so as to obtain a label corresponding to the image to be detected, where the multi-label classification model is obtained according to a multi-label image including multiple scene elements;
and the output module 630 is configured to output a label corresponding to the image to be detected as a result of scene recognition.
In one embodiment, as shown in fig. 7, there is provided an image processing apparatus 600, the apparatus further comprising:
a multi-label image obtaining module 640, configured to obtain a multi-label image including multiple scene elements;
a multi-label classification model training module 650 for training a multi-label classification model using a multi-label image containing a plurality of scene elements.
In one embodiment, as shown in FIG. 8, the scene recognition module 620 includes:
an initial label obtaining module 622, configured to perform scene recognition on an image to be detected according to the multi-label classification model, so as to obtain an initial label of the image to be detected and a confidence corresponding to the initial label;
a judging module 624, configured to judge whether the confidence of the initial tag is greater than a preset threshold;
and an image tag generating module 626, configured to, if yes, use the initial tag with the confidence coefficient greater than the preset threshold as the tag corresponding to the image to be detected.
In one embodiment, an image processing apparatus 600 is provided, which is further configured to acquire the position information at the time the image to be detected was shot, and to correct the scene recognition result according to the position information to obtain the corrected final scene recognition result.
In one embodiment, an image processing apparatus 600 is provided, which is further configured to perform image processing corresponding to a scene recognition result on an image to be detected according to the scene recognition result.
The division of the modules in the image processing apparatus is only for illustration, and in other embodiments, the image processing apparatus may be divided into different modules as needed to complete all or part of the functions of the image processing apparatus.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored, which, when being executed by a processor, implements the steps of the image processing method provided by the above embodiments.
In one embodiment, an electronic device is provided, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, and when the processor executes the computer program, the steps of the image processing method provided in the above embodiments are implemented.
The embodiments of the present application also provide a computer program product which, when run on a computer, causes the computer to execute the steps of the image processing method provided in the foregoing embodiments.
The embodiment of the application also provides an electronic device. The electronic device includes an image processing circuit, which may be implemented using hardware and/or software components and may include various processing units defining an ISP (Image Signal Processing) pipeline. FIG. 9 is a schematic diagram of an image processing circuit in one embodiment. As shown in fig. 9, for convenience of explanation, only the aspects of the image processing technique related to the embodiments of the present application are shown.
As shown in fig. 9, the image processing circuit includes an ISP processor 940 and control logic 950. The image data captured by the imaging device 910 is first processed by the ISP processor 940, which analyzes the image data to capture image statistics that may be used to determine and/or control one or more parameters of the imaging device 910. The imaging device 910 may include a camera having one or more lenses 912 and an image sensor 914. The image sensor 914 may include an array of color filters (e.g., Bayer filters); it may acquire the light intensity and wavelength information captured by each of its imaging pixels and provide a set of raw image data that can be processed by the ISP processor 940. The sensor 920 (e.g., a gyroscope) may provide image-processing parameters for the acquisition (e.g., anti-shake parameters) to the ISP processor 940 based on the sensor 920 interface type. The sensor 920 interface may be an SMIA (Standard Mobile Imaging Architecture) interface, another serial or parallel camera interface, or a combination of the above.
In addition, image sensor 914 may also send raw image data to sensor 920, sensor 920 may provide raw image data to ISP processor 940 based on the type of interface of sensor 920, or sensor 920 may store raw image data in image memory 930.
The ISP processor 940 processes the raw image data pixel by pixel in a variety of formats. For example, each image pixel may have a bit depth of 8, 10, 12 or 14 bits, and the ISP processor 940 may perform one or more image processing operations on the raw image data and collect statistical information about the image data. The image processing operations may be performed with the same or different bit-depth precision.
ISP processor 940 may also receive image data from image memory 930. For example, the sensor 920 interface sends raw image data to the image memory 930, and the raw image data in the image memory 930 is then provided to the ISP processor 940 for processing. The image Memory 930 may be a part of a Memory device, a storage device, or a separate dedicated Memory within an electronic device, and may include a DMA (Direct Memory Access) feature.
Upon receiving raw image data from image sensor 914 interface or from sensor 920 interface or from image memory 930, ISP processor 940 may perform one or more image processing operations, such as temporal filtering. The processed image data may be sent to image memory 930 for additional processing before being displayed. ISP processor 940 receives the processed data from image memory 930 and performs image data processing on the processed data in the raw domain and in the RGB and YCbCr color spaces. The image data processed by ISP processor 940 may be output to display 970 for viewing by a user and/or further processed by a Graphics Processing Unit (GPU). Further, the output of ISP processor 940 may also be sent to image memory 930 and display 970 may read image data from image memory 930. In one embodiment, image memory 930 may be configured to implement one or more frame buffers. In addition, the output of the ISP processor 940 may be transmitted to an encoder/decoder 960 for encoding/decoding the image data. The encoded image data may be saved and decompressed before being displayed on a display 970 device. The encoder/decoder 960 may be implemented by a CPU or GPU or coprocessor.
The statistical data determined by the ISP processor 940 may be transmitted to the control logic 950 unit. For example, the statistical data may include image sensor 914 statistics such as auto-exposure, auto-white balance, auto-focus, flicker detection, black level compensation, lens 912 shading correction, and the like. The control logic 950 may include a processor and/or microcontroller that executes one or more routines (e.g., firmware) that may determine control parameters of the imaging device 910 and control parameters of the ISP processor 940 based on the received statistical data. For example, the control parameters of imaging device 910 may include sensor 920 control parameters (e.g., gain, integration time for exposure control, anti-shake parameters, etc.), camera flash control parameters, lens 912 control parameters (e.g., focal length for focusing or zooming), or a combination of these parameters. The ISP control parameters may include gain levels and color correction matrices for automatic white balance and color adjustment (e.g., during RGB processing), as well as lens 912 shading correction parameters.
Any reference to memory, storage, a database or another medium used herein may include non-volatile and/or volatile memory. Suitable non-volatile memory can include Read-Only Memory (ROM), Programmable ROM (PROM), Electrically Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM) or flash memory. Volatile memory can include Random Access Memory (RAM), which acts as external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus Direct RAM (RDRAM), Direct Rambus Dynamic RAM (DRDRAM) and Rambus Dynamic RAM (RDRAM).
The above embodiments express only several implementations of the present application, and their description is specific and detailed, but they should not be construed as limiting the scope of the application. It should be noted that those skilled in the art can make several variations and improvements without departing from the concept of the present application, all of which fall within its protection scope. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (10)

1. An image processing method, comprising:
acquiring an image to be detected;
scene recognition is carried out on the image to be detected according to a multi-label classification model, a plurality of labels corresponding to the image to be detected are directly obtained, and the multi-label classification model is obtained according to a multi-label image containing a plurality of scene elements; the multi-label classification model is constructed based on a neural network model, and the specific training method of the multi-label classification model comprises the following steps: inputting a training image containing a background training target and a foreground training target into a neural network to obtain a first loss function reflecting the difference between a first prediction confidence and a first real confidence of each pixel point in a background area in the training image and a second loss function reflecting the difference between a second prediction confidence and a second real confidence of each pixel point in a foreground area in the training image; the first prediction confidence coefficient is the confidence coefficient that a certain pixel point in a background area in a training image predicted by adopting a neural network belongs to a background training target, and the first real confidence coefficient represents the confidence coefficient that a pixel point labeled in advance in the training image belongs to the background training target; the second prediction confidence coefficient is the confidence coefficient that a certain pixel point in a foreground region in the training image predicted by adopting a neural network belongs to the foreground training target, and the second real confidence coefficient represents the confidence coefficient that a pixel point labeled in advance in the training image belongs to the foreground training target;
weighting and summing the first loss function and the second loss function to obtain a target loss function;
adjusting parameters of a neural network according to the target loss function, and training the neural network to obtain a multi-label classification model; the background training target of the training image is provided with a corresponding label, and the foreground training target is also provided with a corresponding label;
outputting a plurality of labels corresponding to the image to be detected as a scene recognition result;
and respectively carrying out image processing corresponding to the scene recognition result on the corresponding area in the image to be detected according to the scene recognition result.
2. The method according to claim 1, characterized in that it comprises, before said acquisition of the image to be detected:
acquiring a multi-label image containing a plurality of scene elements;
training the multi-label classification model using the multi-label image comprising the plurality of scene elements.
3. The method according to claim 1, wherein the performing scene recognition on the image to be detected according to the multi-label classification model to obtain the label corresponding to the image to be detected comprises:
carrying out scene recognition on the image to be detected according to the multi-label classification model to obtain an initial label of the image to be detected and a confidence coefficient corresponding to the initial label;
judging whether the confidence of the initial label is greater than a preset threshold value or not;
and if so, taking the initial label with the confidence coefficient larger than a preset threshold value as a label corresponding to the image to be detected.
4. The method of claim 3, wherein the confidence of each of the initial labels lies in the range [0, 1].
5. The method according to claim 1, wherein after outputting the label corresponding to the image to be detected as a result of scene recognition, the method comprises:
acquiring position information of the image to be detected during shooting;
and correcting the scene recognition result according to the position information to obtain a corrected scene recognition final result.
6. An image processing apparatus, characterized in that the apparatus comprises:
the image acquisition module is used for acquiring an image to be detected;
the scene recognition module is used for carrying out scene recognition on the image to be detected according to a multi-label classification model to directly obtain a plurality of labels corresponding to the image to be detected, and the multi-label classification model is obtained according to a multi-label image containing a plurality of scene elements; the multi-label classification model is constructed based on a neural network model, and the specific training method of the multi-label classification model comprises the following steps: inputting a training image containing a background training target and a foreground training target into a neural network to obtain a first loss function reflecting the difference between a first prediction confidence and a first real confidence of each pixel point in a background area in the training image and a second loss function reflecting the difference between a second prediction confidence and a second real confidence of each pixel point in a foreground area in the training image; the first prediction confidence coefficient is the confidence coefficient that a certain pixel point in a background area in a training image predicted by adopting a neural network belongs to a background training target, and the first real confidence coefficient represents the confidence coefficient that a pixel point labeled in advance in the training image belongs to the background training target; the second prediction confidence coefficient is the confidence coefficient that a certain pixel point in a foreground region in the training image predicted by adopting a neural network belongs to the foreground training target, and the second real confidence coefficient represents the confidence coefficient that a pixel point labeled in advance in the training image belongs to the foreground training target; weighting and summing the first loss function and the second loss function to obtain a target loss function; adjusting parameters of a neural network according to the target loss function, and training the neural network to obtain a multi-label classification model; the background training target of the training image is provided with a corresponding label, and the foreground training target is also provided with a corresponding label;
the output module is used for outputting a plurality of labels corresponding to the image to be detected as the scene recognition result;
and the image processing module is used for respectively carrying out image processing corresponding to the scene recognition result on the corresponding area in the image to be detected according to the scene recognition result.
7. The apparatus of claim 6, further comprising:
the multi-label image acquisition module is used for acquiring a multi-label image containing various scene elements;
and the multi-label classification model training module is used for training a multi-label classification model by using a multi-label image containing various scene elements.
8. The apparatus of claim 6, wherein the scene recognition module comprises:
the initial label obtaining module is used for carrying out scene recognition on the image to be detected according to the multi-label classification model to obtain an initial label of the image to be detected and a confidence coefficient corresponding to the initial label;
the judging module is used for judging whether the confidence coefficient of the initial label is greater than a preset threshold value or not;
and the image tag generation module is used for, if so, taking the initial labels whose confidence is greater than the preset threshold as the labels corresponding to the image to be detected.
9. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the image processing method according to any one of claims 1 to 5.
10. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the steps of the image processing method according to any of claims 1 to 5 are implemented by the processor when executing the computer program.
CN201810585679.3A 2018-06-08 2018-06-08 Image processing method and device, storage medium and electronic equipment Expired - Fee Related CN108764208B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201810585679.3A CN108764208B (en) 2018-06-08 2018-06-08 Image processing method and device, storage medium and electronic equipment
PCT/CN2019/089914 WO2019233394A1 (en) 2018-06-08 2019-06-04 Image processing method and apparatus, storage medium and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810585679.3A CN108764208B (en) 2018-06-08 2018-06-08 Image processing method and device, storage medium and electronic equipment

Publications (2)

Publication Number Publication Date
CN108764208A CN108764208A (en) 2018-11-06
CN108764208B true CN108764208B (en) 2021-06-08

Family

ID=64000474

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810585679.3A Expired - Fee Related CN108764208B (en) 2018-06-08 2018-06-08 Image processing method and device, storage medium and electronic equipment

Country Status (2)

Country Link
CN (1) CN108764208B (en)
WO (1) WO2019233394A1 (en)

Families Citing this family (60)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108764208B (en) * 2018-06-08 2021-06-08 Oppo广东移动通信有限公司 Image processing method and device, storage medium and electronic equipment
CN109635701B (en) * 2018-12-05 2023-04-18 宽凳(北京)科技有限公司 Lane passing attribute acquisition method, lane passing attribute acquisition device and computer readable storage medium
CN109657517B (en) * 2018-12-21 2021-12-03 深圳智可德科技有限公司 Miniature two-dimensional code identification method and device, readable storage medium and code scanning gun
US20200210788A1 (en) * 2018-12-31 2020-07-02 Robert Bosch Gmbh Determining whether image data is within a predetermined range that image analysis software is configured to analyze
CN109741288B (en) * 2019-01-04 2021-07-13 Oppo广东移动通信有限公司 Image processing method, image processing device, storage medium and electronic equipment
CN109831629B (en) * 2019-03-14 2021-07-02 Oppo广东移动通信有限公司 Terminal photographing mode adjusting method and device, terminal and storage medium
CN109831628B (en) * 2019-03-14 2021-07-16 Oppo广东移动通信有限公司 Terminal photographing mode adjusting method and device, terminal and storage medium
CN110348291A (en) * 2019-05-28 2019-10-18 华为技术有限公司 A kind of scene recognition method, a kind of scene Recognition device and a kind of electronic equipment
CN110266946B (en) * 2019-06-25 2021-06-25 普联技术有限公司 Photographing effect automatic optimization method and device, storage medium and terminal equipment
CN110796715B (en) * 2019-08-26 2023-11-24 腾讯科技(深圳)有限公司 Electronic map labeling method, device, server and storage medium
CN110704650B (en) * 2019-09-29 2023-04-25 携程计算机技术(上海)有限公司 OTA picture tag identification method, electronic equipment and medium
CN110781834A (en) * 2019-10-28 2020-02-11 上海眼控科技股份有限公司 Traffic abnormality image detection method, device, computer device and storage medium
CN111008145B (en) * 2019-12-19 2023-09-22 中国银行股份有限公司 Test information acquisition method and device
CN111191706A (en) * 2019-12-25 2020-05-22 深圳市赛维网络科技有限公司 Picture identification method, device, equipment and storage medium
CN111125177B (en) * 2019-12-26 2024-01-16 北京奇艺世纪科技有限公司 Method and device for generating data tag, electronic equipment and readable storage medium
CN111128348B (en) * 2019-12-27 2024-03-26 上海联影智能医疗科技有限公司 Medical image processing method, medical image processing device, storage medium and computer equipment
CN111160289A (en) * 2019-12-31 2020-05-15 欧普照明股份有限公司 Method and device for detecting accident of target user and electronic equipment
CN111291800A (en) * 2020-01-21 2020-06-16 青梧桐有限责任公司 House decoration type analysis method and system, electronic device and readable storage medium
CN111212243B (en) * 2020-02-19 2022-05-20 深圳英飞拓智能技术有限公司 Automatic exposure adjusting system for mixed line detection
CN111292331B (en) * 2020-02-23 2023-09-12 华为云计算技术有限公司 Image processing method and device
CN111353549B (en) * 2020-03-10 2023-01-31 创新奇智(重庆)科技有限公司 Image label verification method and device, electronic equipment and storage medium
CN111523390B (en) * 2020-03-25 2023-11-03 杭州易现先进科技有限公司 Image recognition method and augmented reality AR icon recognition system
CN111612034B (en) * 2020-04-15 2024-04-12 中国科学院上海微系统与信息技术研究所 Method and device for determining object recognition model, electronic equipment and storage medium
CN113569593A (en) * 2020-04-28 2021-10-29 京东方科技集团股份有限公司 Intelligent vase system, flower identification and display method and electronic equipment
CN111597921B (en) * 2020-04-28 2024-06-18 深圳市人工智能与机器人研究院 Scene recognition method, device, computer equipment and storage medium
CN111461260B (en) * 2020-04-29 2023-04-18 上海东普信息科技有限公司 Target detection method, device and equipment based on feature fusion and storage medium
CN111709283A (en) * 2020-05-07 2020-09-25 顺丰科技有限公司 Method and device for detecting state of logistics piece
CN113642595B (en) * 2020-05-11 2024-09-17 北京金山数字娱乐科技有限公司 Picture-based information extraction method and device
CN111613212B (en) * 2020-05-13 2023-10-31 携程旅游信息技术(上海)有限公司 Speech recognition method, system, electronic device and storage medium
CN111626353A (en) * 2020-05-26 2020-09-04 Oppo(重庆)智能科技有限公司 Image processing method, terminal and storage medium
CN111709371B (en) * 2020-06-17 2023-12-22 腾讯科技(深圳)有限公司 Classification method, device, server and storage medium based on artificial intelligence
CN112023400B (en) * 2020-07-24 2024-07-26 上海米哈游天命科技有限公司 Altitude map generation method, device, equipment and storage medium
CN111915598B (en) * 2020-08-07 2023-10-13 温州医科大学 Medical image processing method and device based on deep learning
CN114118114A (en) * 2020-08-26 2022-03-01 顺丰科技有限公司 Image detection method, device and storage medium thereof
CN111985449A (en) * 2020-09-03 2020-11-24 深圳壹账通智能科技有限公司 Rescue scene image identification method, device, equipment and computer medium
CN112163110B (en) * 2020-09-27 2023-01-03 Oppo(重庆)智能科技有限公司 Image classification method and device, electronic equipment and computer-readable storage medium
CN112329725B (en) * 2020-11-27 2022-03-25 腾讯科技(深圳)有限公司 Method, device and equipment for identifying elements of road scene and storage medium
CN114647876A (en) * 2020-12-18 2022-06-21 阿里巴巴集团控股有限公司 Data processing method and device, electronic equipment and computer storage medium
CN112651332B (en) * 2020-12-24 2024-08-02 携程旅游信息技术(上海)有限公司 Scene facility identification method, system, equipment and storage medium based on photo library
CN112579587B (en) * 2020-12-29 2024-07-02 纽扣互联(北京)科技有限公司 Data cleaning method and device, equipment and storage medium
CN112686316A (en) * 2020-12-30 2021-04-20 上海掌门科技有限公司 Method and equipment for determining label
CN113065513B (en) * 2021-01-27 2024-07-09 武汉星巡智能科技有限公司 Optimization method, device and equipment for self-training confidence threshold of intelligent camera
CN112906811B (en) * 2021-03-09 2023-04-18 西安电子科技大学 Automatic classification method for images of engineering vehicle-mounted equipment based on Internet of things architecture
CN112926158B (en) * 2021-03-16 2023-07-14 上海设序科技有限公司 General design method based on parameter fine adjustment in industrial machinery design scene
CN113177498B (en) * 2021-05-10 2022-08-09 清华大学 Image identification method and device based on object real size and object characteristics
CN113329173A (en) * 2021-05-19 2021-08-31 Tcl通讯(宁波)有限公司 Image optimization method and device, storage medium and terminal equipment
CN113221800A (en) * 2021-05-24 2021-08-06 珠海大横琴科技发展有限公司 Monitoring and judging method and system for target to be detected
CN113222058B (en) * 2021-05-28 2024-05-10 芯算一体(深圳)科技有限公司 Image classification method, device, electronic equipment and storage medium
CN113222055B (en) * 2021-05-28 2023-01-10 新疆爱华盈通信息技术有限公司 Image classification method and device, electronic equipment and storage medium
CN113065615A (en) * 2021-06-02 2021-07-02 南京甄视智能科技有限公司 Scenario-based edge analysis algorithm issuing method and device and storage medium
CN113554625B (en) * 2021-07-26 2024-10-18 中华全国供销合作总社济南果品研究院 Method and system for detecting water content of fruits and vegetables in fruit and vegetable drying process
CN113628100B (en) * 2021-08-10 2024-07-02 Oppo广东移动通信有限公司 Video enhancement method, device, terminal and storage medium
CN114049420B (en) * 2021-10-29 2022-10-21 马上消费金融股份有限公司 Model training method, image rendering method, device and electronic equipment
CN114155465A (en) * 2021-11-30 2022-03-08 哈尔滨工业大学(深圳) Multi-scene flame detection method and device and storage medium
CN114255381B (en) * 2021-12-23 2023-05-12 北京瑞莱智慧科技有限公司 Training method of image recognition model, image recognition method, device and medium
CN114547361A (en) * 2022-02-24 2022-05-27 特赞(上海)信息科技有限公司 Automatic labeling method and device for commodity materials and storage medium
CN115100419B (en) * 2022-07-20 2023-02-21 中国科学院自动化研究所 Target detection method and device, electronic equipment and storage medium
CN114998357B (en) * 2022-08-08 2022-11-15 长春摩诺维智能光电科技有限公司 Industrial detection method, system, terminal and medium based on multi-information analysis
CN116310665B (en) * 2023-05-17 2023-08-15 济南博观智能科技有限公司 Image environment analysis method, device and medium
CN117671497B (en) * 2023-12-04 2024-05-28 广东筠诚建筑科技有限公司 Engineering construction waste classification method and device based on digital images


Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108764208B (en) * 2018-06-08 2021-06-08 Oppo广东移动通信有限公司 Image processing method and device, storage medium and electronic equipment

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106845549A (en) * 2017-01-22 2017-06-13 珠海习悦信息技术有限公司 A kind of method and device of the scene based on multi-task learning and target identification
CN106951911A (en) * 2017-02-13 2017-07-14 北京飞搜科技有限公司 A kind of quick multi-tag picture retrieval system and implementation method
CN107622281A (en) * 2017-09-20 2018-01-23 广东欧珀移动通信有限公司 Image classification method, device, storage medium and mobile terminal
CN108052966A (en) * 2017-12-08 2018-05-18 重庆邮电大学 Remote sensing images scene based on convolutional neural networks automatically extracts and sorting technique
CN108090497A (en) * 2017-12-28 2018-05-29 广东欧珀移动通信有限公司 Video classification methods, device, storage medium and electronic equipment

Also Published As

Publication number Publication date
WO2019233394A1 (en) 2019-12-12
CN108764208A (en) 2018-11-06

Similar Documents

Publication Publication Date Title
CN108764208B (en) Image processing method and device, storage medium and electronic equipment
CN108921040A (en) Image processing method and device, storage medium, electronic equipment
CN108777815B (en) Video processing method and device, electronic equipment and computer readable storage medium
CN108805103B (en) Image processing method and device, electronic equipment and computer readable storage medium
CN108764370B (en) Image processing method, image processing device, computer-readable storage medium and computer equipment
US11138478B2 (en) Method and apparatus for training, classification model, mobile terminal, and readable storage medium
CN108810418B (en) Image processing method, image processing device, mobile terminal and computer readable storage medium
CN108921161B (en) Model training method and device, electronic equipment and computer readable storage medium
CN108810413B (en) Image processing method and device, electronic equipment and computer readable storage medium
CN108804658B (en) Image processing method and device, storage medium and electronic equipment
CN110572573B (en) Focusing method and device, electronic equipment and computer readable storage medium
CN107833197B (en) Image processing method and device, computer readable storage medium and electronic equipment
CN110473185B (en) Image processing method and device, electronic equipment and computer readable storage medium
CN108961302B (en) Image processing method, image processing device, mobile terminal and computer readable storage medium
CN108805198B (en) Image processing method, image processing device, computer-readable storage medium and electronic equipment
CN110580428A (en) image processing method, image processing device, computer-readable storage medium and electronic equipment
CN110536068B (en) Focusing method and device, electronic equipment and computer readable storage medium
CN110580487A (en) Neural network training method, neural network construction method, image processing method and device
CN108805265B (en) Neural network model processing method and device, image processing method and mobile terminal
CN108875619B (en) Video processing method and device, electronic equipment and computer readable storage medium
CN108765033B (en) Advertisement information pushing method and device, storage medium and electronic equipment
CN108897786B (en) Recommendation method and device of application program, storage medium and mobile terminal
CN109712177B (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
CN108848306B (en) Image processing method and device, electronic equipment and computer readable storage medium
CN110248101B (en) Focusing method and device, electronic equipment and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20210608