CN108021669B - Image classification method and device, electronic equipment and computer-readable storage medium


Info

Publication number
CN108021669B
Authority
CN
China
Prior art keywords
face
level
image
level face
acquiring
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201711270284.6A
Other languages
Chinese (zh)
Other versions
CN108021669A (en)
Inventor
陈德银
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201711270284.6A priority Critical patent/CN108021669B/en
Publication of CN108021669A publication Critical patent/CN108021669A/en
Application granted granted Critical
Publication of CN108021669B publication Critical patent/CN108021669B/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/51Indexing; Data structures therefor; Storage structures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172Classification, e.g. identification

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The application relates to an image classification method and apparatus, an electronic device, and a computer-readable storage medium. A first-level face meeting preset conditions is obtained from an image according to face area, and face recognition is performed on the first-level face to obtain the face category corresponding to it. A second-level face in the image is then obtained and recognized to obtain its corresponding face category, where the second-level face is any face in the image other than the first-level face. Finally, the image is divided into the corresponding face categories according to the face categories of the first-level and second-level faces. The resulting face categories are accurate, a large number of unimportant face categories do not appear, the effectiveness of the classification results is improved, and the actual requirements of users are met.

Description

Image classification method and device, electronic equipment and computer-readable storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to an image classification method and apparatus, an electronic device, and a computer-readable storage medium.
Background
With the popularization of electronic devices and the rapid development of the mobile internet, electronic devices are used ever more widely. The album function has become one of the most common applications on an electronic device and is used with very high frequency. A great number of images are stored in the album of an electronic device, and a conventional album provides various image browsing and classification functions; for example, classifying personal images according to facial features is currently a popular way of displaying images. However, conventional image classification methods generate many useless categories, which wastes resources and does not meet the actual requirements of users.
Disclosure of Invention
The embodiment of the application provides an image classification method and device, electronic equipment and a computer-readable storage medium, which can improve the effectiveness of classification results and better meet the actual requirements of users.
An image classification method, comprising:
acquiring a first-level face meeting preset conditions from an image according to the size of the face area, and performing face recognition on the first-level face to obtain a face category corresponding to the first-level face;
acquiring a second-level face in the image, and performing face recognition on the second-level face to obtain a face category corresponding to the second-level face, wherein the second-level face is a face except the first-level face in the image;
and dividing the image into corresponding face categories according to the face category corresponding to the first-level face and the face category corresponding to the second-level face.
An image classification apparatus, the apparatus comprising:
the first-level face recognition module is used for acquiring a first-level face meeting preset conditions from an image according to the size of the face area, and performing face recognition on the first-level face to obtain a face type corresponding to the first-level face;
the second-level face recognition module is used for acquiring a second-level face in the image and carrying out face recognition on the second-level face to obtain a face type corresponding to the second-level face, wherein the second-level face is a face except the first-level face in the image;
and the image classification module is used for classifying the image into corresponding face classes according to the face class corresponding to the first-level face and the face class corresponding to the second-level face.
An electronic device comprising a memory and a processor, the memory having stored therein a computer program which, when executed by the processor, causes the processor to perform the steps of the image classification method described above.
A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the image classification method as described above.
According to the image classification method and apparatus, the electronic device, and the computer-readable storage medium, a first-level face meeting preset conditions is first obtained from an image according to face area, and face recognition is performed on it to obtain the corresponding face category. A second-level face in the image is then obtained and recognized to obtain its face category, where the second-level face is any face in the image other than the first-level face. Finally, the image is divided into the corresponding face categories according to the face categories of the first-level and second-level faces. Faces are extracted in stages: when the first-level face is extracted, candidates are considered in descending order of face area and must meet the preset conditions, so the conditions for becoming a first-level face are very strict, and face recognition on the first-level face yields its face category. A second-level face is subsequently obtained from the image and recognized to obtain its face category. The resulting face categories are accurate, a large number of unimportant face categories do not appear, the effectiveness of the classification results is improved, and the actual requirements of users are met.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description are only some embodiments of the present application, and those skilled in the art can derive other drawings from them without creative effort.
FIG. 1A is a diagram illustrating an exemplary embodiment of a method for classifying images;
FIG. 1B is a diagram illustrating an internal structure of an electronic device according to an embodiment;
FIG. 2 is a flow diagram of a method of image classification in one embodiment;
FIG. 3 is a flowchart illustrating a method for obtaining a first-level face from an image in descending order of face area in one embodiment;
FIG. 4 is a flowchart of a method for obtaining the face category corresponding to a second-level face in one embodiment;
FIG. 5 is a flow diagram of a method for obtaining the face category corresponding to a second-level face in one embodiment;
FIG. 6 is a flowchart of a method for obtaining the face categories corresponding to a second-level face and a third-level face in one embodiment;
FIG. 7 is a flowchart of a method for obtaining the face category corresponding to a third-level face in one embodiment;
FIG. 8 is a schematic diagram showing the structure of an image classification apparatus according to an embodiment;
FIG. 9 is a schematic structural diagram of the first-level face recognition module in FIG. 8;
FIG. 10 is a schematic structural diagram of the second-level face recognition module in FIG. 8;
FIG. 11 is another schematic structural diagram of the second-level face recognition module in FIG. 8;
FIG. 12 is a block diagram of a partial structure of a mobile phone related to an electronic device provided in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
Fig. 1A is a diagram illustrating an application scenario of the image classification method in an embodiment. As shown in fig. 1A, the application environment includes an electronic device 110 and a server 120 connected via a network. The electronic device 110 stores images, which may be kept in the memory of the electronic device 110 or on an SD (Secure Digital) card built into it. The electronic device 110 may obtain a first-level face meeting a preset condition from an image according to the size of the face area, and perform face recognition on the first-level face to obtain the face category corresponding to the first-level face. It may then acquire a second-level face in the image and perform face recognition on it to obtain the corresponding face category, where the second-level face is any face in the image other than the first-level face. The image is then divided into the corresponding face categories according to the face categories of the first-level and second-level faces. Of course, the image classification method may also be implemented by the electronic device 110 sending an image classification request to the server 120, with classification completed on the server 120 and the result returned to the electronic device 110.
Fig. 1B is a schematic diagram of the internal structure of an electronic device in one embodiment. As shown in fig. 1B, the electronic device includes a processor, a memory, and a network interface connected by a system bus. The processor provides computing and control capability and supports the operation of the whole electronic device. The memory stores data, programs, and the like, including at least one computer program that can be executed by the processor to implement the image classification method provided in the embodiments of the present application. The memory may include a non-volatile storage medium such as a magnetic disk, an optical disk, or a read-only memory (ROM), as well as random access memory (RAM). For example, in one embodiment, the memory includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program, which can be executed by the processor to implement the image classification method provided in the following embodiments. The internal memory provides a cached execution environment for the operating system and computer programs in the non-volatile storage medium. The network interface may be an Ethernet card or a wireless network card, and is used to communicate with external electronic devices. The electronic device may be a mobile phone, a tablet computer, a personal digital assistant, a wearable device, or the like.
In one embodiment, as shown in fig. 2, an image classification method is provided, which is described by taking the method as an example applied to the electronic device in fig. 1A, and includes:
step 202, obtaining a first-level face meeting preset conditions from the image according to the size of the face area, and performing face recognition on the first-level face to obtain a face category corresponding to the first-level face.
The electronic device acquires images to be classified from a local or cloud album, and then obtains from each image, according to the size of the face areas in it, a first-level face meeting the preset conditions. The images to be classified include single-person images, which contain only one face, and multi-person group photo images, which contain several. For a multi-person group photo image, the faces are examined in descending order of area to judge whether each meets the preset conditions; the first face that does is the first-level face. The first-level face is the face of the core person in the image, and when first-level faces are first extracted, only one face per image is taken as the first-level face. Face recognition is performed on the first-level face of each acquired image according to a face recognition algorithm to obtain the face category corresponding to it. The face category is the identity of the person the first-level face is recognized as: for example, if face recognition determines that the identity corresponding to the first-level face is Zhang San, then the face category corresponding to that first-level face is Zhang San.
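The description does not prescribe any data layout or recognizer; as a working assumption, the Python sketches in this and the following embodiments carry each detected face as a small record, with face detection, the feature `embedding`, and the `recognize` identity lookup all supplied by an external face recognition library (the names here are illustrative only):

```python
from dataclasses import dataclass
from typing import Optional, Tuple

import numpy as np


@dataclass
class Face:
    bbox: Tuple[int, int, int, int]  # (x, y, w, h) of the detected face
    area: int                        # w * h, used to order first-level candidates
    embedding: np.ndarray            # feature vector from the assumed recognizer
    level: int = 0                   # set to 1, 2 or 3 as the face is classified
    definition: float = 0.0          # sharpness score, filled in later
    category: Optional[str] = None   # person identity, e.g. "Zhang San"


def recognize(face: Face) -> str:
    """Assumed hook into an external face recognition service that maps a
    face to a person identity (the face category of the method)."""
    raise NotImplementedError  # supplied by the host application
```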
And 204, acquiring a second-level face in the image, and performing face recognition on the second-level face to obtain a face class corresponding to the second-level face, wherein the second-level face is a face except the first-level face in the image.
After the first-level face of each image is obtained from the images to be classified, the second-level faces in the image are obtained. A second-level face is any face in the image other than the first-level face. A single-person image has no face other than the first-level face, but a multi-person group photo image does.
The second-level faces in the image are processed by computing the definition (sharpness) of each and judging whether it meets the preset condition. If it does, face recognition is performed on that second-level face to obtain its corresponding face category. If the definition of a second-level face does not reach the threshold at which the identity corresponding to the face can be recognized, a first-level face similar to the second-level face is sought in the other images in the album; if one is found, the second-level face is classified into the face category corresponding to that first-level face. If no similar first-level face is found, faces similar to the second-level face are sought in the other images in the album and the number of times such faces appear is counted. If the count reaches the set threshold, face recognition is performed on the second-level face to obtain its face category; otherwise the second-level face is identified as a third-level face.
And step 206, dividing the image into corresponding face classes according to the face class corresponding to the first-level face and the face class corresponding to the second-level face.
Through the above steps, the first-level face and the second-level faces in the image are classified into their corresponding face categories. The image is then divided into the face categories into which its first-level and second-level faces were classified. Specifically, if the first-level face and second-level faces of an image are classified into 3 categories, the image will appear in all 3 categories at the same time. For example, suppose an image contains 3 faces, belonging to Zhang San, Li Si, and Wang Wu, and the first-level face of the image is Zhang San, so the face category corresponding to the first-level face is Zhang San. Assuming the definition of the second-level faces other than the first-level face meets the preset condition, they too undergo face recognition and are classified into their corresponding face categories, namely those of Li Si and Wang Wu. After the image is classified, it will therefore be displayed in Zhang San's photo set, Li Si's photo set, and of course Wang Wu's photo set, because the faces of all 3 identities met the required conditions, were recognized, and were classified into the corresponding face categories.
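A minimal sketch of this final step under the assumptions above: once every recognized face in an image carries a category, the image is filed under each of those categories, so one group photo can land in several photo sets.

```python
from typing import Dict, List


def file_image(image_id: str, faces: List[Face],
               albums: Dict[str, List[str]]) -> None:
    # The image joins every face category that any of its recognized
    # faces was assigned to; a three-person group photo thus appears in
    # all three photo sets, as in the Zhang San / Li Si / Wang Wu example.
    for category in {f.category for f in faces if f.category is not None}:
        albums.setdefault(category, []).append(image_id)
```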
In this embodiment, a first-level face meeting the preset conditions is first obtained from the image according to face area, and face recognition is performed on it to obtain the corresponding face category. A second-level face in the image is then obtained and recognized to obtain its face category, where the second-level face is any face in the image other than the first-level face. Finally, the image is divided into the corresponding face categories according to the face categories of the first-level and second-level faces. Faces are extracted in stages: first-level candidates are considered in descending order of face area and must meet the preset conditions, so the conditions for becoming a first-level face are very strict, and face recognition on the first-level face yields its face category; second-level faces are then obtained from the image and recognized to obtain their face categories. The resulting face categories are accurate, a large number of unimportant face categories do not appear, the effectiveness of the classification results is improved, and the actual requirements of users are met.
In one embodiment, as shown in fig. 3, the obtaining a first-level face from an image according to the size of the face area, where the first-level face meets a preset condition, includes:
step 302, obtaining a face with the largest face area from the multi-person movie image, and judging whether the face meets a preset condition.
The face with the largest face area is obtained from the multi-person group photo image, and whether it meets the preset conditions is judged. The preset conditions include, but are not limited to, the following: judging the shooting angle and focus of the face with the largest face area, i.e., whether the shooting angle shows a frontal face, whether the shooting focus is on this face, and whether this face is closest to the lens.
And 304, if so, taking the face with the largest face area as the first-level face of the multi-person group photo image.
And step 306, if not, continuing to acquire the face with the next-largest face area from the multi-person group photo image and judging whether it meets the preset condition, repeating until the face area reaches the minimum threshold or the first-level face is acquired.
If the judgment result is that the preset conditions are all met, the face with the largest face area is the face corresponding to the core person; it is then taken as the first-level face of the multi-person group photo image and marked as such.
If the judgment result is that the preset conditions are not met, the face with the largest face area is not the face of the core person, i.e., not the first-level face, so the face with the next-largest face area is obtained from the multi-person group photo image and judged against the preset conditions. If it meets them all, it is taken as the first-level face of the multi-person group photo image and marked as such.
If the face with the next-largest face area also fails the preset conditions, it is not the first-level face either. The next face in descending order of area is then obtained from the multi-person group photo image, and this is repeated until the face area reaches the minimum threshold or a first-level face is acquired. That is, a minimum face-area threshold is preset, and if no face meeting the preset conditions is found among the faces whose area exceeds this minimum threshold, first-level face acquisition for the image stops.
In the embodiment of the application, a first-level face is harder to determine in a multi-person group photo image than in a single-person image, so the faces are judged in descending order of area so that no candidate first-level face is missed. Preset conditions are also imposed, and only a face meeting them can become the first-level face, which makes the obtained first-level face more accurate.
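A minimal sketch of steps 302-306, reusing the `Face` record above; `meets_conditions` stands in for the preset angle/focus checks, which in practice would come from the detector's pose and focus metadata (an assumption, since the patent leaves the checks abstract):

```python
from typing import Callable, List, Optional


def pick_first_level_face(faces: List[Face], min_area: int,
                          meets_conditions: Callable[[Face], bool]) -> Optional[Face]:
    # Walk the candidates from largest to smallest face area; stop as soon
    # as one meets the preset conditions, or once the area drops below the
    # minimum threshold without any face qualifying.
    for face in sorted(faces, key=lambda f: f.area, reverse=True):
        if face.area < min_area:
            return None  # no face above the threshold qualified
        if meets_conditions(face):
            face.level = 1
            return face
    return None
```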
In an embodiment, the step 302 of obtaining the face with the largest face area from the multi-person group photo image and judging whether it meets the preset conditions includes: obtaining the face with the largest face area from the multi-person group photo image, and judging whether the angle and focus of the face meet the preset conditions.
In the embodiment of the application, the face with the largest face area is obtained from the multi-person group photo image, and whether it meets the preset conditions is judged. The preset conditions include, but are not limited to: judging the shooting angle and focus of this face, i.e., whether the shooting angle shows a frontal face, whether the shooting focus is on this face, and whether this face is closest to the lens. Only a face meeting the preset conditions can become the first-level face, which makes the obtained first-level face more accurate.
In an embodiment, as shown in fig. 4, the step 204 obtains a second-level face in the image, and performs face recognition on the second-level face to obtain a face class corresponding to the second-level face, including:
step 402, acquiring a second-level face in the image.
Because a second-level face is any face in the image other than the first-level face, after the first-level face of each image is obtained from the multi-person group photo images, the faces other than the first-level face in each image are obtained, i.e., the second-level faces in the image.
Step 404, respectively calculating the definition of the second-level face in the image.
And step 406, judging whether the definition of the second-level face meets a preset condition.
And step 408, if yes, performing face recognition on the second-level face meeting the definition preset condition to obtain the face type corresponding to the second-level face.
The definition of each second-level face obtained from the multi-person group photo image is calculated, and whether it meets the preset condition is judged. The preset condition may be reaching the definition threshold at which the identity corresponding to the face can be recognized; that is, if the identity of the person can be recognized from the second-level face, its definition meets the preset condition. Of course, the preset condition may also be another specific definition threshold, with definitions exceeding it deemed to meet the condition. For a single-person image, no first-level face needs to be extracted: its one face is treated as a second-level face and processed by the method for second-level faces, so the definition of the face in a single-person image is judged directly first.
And judging whether the definition of the second-level face meets a preset condition, if so, carrying out face recognition on the second-level face meeting the definition preset condition to obtain the face type corresponding to the second-level face.
In the embodiment of the application, the definition condition is used to decide whether a second-level face needs face recognition and classification. Because second-level faces are screened by definition, only those whose definition meets the preset condition are recognized and classified; the screening condition is strict, so fewer categories result from these faces, face categories for unimportant faces are greatly reduced, the effectiveness of the classification results is improved, and the actual requirements of users are met.
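The patent does not name a concrete definition (sharpness) metric; a common stand-in is the variance of the Laplacian over the face crop, sketched below with OpenCV (the metric and any threshold are assumptions):

```python
import cv2
import numpy as np


def face_definition(image: np.ndarray, face: Face) -> float:
    # Variance of the Laplacian over the grayscale face crop; higher
    # values mean a sharper face. The threshold a face must reach to be
    # recognized directly would be tuned on real album data.
    x, y, w, h = face.bbox
    gray = cv2.cvtColor(image[y:y + h, x:x + w], cv2.COLOR_BGR2GRAY)
    return float(cv2.Laplacian(gray, cv2.CV_64F).var())
```

A second-level face whose score reaches the chosen threshold would be passed straight to `recognize`; the others fall through to the similarity rules of the next embodiment.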
In one embodiment, as shown in fig. 5, the method further comprises:
and step 410, when the definition of the second-level face does not meet the preset condition, searching a first-level face similar to the second-level face in other images in the album.
If the definition of a second-level face in the image does not meet the preset condition, a first-level face similar to the second-level face is sought in the other images in the album. For example, if a first-level face extracted from a single-person image or a multi-person group photo image is similar to the second-level face, the second-level face can be classified into the face category corresponding to that first-level face. "Similar" can be defined as the facial-feature similarity reaching a set threshold, for example 80%, though other reasonable values may be set.
In step 412, if a first-level face similar to a second-level face is found, the second-level face is classified into a face class corresponding to the first-level face.
If a first-level face similar to the second-level face is found in other images in the album, for example one whose similarity to the second-level face reaches 90%, then the second-level face is classified into the face category corresponding to that first-level face.
In this embodiment, a second-level face for which a similar first-level face can be found in other images in the album is classified into the category corresponding to that first-level face. Using first-level faces as the classification standard prevents many extra face categories from being generated, which improves the accuracy of the image's face classification.
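A minimal sketch of this lookup, assuming the recognizer's feature vectors are compared by cosine similarity (the patent only requires some feature-similarity measure reaching a threshold such as 80%):

```python
from typing import List, Optional

import numpy as np


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def find_similar_first_level(face: Face, first_level_faces: List[Face],
                             threshold: float = 0.8) -> Optional[Face]:
    # Return the best-matching first-level face whose similarity reaches
    # the set threshold, or None if no first-level face is similar enough.
    best, best_sim = None, threshold
    for ref in first_level_faces:
        sim = cosine_similarity(face.embedding, ref.embedding)
        if sim >= best_sim:
            best, best_sim = ref, sim
    return best
```

If this returns a face, the blurry second-level face simply inherits its category.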
In one embodiment, as shown in fig. 6, after searching the other images in the album for a first-level face similar to the second-level face, the method includes:
step 602, if the first-level face similar to the second-level face is not found, finding the face similar to the second-level face in other images in the album.
If no first-level face similar to the second-level face is found in other images in the album, then faces similar to the second-level face are sought among all faces in those images, and such a face need not be a first-level face. "Similar" can again be defined as the facial-feature similarity reaching a set threshold, for example 80%, though other reasonable values may be set; that is, all faces in other album images whose similarity to the second-level face reaches 80% are sought.
And step 604, calculating the frequency of the appearance of the face similar to the second-level face.
The number of faces found among all faces in other album images whose similarity to the second-level face reaches 80% is counted.
And 606, if the times reach a set threshold value, performing face recognition on the second-level face to obtain a face type corresponding to the second-level face.
If the number of appearances of the similar faces reaches the set threshold, the clearest of them is selected and face recognition is performed on it to obtain the corresponding identity; this identity is the face category of this group of similar faces, i.e., the face category corresponding to the second-level face. The preset count may be set to 5, but may be set to other reasonable numbers in other embodiments, such as 3, 4, 6, or 10.
In step 608, if the number of times does not reach the set threshold, the second-level face is identified as the third-level face.
If the number of appearances of the similar faces does not reach the set threshold, the second-level face is provisionally identified as a third-level face. The third-level faces comprise all faces that have not been classified into a face category.
In this embodiment, when no first-level face similar to a second-level face can be found in the other images in the album but similar faces can, and those similar faces appear at least the set number of times, the clearest of them is used for face recognition to obtain the face category. Second-level faces that satisfy neither of these two classification conditions are identified as third-level faces, to be classified again later. Through this extra layer of screening, the qualifying second-level faces are classified, which guarantees the accuracy and completeness of the classification results.
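A sketch of steps 602-608, reusing `cosine_similarity` and the assumed `recognize` hook; the 80% similarity and the count threshold of 5 are the example values from the description:

```python
def resolve_blurry_second_level(face: Face, album_faces: List[Face],
                                count_threshold: int = 5) -> None:
    # Collect faces elsewhere in the album similar to this blurry
    # second-level face, then either recognize via the clearest of them
    # or demote the face to third level for later reconsideration.
    similar = [f for f in album_faces
               if cosine_similarity(face.embedding, f.embedding) >= 0.8]
    if len(similar) >= count_threshold:
        clearest = max(similar, key=lambda f: f.definition)
        face.category = recognize(clearest)
    else:
        face.level = 3
```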
In one embodiment, as shown in fig. 7, after identifying the second level face as the third level face, the method includes:
and step 702, calculating the times of the third-level face and the first-level face belonging to different face categories appearing on the same image in the album.
After the above classification, the third-level faces are all faces that have not been classified into a face category. For each third-level face, the number of times it appears in the same image as first-level faces belonging to different face categories is counted across all images in the album, i.e., the number of group photos of the third-level face's identity with first-level faces of various identities. For example, to count how many times the third-level face with identity A is photographed together with first-level faces of different identities, suppose the album contains 10 first-level faces, including Zhang San, Li Si, and Wang Wu; the count is the number of times A appears together with any of those 10 first-level faces. One image in the album is a group photo of A with Zhang San, Li Si, and Wang Wu, so this image contributes 3 to the count. If the album also contains a two-person photo of A with Zhang San and another photo of A with Wang Wu, the total count for the third-level face with identity A becomes 5.
Step 704, determine whether the number of times reaches a set threshold.
The threshold may be set to 5 appearances of the third-level face on the same album image as first-level faces belonging to different face categories, or to other reasonable counts in other embodiments, such as 3, 4, 6, 7, 8, 9, or 10.
And 706, if yes, performing face recognition on the third-level face to obtain a face type corresponding to the third-level face.
And step 708, if not, keeping the level as the third-level face.
Whether the number of times the third-level face appears on the same image as first-level faces belonging to different face categories reaches the set threshold (for example, 5) is judged; if it does, face recognition is performed on the third-level face to obtain its corresponding face category.
In this embodiment, if the number of times the third-level face with identity A is photographed together with first-level faces of different identities exceeds the set threshold, face recognition is performed on that third-level face to obtain its corresponding face category. If the count does not reach 5, the face is kept as a third-level face (a suspected passerby). The next time an image is added to the album, the number of times the third-level face appears on the same image as first-level faces of different face categories is recalculated. Classifying third-level faces in this way gives them, too, an opportunity to be assigned a face category, instead of limiting face recognition to the first-level and second-level faces of each image; this guarantees the completeness of the scheme and improves the accuracy and effectiveness of the final classification results.
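A sketch of steps 702-708 under the same assumptions; matching the third-level face across images again falls back on feature similarity, and 5 is the example threshold:

```python
def reconsider_third_level(face: Face, album_images: List[List[Face]],
                           count_threshold: int = 5) -> None:
    # For every album image containing this third-level face, add the
    # number of categorized first-level faces photographed alongside it;
    # promote the face once the running count reaches the threshold.
    count = 0
    for image_faces in album_images:
        if any(cosine_similarity(face.embedding, f.embedding) >= 0.8
               for f in image_faces):
            count += sum(1 for f in image_faces
                         if f.level == 1 and f.category is not None)
    if count >= count_threshold:
        face.category = recognize(face)
    # otherwise the face stays third level until new images are added
```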
In an embodiment, an image classification method is further provided, which is described by taking the application of the method to the electronic device in fig. 1A as an example, and specifically includes:
(1) Judge how many faces each image to be classified contains.
(2) If there is only one face in the image, jump to step (3); if there are several faces, i.e., a multi-person group photo image, obtain the face with the largest face area from it and judge whether the angle and focus of the face meet the preset conditions. If so, take the face with the largest face area as the first-level face of the multi-person group photo image. If not, continue to obtain the face with the next-largest face area from the multi-person group photo image and judge whether it meets the preset conditions, repeating until the face area reaches the minimum threshold or a first-level face is obtained.
(3) Acquiring a second-level face in the image; respectively calculating the definition of a second-level face in the image; judging whether the definition of the second-level face meets a preset condition or not; and if so, carrying out face recognition on the second-level face meeting the definition preset condition to obtain the face type corresponding to the second-level face.
(4) When the definition of the second-level face does not meet the preset condition, searching a first-level face similar to the second-level face in other images in the album; and if a first-level face similar to the second-level face is found, dividing the second-level face into the face classes corresponding to the first-level faces.
(5) If the first-level face similar to the second-level face is not found, searching faces similar to the second-level face in other images in the album; calculating the occurrence times of the human faces similar to the second-level human faces; if the times reach a set threshold value, performing face recognition on the second-level face to obtain a face type corresponding to the second-level face; and if the times do not reach the set threshold value, identifying the second-level face as a third-level face.
(6) Calculating the times of the appearance of the third-level face and the first-level faces belonging to different face categories in the same image in the album; judging whether the times reach a set threshold value or not; if so, carrying out face recognition on the third-level face to obtain a face type corresponding to the third-level face; and if not, keeping the face as the third-level face.
(7) Divide the image into the corresponding face categories according to the face category corresponding to the first-level face, the face category corresponding to the second-level face, and the face category corresponding to the third-level face; an end-to-end sketch of these steps follows.
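Pulling the pieces together, a minimal end-to-end sketch of steps (1)-(7) under all the assumptions above; `MIN_AREA`, the definition threshold, and the `meets_conditions` stub are illustrative values, and single-person images follow the description's rule of treating their one face as a second-level face:

```python
MIN_AREA = 96 * 96            # assumed minimum face-area threshold
DEFINITION_THRESHOLD = 100.0  # assumed Laplacian-variance cutoff


def meets_conditions(face: Face) -> bool:
    # Stand-in for the preset checks (frontal angle, shooting focus on
    # this face, nearest to the lens); a real system would read these
    # from the detector's pose and focus metadata.
    return True


def classify_album(images, albums) -> None:
    # `images` yields (image_id, pixel array, detected Face list) triples.
    for image_id, pixels, faces in images:
        if len(faces) > 1:  # multi-person group photo: pick the core face
            first = pick_first_level_face(faces, MIN_AREA, meets_conditions)
            if first is not None:
                first.category = recognize(first)
        for face in faces:
            if face.level == 1:
                continue
            face.level = 2
            face.definition = face_definition(pixels, face)
            if face.definition >= DEFINITION_THRESHOLD:
                face.category = recognize(face)
            # blurry faces fall back to find_similar_first_level and
            # resolve_blurry_second_level; unresolved ones later pass
            # through reconsider_third_level as the album grows.
        file_image(image_id, faces, albums)
```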
In one embodiment, as shown in fig. 8, there is provided an image classification apparatus 800, the apparatus comprising: a first level face recognition module 802, a second level face recognition module 804, and an image classification module 806. Wherein,
the first-level face recognition module 802 is configured to obtain a first-level face meeting a preset condition from an image according to the size of the face area, and perform face recognition on the first-level face to obtain a face category corresponding to the first-level face.
And the second-level face recognition module 804 is configured to obtain a second-level face in the image, perform face recognition on the second-level face, and obtain a face category corresponding to the second-level face, where the second-level face is a face in the image other than the first-level face.
The image classification module 806 is configured to classify the image into corresponding face classes according to the face class corresponding to the first-level face and the face class corresponding to the second-level face.
In one embodiment, as shown in FIG. 9, the first level face recognition module 802 includes:
the first-level face judgment module 802a of the multi-person group photo image is configured to obtain a face with a largest face area from the multi-person group photo image, and judge whether the face meets a preset condition.
And the first-level face determining module 802b of the multi-person group photo image is configured to, if the face with the largest face area meets the preset condition, take the face with the largest face area as the first-level face of the multi-person group photo image.
The loop module 802c is configured to, if the face with the largest face area does not meet the preset condition, continue to obtain the face with the next-largest face area from the multi-person group photo image, judge whether it meets the preset condition, and repeat until the face area reaches the minimum threshold or the first-level face is obtained.
In an embodiment, the first-level face determining module 802a of the multi-person group photo image is further configured to obtain a face with a largest face area from the multi-person group photo image, and determine whether an angle and a focus of the face meet preset conditions.
In one embodiment, as shown in FIG. 10, the second level face recognition module 804 includes:
the second-level face obtaining module 8041 is configured to obtain a second-level face in the image.
The sharpness calculation module 8042 is configured to calculate the sharpness of the second-level face in the image respectively.
The judging module 8043 is configured to judge whether the sharpness of the second-level face meets a preset condition.
The face recognition module 8044 is configured to, if the definition of the second-level face meets a preset condition, perform face recognition on the second-level face meeting the preset condition of the definition, and obtain a face type corresponding to the second-level face.
In one embodiment, as shown in fig. 11, the second-level face recognition module 804 further includes:
the similar first-level face searching module 8045 is configured to search a first-level face similar to the second-level face in other images in the album when it is determined that the sharpness of the second-level face does not meet the preset condition.
The second-level face classification module 8046 is configured to, if a first-level face similar to the second-level face is found, classify the second-level face into a face class corresponding to the first-level face.
In one embodiment, the second level face recognition module is further configured to: if the first-level face similar to the second-level face is not found, searching faces similar to the second-level face in other images in the album; calculating the occurrence times of the human faces similar to the second-level human faces; if the times reach a set threshold value, performing face recognition on the second-level face to obtain a face type corresponding to the second-level face; and if the times do not reach the set threshold value, identifying the second-level face as a third-level face.
In one embodiment, the second level face recognition module is further configured to: calculating the times of the appearance of the third-level face and the first-level faces belonging to different face categories in the same image in the album; judging whether the times reach a set threshold value or not; if so, carrying out face recognition on the third-level face to obtain a face type corresponding to the third-level face; and if the times do not reach the set threshold value, keeping the face as a third-level face.
The division of each module in the image classification apparatus is only used for illustration, and in other embodiments, the image classification apparatus may be divided into different modules as needed to complete all or part of the functions of the image classification apparatus.
A computer program product comprising instructions which, when run on a computer, cause the computer to perform the above-described image classification method.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored, which, when being executed by a processor, realizes the steps of the image classification method provided by the above embodiments.
The embodiment of the present application further provides an electronic device, which includes a memory, a processor, and a computer program stored in the memory and capable of running on the processor, and when the processor executes the computer program, the steps of the image classification method provided in each of the above embodiments are implemented.
The embodiment of the application also provides the electronic equipment. As shown in fig. 12, for convenience of explanation, only the portions related to the embodiments of the present application are shown, and details of the specific techniques are not disclosed, please refer to the method portion of the embodiments of the present application. The electronic device may be any terminal device including a mobile phone, a tablet computer, a PDA (Personal Digital Assistant), a POS (Point of Sales), a vehicle-mounted computer, a wearable device, and the like, taking the electronic device as the mobile phone as an example:
fig. 12 is a block diagram of a partial structure of a mobile phone related to an electronic device provided in an embodiment of the present application. Referring to fig. 12, the cellular phone includes: radio Frequency (RF) circuit 910, memory 920, input unit 930, display unit 940, sensor 950, audio circuit 960, wireless fidelity (WiFi) module 970, processor 980, and power supply 990. Those skilled in the art will appreciate that the handset configuration shown in fig. 12 is not intended to be limiting and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components.
The RF circuit 910 may be used for receiving and transmitting signals during information transmission or communication; it may receive downlink information from a base station and pass it to the processor 980 for processing, and may also transmit uplink data to the base station. Typically, the RF circuitry includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a Low Noise Amplifier (LNA), a duplexer, and the like. In addition, the RF circuit 910 may also communicate with networks and other devices via wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to Global System for Mobile communication (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), e-mail, Short Messaging Service (SMS), and the like.
The memory 920 may be used to store software programs and modules, and the processor 980 executes various functional applications and data processing of the mobile phone by running the software programs and modules stored in the memory 920. The memory 920 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required for at least one function (such as a sound playing function or an image playing function), and the like; the data storage area may store data created according to the use of the mobile phone (such as audio data or an address book), and the like. Further, the memory 920 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device.
The input unit 930 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the cellular phone 900. Specifically, the input unit 930 may include a touch panel 931 and other input devices 932. The touch panel 931, which may also be referred to as a touch screen, may collect a touch operation performed by a user on or near the touch panel 931 (e.g., a user operating the touch panel 931 or near the touch panel 931 by using a finger, a stylus, or any other suitable object or accessory), and drive the corresponding connection device according to a preset program. In one embodiment, the touch panel 931 may include two parts of a touch detection device and a touch controller. The touch detection device detects the touch direction of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device, converts the touch information into touch point coordinates, sends the touch point coordinates to the processor 980, and can receive and execute commands sent by the processor 980. In addition, the touch panel 931 may be implemented by various types, such as a resistive type, a capacitive type, an infrared ray, and a surface acoustic wave. The input unit 930 may include other input devices 932 in addition to the touch panel 931. In particular, other input devices 932 may include, but are not limited to, one or more of a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), and the like.
The display unit 940 may be used to display information input by the user or information provided to the user and various menus of the mobile phone. The display unit 940 may include a display panel 941. In one embodiment, the Display panel 941 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like. In one embodiment, the touch panel 931 may overlay the display panel 941, and when the touch panel 931 detects a touch operation thereon or nearby, the touch operation is transmitted to the processor 980 to determine the type of touch event, and then the processor 980 provides a corresponding visual output on the display panel 941 according to the type of touch event. Although in fig. 12, the touch panel 931 and the display panel 941 are two independent components to implement the input and output functions of the mobile phone, in some embodiments, the touch panel 931 and the display panel 941 may be integrated to implement the input and output functions of the mobile phone.
Cell phone 900 may also include at least one sensor 950, such as a light sensor, motion sensor, and other sensors. Specifically, the light sensor may include an ambient light sensor that adjusts the brightness of the display panel 941 according to the brightness of ambient light, and a proximity sensor that turns off the display panel 941 and/or backlight when the mobile phone is moved to the ear. The motion sensor can comprise an acceleration sensor, the acceleration sensor can detect the magnitude of acceleration in each direction, the magnitude and the direction of gravity can be detected when the mobile phone is static, and the motion sensor can be used for identifying the application of the gesture of the mobile phone (such as horizontal and vertical screen switching), the vibration identification related functions (such as pedometer and knocking) and the like; the mobile phone may be provided with other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor.
Audio circuitry 960, speaker 961 and microphone 962 may provide an audio interface between a user and a cell phone. The audio circuit 960 may transmit the electrical signal converted from the received audio data to the speaker 961, and convert the electrical signal into a sound signal for output by the speaker 961; on the other hand, the microphone 962 converts the collected sound signal into an electrical signal, converts the electrical signal into audio data after being received by the audio circuit 960, and then outputs the audio data to the processor 980 for processing, and then the audio data can be transmitted to another mobile phone through the RF circuit 910, or the audio data can be output to the memory 920 for subsequent processing.
WiFi belongs to short-distance wireless transmission technology, and the mobile phone can help a user to receive and send e-mails, browse webpages, access streaming media and the like through the WiFi module 970, and provides wireless broadband Internet access for the user. Although fig. 12 shows WiFi module 970, it is to be understood that it does not belong to the essential components of cell phone 900 and may be omitted as desired.
The processor 980 is the control center of the mobile phone; it connects the various parts of the entire phone using various interfaces and lines, and performs the phone's various functions and processes data by running or executing the software programs and/or modules stored in the memory 920 and calling the data stored in the memory 920, thereby monitoring the phone as a whole. In one embodiment, the processor 980 may include one or more processing units. In one embodiment, the processor 980 may integrate an application processor and a modem processor, wherein the application processor primarily handles the operating system, user interfaces, applications, and the like, and the modem processor primarily handles wireless communication. It will be appreciated that the modem processor need not be integrated into the processor 980.
The handset 900 also includes a power supply 990 (e.g., a battery) for supplying power to various components, which may preferably be logically connected to the processor 980 via a power management system, such that the power management system may be used to manage charging, discharging, and power consumption.
In one embodiment, the cell phone 900 may also include a camera, a bluetooth module, and the like.
Any reference to memory, storage, database, or other medium used herein may include non-volatile and/or volatile memory. Suitable non-volatile memory can include read-only memory (ROM), Programmable ROM (PROM), Electrically Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), or flash memory. Volatile memory can include Random Access Memory (RAM), which acts as external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms, such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus Direct RAM (RDRAM), Direct Rambus Dynamic RAM (DRDRAM), and Rambus Dynamic RAM (RDRAM).
The above embodiments express only several implementations of the present application, and their description is specific and detailed, but they should not for that reason be construed as limiting the scope of the present application. It should be noted that a person skilled in the art can make several variations and improvements without departing from the concept of the present application, and all of these fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (10)

1. An image classification method, comprising:
acquiring the face with the largest face area from a multi-person group photo image, and judging whether the face meets a preset condition; if so, taking the face with the largest face area as the first-level face of the multi-person group photo image; if not, continuing to acquire the face with the next-largest face area from the multi-person group photo image and judging whether it meets the preset condition, repeating this process until the face area reaches a minimum threshold or a first-level face is acquired; and acquiring the face in a single-person image as the first-level face;
carrying out face recognition on the first-level face to obtain a face category corresponding to the first-level face;
for the multi-person group photo image, acquiring a second-level face in the image that meets a preset definition (sharpness) condition, and performing face recognition on the second-level face to obtain the face category corresponding to the second-level face, wherein the second-level face is a face in the image other than the first-level face;
and dividing the image into corresponding face categories according to the face category corresponding to the first-level face and the face category corresponding to the second-level face.
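The selection loop recited in claim 1 can be pictured with a minimal Python sketch. It is only an illustration of the claimed iteration, not an implementation from the specification: the helper meets_preset_condition and the MIN_FACE_AREA threshold are hypothetical placeholders, since the claim fixes no concrete detector, condition, or threshold value.

    # Illustrative sketch of the claim-1 iteration; all thresholds hypothetical.
    MIN_FACE_AREA = 40 * 40  # assumed minimum face area in pixels

    def select_first_level_face(face_boxes, meets_preset_condition):
        # face_boxes: list of (x, y, w, h) boxes detected in a multi-person
        # group photo image. Visit faces in descending area order.
        for box in sorted(face_boxes, key=lambda b: b[2] * b[3], reverse=True):
            if box[2] * box[3] <= MIN_FACE_AREA:
                return None  # face area has reached the minimum threshold: stop
            if meets_preset_condition(box):
                return box   # first-level face acquired
        return None

    # For a single-person image, the only detected face is taken directly
    # as the first-level face, with no iteration needed.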
2. The method according to claim 1, wherein the acquiring of the face with the largest face area from the multi-person group photo image and the judging of whether the face meets a preset condition comprise:
acquiring the face with the largest face area from the multi-person group photo image, and judging whether the angle and the focus of the face meet preset conditions.
3. The method according to claim 1, wherein, for the multi-person group photo image, the acquiring of a second-level face in the image that meets the preset definition condition and the performing of face recognition on the second-level face to obtain the face category corresponding to the second-level face comprise:
acquiring a second-level face in the image;
calculating the definition of each second-level face in the image;
judging whether the definition of the second-level face meets the preset condition; and
if so, performing face recognition on the second-level face meeting the preset definition condition to obtain the face category corresponding to the second-level face.
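Claim 3 leaves the definition (sharpness) measure open. One common choice, shown here purely as an assumption, is the variance of the Laplacian computed over each second-level face crop with OpenCV; the SHARPNESS_THRESHOLD standing in for the preset condition is likewise hypothetical.

    import cv2

    SHARPNESS_THRESHOLD = 100.0  # hypothetical preset definition condition

    def face_definition(image, box):
        # Score one face crop: variance of the Laplacian (higher = sharper).
        x, y, w, h = box
        gray = cv2.cvtColor(image[y:y + h, x:x + w], cv2.COLOR_BGR2GRAY)
        return cv2.Laplacian(gray, cv2.CV_64F).var()

    def recognizable_second_level_faces(image, second_level_boxes):
        # Keep only second-level faces whose definition meets the condition.
        return [box for box in second_level_boxes
                if face_definition(image, box) >= SHARPNESS_THRESHOLD]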
4. The method of claim 3, further comprising:
when the definition of the second-level face does not meet the preset condition, searching the other images in the album for a first-level face similar to the second-level face; and
if a first-level face similar to the second-level face is found, dividing the second-level face into the face category corresponding to that first-level face.
5. The method of claim 4, wherein after the searching of the other images in the album for a first-level face similar to the second-level face, the method comprises:
if no first-level face similar to the second-level face is found, searching the other images in the album for faces similar to the second-level face;
counting the number of times faces similar to the second-level face occur;
if the number reaches a set threshold, performing face recognition on the second-level face to obtain the face category corresponding to the second-level face; and
if the number does not reach the set threshold, identifying the second-level face as a third-level face.
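The fallback of claims 4 and 5 can be sketched as follows. Everything here is assumed for illustration only: the injected similarity function and its cutoff, the COUNT_THRESHOLD standing in for the claimed "set threshold", and the representation of the album as (feature, is_first_level, category) tuples.

    SIMILARITY_CUTOFF = 0.8  # hypothetical similarity threshold
    COUNT_THRESHOLD = 3      # hypothetical "set threshold" of occurrences

    def classify_blurry_face(face_feature, album_faces, similarity, recognize):
        # album_faces: (feature, is_first_level, category) tuples taken from
        # the other images in the album; recognize(feature) -> face category.
        # Claim 4: inherit the category of a similar first-level face, if any.
        for feature, is_first_level, category in album_faces:
            if is_first_level and similarity(face_feature, feature) >= SIMILARITY_CUTOFF:
                return category
        # Claim 5: otherwise count occurrences of any similar face in the album.
        occurrences = sum(1 for feature, _, _ in album_faces
                          if similarity(face_feature, feature) >= SIMILARITY_CUTOFF)
        if occurrences >= COUNT_THRESHOLD:
            return recognize(face_feature)  # frequent face: recognize it normally
        return "third-level"                # rare face: mark as third-level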
6. The method of claim 5, wherein after the identifying of the second-level face as a third-level face, the method comprises:
counting the number of times the third-level face appears in the same album image as first-level faces belonging to different face categories;
judging whether the number reaches a set threshold;
if so, performing face recognition on the third-level face to obtain the face category corresponding to the third-level face; and
if not, keeping the face identified as a third-level face.
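Claim 6 can be read as a co-occurrence count that later promotes a third-level face. The sketch below assumes each album image is summarized by its face features and the categories of its first-level faces; the promotion threshold and the similarity cutoff are again hypothetical, and one possible reading of "different face categories" is implemented by counting distinct categories.

    PROMOTION_THRESHOLD = 3  # hypothetical "set threshold" of co-occurrences

    def maybe_promote_third_level(third_feature, album_images, similarity, recognize):
        # album_images: (face_features, first_level_categories) per album image.
        co_occurrences = 0
        for face_features, first_level_categories in album_images:
            appears_here = any(similarity(third_feature, f) >= 0.8
                               for f in face_features)
            # Count co-occurrences with first-level faces of distinct categories
            # appearing in the same image as the third-level face.
            if appears_here and first_level_categories:
                co_occurrences += len(set(first_level_categories))
        if co_occurrences >= PROMOTION_THRESHOLD:
            return recognize(third_feature)  # promote: assign its own category
        return None                          # otherwise it stays third-level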
7. The method according to claim 1, wherein the acquiring of the face with the largest face area from the multi-person group photo image and the judging of whether the face meets a preset condition comprise:
acquiring the face with the largest face area from the multi-person group photo image, and judging whether the face with the largest face area is the one closest to the lens.
8. An image classification apparatus, characterized in that the apparatus comprises:
a first-level face recognition module, configured to acquire the face with the largest face area from a multi-person group photo image and judge whether the face meets a preset condition; if so, take the face with the largest face area as the first-level face of the multi-person group photo image; if not, continue to acquire the face with the next-largest face area from the multi-person group photo image and judge whether it meets the preset condition, repeating this process until the face area reaches a minimum threshold or a first-level face is acquired; acquire the face in a single-person image as the first-level face; in other words, acquire a first-level face meeting the preset condition from an image in descending order of face area, and perform face recognition on the first-level face to obtain the face category corresponding to the first-level face;
a second-level face recognition module, configured to acquire, for the multi-person group photo image, a second-level face in the image that meets the preset definition condition, and perform face recognition on the second-level face to obtain the face category corresponding to the second-level face, wherein the second-level face is a face in the image other than the first-level face; and
an image classification module, configured to divide the image into the corresponding face categories according to the face category corresponding to the first-level face and the face category corresponding to the second-level face.
9. An electronic device comprising a memory and a processor, the memory having stored thereon a computer program, wherein the computer program, when executed by the processor, causes the processor to perform the steps of the image classification method according to any one of claims 1 to 7.
10. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the image classification method according to any one of claims 1 to 7.
CN201711270284.6A 2017-12-05 2017-12-05 Image classification method and device, electronic equipment and computer-readable storage medium Active CN108021669B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711270284.6A CN108021669B (en) 2017-12-05 2017-12-05 Image classification method and device, electronic equipment and computer-readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711270284.6A CN108021669B (en) 2017-12-05 2017-12-05 Image classification method and device, electronic equipment and computer-readable storage medium

Publications (2)

Publication Number Publication Date
CN108021669A CN108021669A (en) 2018-05-11
CN108021669B true CN108021669B (en) 2021-03-12

Family

ID=62078464

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711270284.6A Active CN108021669B (en) 2017-12-05 2017-12-05 Image classification method and device, electronic equipment and computer-readable storage medium

Country Status (1)

Country Link
CN (1) CN108021669B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108932724B (en) * 2018-05-31 2020-06-19 杭州晓图科技有限公司 Automatic system auditing method based on multi-person collaborative image annotation
CN110414433A (en) * 2019-07-29 2019-11-05 腾讯科技(深圳)有限公司 Image processing method, device, storage medium and computer equipment
CN110968719B (en) * 2019-11-25 2023-04-18 浙江大华技术股份有限公司 Face clustering method and device
CN113129917A (en) * 2020-01-15 2021-07-16 荣耀终端有限公司 Speech processing method based on scene recognition, and apparatus, medium, and system thereof

Family Cites Families (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8155397B2 (en) * 2007-09-26 2012-04-10 DigitalOptics Corporation Europe Limited Face tracking in a camera processor
JP4264660B2 (en) * 2006-06-09 2009-05-20 ソニー株式会社 IMAGING DEVICE, IMAGING DEVICE CONTROL METHOD, AND COMPUTER PROGRAM
US8031914B2 (en) * 2006-10-11 2011-10-04 Hewlett-Packard Development Company, L.P. Face-based image clustering
JP4254873B2 (en) * 2007-02-16 2009-04-15 ソニー株式会社 Image processing apparatus, image processing method, imaging apparatus, and computer program
CN101339607B (en) * 2008-08-15 2012-08-01 北京中星微电子有限公司 Human face recognition method and system, human face recognition model training method and system
AU2008264173A1 (en) * 2008-12-23 2010-07-08 Canon Kabushiki Kaisha Splitting a single video stream into multiple viewports based on face detection
KR101626004B1 (en) * 2009-12-07 2016-05-31 삼성전자주식회사 Method and apparatus for selective support of the RAW format in digital imaging processor
CN102033958B (en) * 2010-12-28 2013-04-17 Tcl商用信息科技(惠州)股份有限公司 Photo sort management system and method
US8917913B2 (en) * 2011-09-22 2014-12-23 International Business Machines Corporation Searching with face recognition and social networking profiles
CN103064864A (en) * 2011-10-19 2013-04-24 致伸科技股份有限公司 Photo sharing system with face recognition function
US20160078285A1 (en) * 2012-05-23 2016-03-17 Roshni Malani System and Method for Displaying an Object in a Tagged Image
JP6018029B2 (en) * 2013-09-26 2016-11-02 富士フイルム株式会社 Apparatus for determining main face image of captured image, control method thereof and control program thereof
US10121060B2 (en) * 2014-02-13 2018-11-06 Oath Inc. Automatic group formation and group detection through media recognition
JP6009481B2 (en) * 2014-03-11 2016-10-19 富士フイルム株式会社 Image processing apparatus, important person determination method, image layout method, program, and recording medium
CN105824875B (en) * 2016-02-26 2019-08-20 维沃移动通信有限公司 A kind of photo be shared method and mobile terminal
CN106599837A (en) * 2016-12-13 2017-04-26 北京智慧眼科技股份有限公司 Face identification method and device based on multi-image input

Also Published As

Publication number Publication date
CN108021669A (en) 2018-05-11

Similar Documents

Publication Publication Date Title
CN107729815B (en) Image processing method, image processing device, mobile terminal and computer readable storage medium
CN106844484B (en) Information searching method and device and mobile terminal
CN107679559B (en) Image processing method, image processing device, computer-readable storage medium and mobile terminal
CN108334539B (en) Object recommendation method, mobile terminal and computer-readable storage medium
CN107729889B (en) Image processing method and device, electronic equipment and computer readable storage medium
CN108021669B (en) Image classification method and device, electronic equipment and computer-readable storage medium
CN107124555B (en) Method and device for controlling focusing, computer equipment and computer readable storage medium
CN107679560B (en) Data transmission method and device, mobile terminal and computer readable storage medium
CN107995422B (en) Image shooting method and device, computer equipment and computer readable storage medium
CN104239535A (en) Method and system for matching pictures with characters, server and terminal
CN109325518B (en) Image classification method and device, electronic equipment and computer-readable storage medium
CN107944414B (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
CN107992822B (en) Image processing method and apparatus, computer device, computer-readable storage medium
WO2019105457A1 (en) Image processing method, computer device and computer readable storage medium
CN109726726B (en) Event detection method and device in video
CN109508398B (en) Photo classification method and terminal equipment thereof
CN110347858B (en) Picture generation method and related device
CN109002787A (en) Image processing method and device, storage medium, electronic equipment
CN107622117A (en) Image processing method and device, computer equipment, computer-readable recording medium
US10636122B2 (en) Method, device and nonvolatile computer-readable medium for image composition
CN107666515A (en) Image processing method and device, computer equipment, computer-readable recording medium
CN108737618B (en) Information processing method and device, electronic equipment and computer readable storage medium
CN108491733A (en) Method and apparatus, storage medium, electronic equipment are recommended in privacy application
CN108256466B (en) Data processing method and device, electronic equipment and computer readable storage medium
CN108921086A (en) Image processing method and device, storage medium, electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: No. 18 Wusha Beach Road, Chang'an Town, Dongguan, Guangdong 523860

Applicant after: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS Corp.,Ltd.

Address before: No. 18 Wusha Beach Road, Chang'an Town, Dongguan, Guangdong 523860

Applicant before: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS Corp.,Ltd.

GR01 Patent grant