CN107679559B - Image processing method, image processing device, computer-readable storage medium and mobile terminal - Google Patents


Info

Publication number
CN107679559B
CN107679559B (application CN201710850225.XA)
Authority
CN
China
Prior art keywords
image
clustering
face
information
mobile terminal
Prior art date
Legal status
Expired - Fee Related
Application number
CN201710850225.XA
Other languages
Chinese (zh)
Other versions
CN107679559A (en)
Inventor
柯秀华
曹威
王俊
Current Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201710850225.XA priority Critical patent/CN107679559B/en
Publication of CN107679559A publication Critical patent/CN107679559A/en
Priority to PCT/CN2018/101502 priority patent/WO2019052316A1/en
Application granted granted Critical
Publication of CN107679559B publication Critical patent/CN107679559B/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/23 Clustering techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 Classification, e.g. identification

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The embodiment of the application relates to an image processing method, an image processing device, a computer-readable storage medium and a mobile terminal. The method comprises the following steps: when an image clustering request is received, acquiring a target face image corresponding to the image clustering request; acquiring first face feature information in the target face image; and clustering the target face image according to the first face feature information. Receiving the image clustering request comprises: receiving a first image clustering request when the feature recognition model is updated; receiving a second image clustering request when first clustering information issued by a server is received, wherein the first clustering information is clustering information, issued by the server to the mobile terminal, of a second face image set uploaded to the server; and receiving a third clustering request when a newly added image is detected.

Description

Image processing method, image processing device, computer-readable storage medium and mobile terminal
Technical Field
The present invention relates to the field of computer technologies, and in particular, to an image processing method and apparatus, a computer-readable storage medium, and a mobile terminal.
Background
With the rapid development of intelligent mobile terminals, their functions have become increasingly complete and their performance increasingly refined. After a user takes a picture with an intelligent mobile terminal, the terminal can upload the picture to a server, so that the server can classify the picture according to the picture information. For example, images may be classified according to image time information, image location information, or face information contained in the images, and the associated images are displayed in groups, so that the user can view the images by category.
Disclosure of Invention
The embodiment of the application provides an image processing method, an image processing device, a computer readable storage medium and a mobile terminal, which can cluster images according to face feature information.
An image processing method comprising:
when an image clustering request is received, acquiring a target face image corresponding to the image clustering request;
acquiring first face characteristic information in the target face image;
clustering the target face images according to the first face feature information;
the receiving an image clustering request comprises:
when the feature recognition model is updated, receiving a first image clustering request;
when first clustering information sent by a server is received, receiving a second image clustering request, wherein the first clustering information is clustering information, issued by the server to the mobile terminal, of a second face image set uploaded to the server;
and when a new image is detected, receiving a third clustering request.
An image processing apparatus comprising:
the first acquisition module is used for acquiring a target face image corresponding to an image clustering request when the image clustering request is received;
the second acquisition module is used for acquiring first face characteristic information in the target face image;
the clustering module is used for clustering the target face image according to the first face characteristic information;
the receiving an image clustering request comprises:
when the feature recognition model is updated, receiving a first image clustering request;
when first clustering information sent by a server is received, receiving a second image clustering request, wherein the first clustering information is clustering information, issued by the server to the mobile terminal, of a second face image set uploaded to the server;
and when a new image is detected, receiving a third clustering request.
A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the method as set forth above.
A mobile terminal comprising a memory and a processor, the memory having stored therein computer readable instructions which, when executed by the processor, cause the processor to perform the steps of the method as described above.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly described below. It is obvious that the drawings in the following description are only some embodiments of the present application, and that those skilled in the art can obtain other drawings from these drawings without creative effort.
FIG. 1 is a diagram illustrating an exemplary environment in which an image processing method may be implemented;
fig. 2 is a timing diagram illustrating interaction between the mobile terminal 110 and the first server 120 and the second server 130 in fig. 1 according to an embodiment;
FIG. 3 is a flow diagram of a method of image processing in one embodiment;
FIG. 4 is a block diagram showing the configuration of an image processing apparatus according to an embodiment;
FIG. 5 is a block diagram showing the construction of an image processing apparatus according to another embodiment;
fig. 6 is a block diagram of a partial structure of a mobile phone related to a mobile terminal according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
It will be understood that, as used herein, the terms "first," "second," and the like may be used herein to describe various elements, but these elements are not limited by these terms. These terms are only used to distinguish one element from another. For example, a first acquisition module may be referred to as a second acquisition module, and similarly, a second acquisition module may be referred to as a first acquisition module, without departing from the scope of the present disclosure. The first acquisition module and the second acquisition module are both acquisition modules, but they are not the same acquisition module.
Fig. 1 is a schematic diagram of an application environment of an image processing method in an embodiment. As shown in fig. 1, the application environment includes a mobile terminal 110, a first server 120, and a second server 130. Images may be stored in the memory of the mobile terminal 110 and on an SD (Secure Digital) card. The mobile terminal 110 may perform face recognition on the stored images and extract the face images they contain. The mobile terminal 110 may extract face feature information from the face images according to a feature recognition model and perform similarity matching on the extracted face feature information, thereby clustering the face images. The mobile terminal 110 may also upload the face images to the first server 120; the first server 120 extracts face feature information from the face images according to a feature recognition model and uploads the extracted face feature information to the second server 130, and the second server 130 may cluster the face feature information uploaded by the first server 120. The second server 130 may return the clustering result of the face feature information to the mobile terminal 110, so that the mobile terminal 110 can compare the clustering result of the second server 130 with its own clustering result for the face images. The feature recognition models of the mobile terminal 110 and the second server 130 may be the same or different.
In one embodiment, the first server 120 and the second server 130 may be the same server, i.e., the mobile terminal 110 uploads the facial image to the server. The server performs face feature recognition on the received face image to obtain face feature information, and the server clusters the face feature information and sends the clustering result to the mobile terminal 110.
Fig. 2 is a timing diagram illustrating the interaction between the mobile terminal 110 and the first server 120 and the second server 130 in fig. 1 according to an embodiment. As shown in fig. 2, the process of the mobile terminal 110 interacting with the first server 120 and the second server 130 mainly includes the following steps:
(1) the mobile terminal 110 detects the feature recognition model update, and clusters the face image set in the mobile terminal 110.
When the mobile terminal 110 detects that the feature recognition model in the mobile terminal 110 is updated, the face image stored in the mobile terminal 110 can be acquired, and the updated feature recognition model is used for extracting the face feature information in the face image and clustering the face feature information.
(2) The mobile terminal 110 detects a new image of the mobile terminal 110, and clusters the new image.
When the mobile terminal 110 detects that a newly added image contains a face, it extracts the face feature information in the new image, matches the face feature information of the new image against the face feature information of the clustered face images, and clusters the new image accordingly.
(3) The mobile terminal 110 uploads the face image to the first server 120.
The mobile terminal 110 may upload the face images contained in the images stored in the mobile terminal 110 to the first server 120. The mobile terminal 110 may upload the face images contained in the images stored in memory, the face images contained in the images stored on the SD card, or both, to the first server 120.
(4) The first server 120 extracts face feature information in the face image.
After receiving the face image uploaded by the mobile terminal 110, the first server 120 may extract face feature information from the face image according to the feature recognition model. The face feature recognition model in the mobile terminal 110 may be the same as or different from the face feature recognition model in the server.
(5) The first server 120 transmits the facial feature information to the second server 130.
The first server 120 sends the acquired facial feature information to the second server 130, so that the second server 130 can perform clustering according to the facial feature information.
(6) The mobile terminal 110 sends a clustering request to the second server 130.
After the mobile terminal 110 uploads the facial image, it may send a clustering request to the second server 130.
(7) The second server 130 clusters the face feature information.
If the second server 130 receives the face feature information sent by the first server 120 and also receives the clustering request sent by the mobile terminal 110, the second server 130 may cluster the face feature information. Clustering the face feature information by the second server 130 includes performing similarity matching on the face feature information and grouping together face feature information whose similarity exceeds a specified value. The algorithm used by the mobile terminal 110 to cluster face feature information may be the same as or different from the algorithm used by the second server 130.
(8) The second server 130 returns the clustering result to the mobile terminal 110.
The second server 130 may send the clustering result to the mobile terminal 110 after the face feature information is clustered.
(9) The mobile terminal 110 updates the clustering information of the images in the mobile terminal 110 according to the clustering result sent by the second server 130, and clusters the facial images that are not uploaded.
After receiving the clustering result sent by the second server 130, the mobile terminal 110 may cluster the facial images uploaded to the first server 120 according to the clustering result. The mobile terminal 110 may also obtain facial feature information of the facial images that are not uploaded, and cluster the facial images that are not uploaded.
In the embodiment of the application, both the mobile terminal and the server can perform feature recognition on the face image to acquire face feature information, and face clustering is performed on the face image according to the face feature information. FIG. 3 is a flow diagram of a method of image processing in one embodiment. As shown in fig. 3, an image processing method applied to a mobile terminal includes:
step 302, when receiving an image clustering request, obtaining a target face image corresponding to the image clustering request.
After receiving an image clustering request, the mobile terminal can acquire a target face image corresponding to the image clustering request. The mobile terminal receives the image clustering request in the following cases:
(1) when the feature recognition model is updated, a first image clustering request is received.
(2) When first clustering information sent by the server is received, receiving a second image clustering request, wherein the first clustering information is clustering information, issued by the server to the mobile terminal, of a second face image set uploaded to the server.
(3) And when a new image is detected, receiving a third clustering request.
When the mobile terminal detects that the feature recognition model is updated, the first image clustering request is triggered; when the mobile terminal receives the image clustering information issued by the server, the second image clustering request is triggered; and when the mobile terminal detects a newly added image, the third image clustering request is triggered. The feature recognition model is used to extract face feature information from face images, and the mobile terminal can cluster face images according to the extracted face feature information. When the feature recognition model is updated, the server may send the new version of the feature recognition model to the mobile terminal. The mobile terminal compares the local feature recognition model with the feature recognition model issued by the server; if the version number of the local model is lower than that of the model issued by the server, the mobile terminal installs the new version and updates its feature recognition model accordingly. After receiving the face images uploaded by the mobile terminal, the server performs feature recognition on them according to its feature recognition model to obtain face feature information and then clusters the face feature information; the image clustering result sent to the mobile terminal is the first clustering information issued by the server. The feature recognition model of the mobile terminal and that of the server may be the same or different. The face images uploaded by the mobile terminal are those stored in the memory of the mobile terminal. The image clustering information includes an image ID (identifier) and an image grouping mark; after receiving the image clustering information, the mobile terminal looks up each image according to its image ID and divides it into the corresponding group according to the image grouping mark.
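For illustration only, the following Kotlin sketch shows one possible way to express the three trigger cases described above; the enum, class and field names are illustrative assumptions and are not part of the claimed method.

```kotlin
// Illustrative sketch of the three clustering-request triggers described above.
// All names (ClusterRequest, FeatureModelInfo, ClusterTrigger) are assumptions.
enum class ClusterRequest { FIRST, SECOND, THIRD }

data class FeatureModelInfo(val version: Int)

class ClusterTrigger(private var localModel: FeatureModelInfo) {

    // The server pushes a new feature recognition model: install it and trigger request 1.
    fun onModelPushed(serverModel: FeatureModelInfo): ClusterRequest? =
        if (serverModel.version > localModel.version) {
            localModel = serverModel      // update the local feature recognition model
            ClusterRequest.FIRST          // re-cluster the whole first face image set
        } else {
            null                          // local model is already up to date
        }

    // The server issues clustering information for the uploaded face images: request 2.
    fun onServerClusterInfoReceived(): ClusterRequest = ClusterRequest.SECOND

    // A newly added image is detected in the gallery: request 3.
    fun onNewImageDetected(): ClusterRequest = ClusterRequest.THIRD
}
```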
After receiving the image clustering request, the mobile terminal can obtain a target face image corresponding to the image clustering request. The target face image is the face image to be clustered.
When the feature recognition model is updated, that is, when the algorithm for extracting face feature information from face images is updated, the mobile terminal acquires all images currently stored in the mobile terminal, including the images stored in memory and the images stored on the SD card. Face scanning is performed on these images to identify the face images they contain. Face scanning refers to recognizing faces in an image according to a face recognition algorithm to obtain face images. When the mobile terminal receives the first clustering information issued by the server, that is, the grouping information of the face images issued by the server, the mobile terminal acquires the images stored on the SD card, performs face scanning on them, and identifies the face images they contain. Before the server issues the image clustering information, the mobile terminal can upload the face images contained in the memory images to the server; the server performs feature recognition on the received face images according to its feature recognition model to obtain the face feature information in the face images, clusters the face images according to the face feature information, and sends the image clustering information to the mobile terminal. When a new image is added to the mobile terminal, the mobile terminal acquires the new image, performs face scanning on it, and identifies the face images it contains.
When the feature recognition model is updated, the target face images are the first face image set in the mobile terminal, namely the face images stored in the memory of the mobile terminal and on the SD card. When the first clustering information issued by the server is received, the target face images are the face images in the first face image set other than those in the second face image set. When a newly added image is detected, face recognition is performed on the newly added image, and if the newly added image contains a face, the newly added image is taken as the target face image.
And step 304, acquiring first face feature information in the target face image.
After the target face image is obtained, the mobile terminal can identify the features of the target face image according to the feature identification model and extract first face feature information in the target face image. The face feature information refers to information for identifying a unique face, and includes contour features, facial features, and the like of the face.
And step 306, clustering the target face images according to the first face feature information.
After the first face feature information of the target face image is obtained, the target face image can be clustered according to the first face feature information. When the feature recognition model is updated, the mobile terminal directly carries out similarity matching on the first face feature information of the first face image set, and the first face feature information is clustered according to a matching result. After receiving the first clustering information sent by the server, the mobile terminal can perform similarity matching on the first facial feature information of the target facial image and the facial feature information of the second facial image set, and clustering the first facial feature information according to a matching result. When a newly added image is detected, the mobile terminal can perform similarity matching on the first face feature information of the target face image and the face feature information of the clustered face image, and clustering the first face feature information according to a matching result.
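The following Kotlin sketch illustrates the kind of threshold-based similarity clustering described above; the cosine-similarity measure, the 0.8 threshold and the group representation are illustrative assumptions, since the embodiments only require that face feature information whose similarity exceeds a specified value be grouped together.

```kotlin
import kotlin.math.sqrt

// A face feature vector produced by the feature recognition model (assumed representation).
typealias FaceFeature = FloatArray

fun cosineSimilarity(a: FaceFeature, b: FaceFeature): Float {
    var dot = 0f; var na = 0f; var nb = 0f
    for (i in a.indices) { dot += a[i] * b[i]; na += a[i] * a[i]; nb += b[i] * b[i] }
    return dot / (sqrt(na) * sqrt(nb))
}

// Assign each target face feature to the first existing group whose representative
// (here simply the group's first member) it matches, otherwise start a new group.
fun cluster(features: List<FaceFeature>, threshold: Float = 0.8f): List<MutableList<FaceFeature>> {
    val groups = mutableListOf<MutableList<FaceFeature>>()
    for (f in features) {
        val match = groups.firstOrNull { g -> cosineSimilarity(g.first(), f) > threshold }
        if (match != null) match.add(f) else groups.add(mutableListOf(f))
    }
    return groups
}
```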
According to the image processing method in the embodiment of the application, when different image clustering requests are received, the corresponding target face images are acquired according to each request, so that different image clustering requests yield different target face images rather than all images in the mobile terminal, which saves the resources of the mobile terminal and reduces its resource consumption.
In one embodiment, when receiving the first image clustering request, acquiring the first face feature information in the target face image includes: performing face recognition on the target face image according to the updated feature recognition model to acquire the first face feature information; or converting the stored face feature information of the target face image into the first face feature information according to a feature information conversion model.
When the feature recognition model in the mobile terminal is updated, that is, when the algorithm for extracting face feature information from face images is updated, the mobile terminal can take the set of face images in the stored images as the target face images, namely the face images in the memory images and the SD card images of the mobile terminal.
The mobile terminal can perform face recognition on the target face images with the updated feature recognition model to acquire the first face feature information in the target face images. The face feature information extracted by the updated feature recognition model may be inconsistent with the face feature information extracted by the original feature recognition model; after the mobile terminal re-extracts the first face feature information in the target face images with the updated feature recognition model, the target face images can be clustered according to the re-extracted first face feature information.
In one embodiment, after the feature recognition model in the mobile terminal is updated, the mobile terminal may convert the stored face feature information of the target face image into face feature information corresponding to the updated feature recognition model, that is, convert the stored face feature information of the target face image into the first face feature information. The mobile terminal can re-cluster the target face images according to the first face feature information.
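The following sketch contrasts the two alternatives described above, re-extraction with the updated model versus conversion of the stored feature information; the FeatureExtractor and FeatureConverter interfaces are hypothetical and stand in for whatever feature recognition model and feature information conversion model the mobile terminal actually uses.

```kotlin
// Hypothetical interfaces for the updated feature recognition model and the
// feature information conversion model; neither API is defined in the patent.
interface FeatureExtractor { fun extract(faceImage: ByteArray): FloatArray }
interface FeatureConverter { fun convert(oldFeature: FloatArray): FloatArray }

fun firstFaceFeature(
    faceImage: ByteArray,
    storedFeature: FloatArray?,          // feature produced by the old model, if cached
    extractor: FeatureExtractor,         // updated feature recognition model
    converter: FeatureConverter?         // feature information conversion model, if available
): FloatArray =
    if (storedFeature != null && converter != null)
        converter.convert(storedFeature) // reuse the stored feature, cheaper than re-extraction
    else
        extractor.extract(faceImage)     // re-run recognition with the updated model
```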
According to the image processing method in the embodiment of the application, after the feature recognition model in the mobile terminal is updated, the face feature information of the face image is extracted again. After the feature recognition model is updated, the face images are clustered in time, and the timeliness of face image clustering is improved.
In one embodiment, when receiving the second image clustering request, acquiring the target face image corresponding to the image clustering request includes: acquiring the face images in the first face image set other than those in the second face image set as the target face images. Clustering the target face images according to the first face feature information includes: matching the first face feature information with second face feature information of the second face image set, and, if the similarity exceeds a preset value, acquiring the first image corresponding to the second face feature information and dividing the target face image into the cluster group of the first image according to the first clustering information.
The mobile terminal can upload the face images in the memory images to the server, and the server can perform image clustering on the received face images and send the clustering result to the mobile terminal. If the mobile terminal detects that the server has issued image clustering information and the image set corresponding to that clustering information is not identical to the first face image set, the mobile terminal updates the clustering information of its second face image set according to the image clustering information issued by the server, and then acquires the face images in the first face image set other than those in the second face image set, namely the face images on the SD card and the face images in memory that are not covered by the image clustering information issued by the server. The mobile terminal can obtain the face feature information of a target face image and perform similarity matching between it and the face feature information of the second face image set; if the similarity is greater than a preset threshold, the target face image is divided into the cluster group corresponding to that face feature information of the second face image set.
According to the image processing method in the embodiment of the application, after the image clustering information sent by the server is received, the face images which are not uploaded to the server are clustered by taking the image clustering information sent by the server as a standard, so that the accuracy of clustering the face images is improved.
In one embodiment, after receiving the image clustering information issued by the server, the mobile terminal updates the local image clustering information according to it. If, for the same image, the image clustering information issued by the server is inconsistent with the local image clustering information, the mobile terminal detects whether the image carries a user operation mark in the local image clustering information. If it does, the mobile terminal keeps the local clustering information of the image and uploads it to the server, so that the server overwrites its original clustering information for that image with the local clustering information; if it does not, the mobile terminal overwrites the local image clustering information with the image clustering information issued by the server.
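A minimal sketch of this conflict rule is shown below; the ClusterRecord type and the upload callback are illustrative assumptions.

```kotlin
// One clustering entry for an image; userOperated corresponds to the user operation mark.
data class ClusterRecord(val imageId: String, val groupId: String, val userOperated: Boolean)

// Merge the server-issued clustering information into the local information:
// a user-operated local result wins and is pushed back to the server,
// otherwise the server result overwrites the local one.
fun mergeClusterInfo(
    local: Map<String, ClusterRecord>,
    server: Map<String, ClusterRecord>,
    uploadToServer: (ClusterRecord) -> Unit
): Map<String, ClusterRecord> {
    val merged = local.toMutableMap()
    for ((imageId, serverRecord) in server) {
        val localRecord = merged[imageId]
        when {
            localRecord == null || localRecord.groupId == serverRecord.groupId ->
                merged[imageId] = serverRecord                   // consistent: accept server info
            localRecord.userOperated -> uploadToServer(localRecord) // keep the user's grouping
            else -> merged[imageId] = serverRecord               // no user mark: server overrides
        }
    }
    return merged
}
```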
According to the image processing method in the embodiment of the application, when the image clustering information of the mobile terminal is updated according to the image clustering information issued by the server, if the clustering results for the same image differ and the result on the mobile terminal comes from a user operation, the user operation is retained and the clustering information resulting from the user operation is uploaded to the server. This not only keeps the data of the mobile terminal and the server synchronized and avoids data confusion, but also preserves the user's operations and improves user engagement.
In one embodiment, when receiving the third image clustering request, the obtaining the target face image corresponding to the image clustering request includes: and carrying out face recognition on the newly added image, and if the newly added image contains a face, taking the newly added image containing the face as a target face image.
When the mobile terminal detects a newly added image, it performs face scanning on the newly added image to identify whether it is a face image. If it is, the mobile terminal obtains the face feature information in the newly added image; if not, the newly added image is not processed. After the face feature information in the newly added image is obtained, it is compared for similarity with the face feature information of the clustered images, and if the similarity exceeds a preset value, the newly added image is divided into the image cluster corresponding to the matched clustered image.
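The following sketch outlines this handling of a newly added image, reusing the cosineSimilarity helper from the clustering sketch above; the face detector, feature extractor and threshold are illustrative assumptions rather than APIs defined in the patent.

```kotlin
// Handle the third clustering request for one newly added image.
fun onNewImage(
    image: ByteArray,
    faceDetector: (ByteArray) -> Boolean,        // true if the image contains a face
    extractor: (ByteArray) -> FloatArray,        // feature recognition model
    clusteredGroups: MutableList<MutableList<FloatArray>>,
    threshold: Float = 0.8f
) {
    if (!faceDetector(image)) return             // not a face image: leave it unprocessed
    val feature = extractor(image)
    val group = clusteredGroups.firstOrNull { cosineSimilarity(it.first(), feature) > threshold }
    if (group != null) group.add(feature)        // join the matching cluster
    else clusteredGroups.add(mutableListOf(feature))  // start a new cluster
}
```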
According to the image processing method in the embodiment of the application, when a newly added image is detected on the mobile terminal, the face feature information of the new image is compared with the face feature information of the clustered images, which ensures the timeliness of clustering newly added images.
In one embodiment, when an image stored on the mobile terminal is deleted, the mobile terminal detects whether the image has image clustering information and, if so, deletes the image clustering information corresponding to the image. For example, if the first group Group1 of the mobile terminal contains the image 1.jpg and the second group Group2 also contains the image 1.jpg, then when 1.jpg is deleted from the mobile terminal, it is correspondingly deleted from Group1 and Group2, i.e. 1.jpg is no longer displayed in Group1 and Group2 of the mobile terminal album.
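A minimal sketch of this cascade deletion, assuming the album index is kept as a map from group name to image paths (an assumed representation, not one defined in the patent):

```kotlin
fun onImageDeleted(imagePath: String, groups: MutableMap<String, MutableList<String>>) {
    // Drop the image from every cluster group that references it ...
    for (members in groups.values) members.remove(imagePath)
    // ... and drop any group that becomes empty so it no longer shows in the album.
    val emptyGroups = groups.filterValues { it.isEmpty() }.keys.toList()
    for (name in emptyGroups) groups.remove(name)
}
```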
In one embodiment, before obtaining the target face image corresponding to the image clustering request, the target face image corresponding to the image clustering request is obtained when at least one of the following conditions is satisfied:
(1) the time difference between the current time and the time of obtaining the target face image last time exceeds the preset time length.
(2) The current moment is a preset moment.
(3) The mobile terminal is in a charging state.
After receiving an image clustering request, the mobile terminal can detect whether the current conditions meet the preset conditions. If they do, the mobile terminal acquires the target face image corresponding to the image clustering request; if they do not, the mobile terminal determines the images to be scanned according to the type of face scanning once the preset conditions are met.
In one embodiment, after receiving the image clustering request, the mobile terminal detects whether the current condition meets a preset condition, and if not, the mobile terminal detects whether the current condition meets the preset condition according to a preset time interval.
In one embodiment, the mobile terminal detects whether the current condition meets a preset condition according to a preset time interval, if so, detects whether the mobile terminal receives an image clustering request, and if the mobile terminal receives the image clustering request, acquires a target face image corresponding to the image clustering request.
The preset conditions include: acquiring the time when the mobile terminal last obtained a target face image, and detecting that the time difference between the current time and that time exceeds the preset duration. For example, if the preset duration is 48 hours, the current time is 10:18 on 11 August 2017, and the target face image was last obtained at 9:05 on 8 August 2017, the time difference is 73 hours and 13 minutes, which exceeds the preset duration of 48 hours, so the mobile terminal meets the preset condition and acquires the target face image corresponding to the image clustering request. The mobile terminal can also detect whether the current time is a preset time, and if so, acquire the target face image corresponding to the image clustering request. For example, if the preset time is 2:00 AM to 5:00 AM and the mobile terminal detects that the current time is 3:15 AM, it acquires the target face image corresponding to the image clustering request. When the mobile terminal is currently in a charging state, it can also acquire the target face image corresponding to the image clustering request.
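The following sketch gathers these preset conditions into a single check; the 48-hour interval and the 2:00 AM to 5:00 AM window come from the examples above, while the function signature and the use of java.time are illustrative assumptions.

```kotlin
import java.time.Duration
import java.time.LocalDateTime

// Clustering may start if at least one of the preset conditions holds.
fun mayStartClustering(
    now: LocalDateTime,
    lastClusteredAt: LocalDateTime?,
    isCharging: Boolean,
    minInterval: Duration = Duration.ofHours(48),
    idleWindow: IntRange = 2..4              // 2:00 AM up to (but not including) 5:00 AM
): Boolean {
    val intervalElapsed = lastClusteredAt == null ||
            Duration.between(lastClusteredAt, now) > minInterval
    val inIdleWindow = now.hour in idleWindow
    return intervalElapsed || inIdleWindow || isCharging   // any one condition is enough
}
```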
According to the image processing method in the embodiment of the application, before the mobile terminal acquires the target face image corresponding to the image clustering request, it judges whether the preset conditions are met and acquires the target face image only when they are. Image clustering takes a relatively long time, occupies the CPU resources of the mobile terminal, and consumes battery power. Starting image clustering while the mobile terminal is charging avoids draining the battery too quickly. Starting image clustering at the preset time avoids the mobile terminal lagging because clustering occupies a large amount of CPU resources. Performing image clustering when the time since the last clustering exceeds the specified duration ensures the timeliness of image clustering.
In one embodiment, the image processing method includes: and when detecting that the face information in the target face image is changed, re-acquiring the first face characteristic information in the target face image, and clustering the target face image according to the first face characteristic information.
The mobile terminal can mark the face state of the faces identified in the face images, and after receiving the image clustering information sent by the server, the mobile terminal can adjust the face state of the face images in the mobile terminal according to that information. When the mobile terminal detects that the face state in a target face image has changed, for example from displayed to hidden or from hidden to displayed, the mobile terminal may re-acquire the first face feature information in the target face image and cluster the target face image according to the re-acquired first face feature information.
According to the image processing method in the embodiment of the application, when the change of the face information in the face image is detected, the face characteristic information in the face image is obtained again, the face image is clustered again according to the face characteristic information in the face image, and the timeliness of the face image clustering can be realized.
FIG. 4 is a block diagram showing an example of the structure of an image processing apparatus. As shown in fig. 4, an image processing apparatus includes:
a first obtaining module 402, configured to, when an image clustering request is received, obtain a target face image corresponding to the image clustering request.
And a second obtaining module 404, configured to obtain first face feature information in the target face image.
And the clustering module 406 is configured to cluster the target face images according to the first face feature information.
Receiving the image clustering request includes: when the feature recognition model is updated, receiving a first image clustering request; when first clustering information sent by a server is received, receiving a second image clustering request, wherein the first clustering information is clustering information, issued by the server to the mobile terminal, of a second face image set uploaded to the server; and when a newly added image is detected, receiving a third clustering request.
In one embodiment, when receiving the first image clustering request, the obtaining the first facial feature information in the target facial image includes: performing face recognition on the target face image according to the updated feature recognition model to acquire first face feature information; or converting the stored face feature information of the target face image into first face feature information according to the feature information conversion model.
In one embodiment, when receiving the second image clustering request, acquiring the target face image corresponding to the image clustering request includes: acquiring the face images in the first face image set other than those in the second face image set as the target face images. Clustering the target face images according to the first face feature information includes: matching the first face feature information with second face feature information of the second face image set, and, if the similarity exceeds a preset value, acquiring the first image corresponding to the second face feature information and dividing the target face image into the cluster group of the first image according to the first clustering information.
In one embodiment, when receiving the third image clustering request, the obtaining the target face image corresponding to the image clustering request includes: and carrying out face recognition on the newly added image, and if the newly added image contains a face, taking the newly added image containing the face as a target face image.
Fig. 5 is a block diagram showing the configuration of an image processing apparatus according to another embodiment. As shown in fig. 5, an image processing apparatus includes a first obtaining module 502, a second obtaining module 504, a clustering module 506, and a processing module 508. The first obtaining module 502, the second obtaining module 504, and the clustering module 506 have the same functions as the corresponding modules in fig. 4.
The first obtaining module 502 is further configured to obtain second clustering information of the mobile terminal on the second face set.
The processing module 508 is configured to, when the comparison between the first clustering information and the second clustering information shows a difference, process the second clustering information according to the type of the comparison result.
In an embodiment, the first obtaining module 502 is further configured to obtain a target face image corresponding to the image clustering request if a time difference between a current time and a time of obtaining the target face image last time exceeds a preset time length; or if the current time is a preset time, acquiring a target face image corresponding to the image clustering request; or, the mobile terminal is in a charging state, and a target face image corresponding to the image clustering request is obtained.
In one embodiment, the second obtaining module 504 is further configured to re-acquire the first face feature information of the target face image when a change in the face information of the target face image is detected. The clustering module 506 is configured to cluster the target face image according to the first face feature information.
The embodiment of the application also provides a computer readable storage medium. One or more non-transitory computer-readable storage media containing computer-executable instructions that, when executed by one or more processors, cause the processors to perform the image processing method as described above.
The embodiment of the application also provides the mobile terminal. As shown in fig. 6, for convenience of explanation, only the parts related to the embodiments of the present application are shown, and details of the technology are not disclosed, please refer to the method part of the embodiments of the present application. The mobile terminal may be any terminal device including a mobile phone, a tablet computer, a PDA (Personal Digital Assistant), a POS (Point of Sales), a vehicle-mounted computer, a wearable device, and the like, taking the mobile terminal as the mobile phone as an example:
fig. 6 is a block diagram of a partial structure of a mobile phone related to a mobile terminal according to an embodiment of the present application. Referring to fig. 6, the handset includes: radio Frequency (RF) circuit 610, memory 620, input unit 630, display unit 640, sensor 650, audio circuit 660, wireless fidelity (WiFi) module 670, processor 680, and power supply 690. Those skilled in the art will appreciate that the handset configuration shown in fig. 6 is not intended to be limiting and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components.
The RF circuit 610 may be used for receiving and transmitting signals during information transmission and reception or during a call; in particular, it may receive downlink information from the base station and deliver it to the processor 680 for processing, and may transmit uplink data to the base station. Typically, the RF circuitry includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a Low Noise Amplifier (LNA), a duplexer, and the like. In addition, the RF circuit 610 may also communicate with networks and other devices via wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to Global System for Mobile communication (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), e-mail, Short Messaging Service (SMS), and the like.
The memory 620 may be used to store software programs and modules, and the processor 680 executes various functional applications and data processing of the mobile phone by running the software programs and modules stored in the memory 620. The memory 620 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required by at least one function (such as an application program for a sound playing function or an application program for an image playing function), and the like; the data storage area may store data (such as audio data or an address book) created according to the use of the mobile phone, and the like. Further, the memory 620 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
The input unit 630 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the cellular phone 600. Specifically, the input unit 630 may include a touch panel 631 and other input devices 632. The touch panel 631, which may also be referred to as a touch screen, may collect touch operations performed by a user on or near the touch panel 631 (e.g., operations performed by the user on or near the touch panel 631 using any suitable object or accessory such as a finger, a stylus, etc.) and drive the corresponding connection device according to a preset program. In one embodiment, the touch panel 631 may include two parts of a touch detection device and a touch controller. The touch detection device detects the touch direction of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device, converts the touch information into touch point coordinates, sends the touch point coordinates to the processor 680, and can receive and execute commands sent by the processor 680. In addition, the touch panel 631 may be implemented using various types, such as resistive, capacitive, infrared, and surface acoustic wave. The input unit 630 may include other input devices 632 in addition to the touch panel 631. In particular, other input devices 632 may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys, switch keys, etc.), and the like.
The display unit 640 may be used to display information input by the user or information provided to the user and various menus of the mobile phone. The display unit 640 may include a display panel 641. In one embodiment, the Display panel 641 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like. In one embodiment, the touch panel 631 can cover the display panel 641, and when the touch panel 631 detects a touch operation thereon or nearby, the touch panel is transmitted to the processor 680 to determine the type of the touch event, and then the processor 680 provides a corresponding visual output on the display panel 641 according to the type of the touch event. Although in fig. 6, the touch panel 631 and the display panel 641 are two independent components to implement the input and output functions of the mobile phone, in some embodiments, the touch panel 631 and the display panel 641 may be integrated to implement the input and output functions of the mobile phone.
The handset 600 may also include at least one sensor 650, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor may include an ambient light sensor and a proximity sensor; the ambient light sensor adjusts the brightness of the display panel 641 according to the brightness of ambient light, and the proximity sensor turns off the display panel 641 and/or the backlight when the mobile phone is moved to the ear. The motion sensor may include an acceleration sensor, which can detect the magnitude of acceleration in each direction and detect the magnitude and direction of gravity when the mobile phone is stationary; it can be used for applications that recognize the attitude of the mobile phone (such as switching between landscape and portrait), vibration recognition related functions (such as a pedometer or tap detection), and the like. The mobile phone may also be provided with other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor.
The audio circuit 660, the speaker 661, and the microphone 662 can provide an audio interface between the user and the mobile phone. The audio circuit 660 may transmit the electrical signal converted from the received audio data to the speaker 661, which converts the electrical signal into a sound signal for output; on the other hand, the microphone 662 converts the collected sound signal into an electrical signal, which is received by the audio circuit 660 and converted into audio data; the audio data is then output to the processor 680 for processing and transmitted to another mobile phone via the RF circuit 610, or output to the memory 620 for further processing.
WiFi belongs to short-distance wireless transmission technology, and the mobile phone can help a user to receive and send e-mails, browse webpages, access streaming media and the like through the WiFi module 670, and provides wireless broadband Internet access for the user. Although fig. 6 shows WiFi module 670, it is understood that it is not an essential component of handset 600 and may be omitted as desired.
The processor 680 is a control center of the mobile phone, and connects various parts of the entire mobile phone by using various interfaces and lines, and performs various functions of the mobile phone and processes data by operating or executing software programs and/or modules stored in the memory 620 and calling data stored in the memory 620, thereby performing overall monitoring of the mobile phone. In one embodiment, processor 680 may include one or more processing units. In one embodiment, processor 680 may integrate an application processor and a modem processor, wherein the application processor primarily handles operating systems, user interfaces, applications, and the like; the modem processor handles primarily wireless communications. It will be appreciated that the modem processor described above may not be integrated into processor 680.
The handset 600 also includes a power supply 690 (e.g., a battery) for powering the various components, which may preferably be logically coupled to the processor 680 via a power management system, such that the power management system may be used to manage charging, discharging, and power consumption.
In one embodiment, the handset 600 may also include a camera, a bluetooth module, and the like.
In the embodiment of the present application, the image processing method as described above is implemented when the processor 680 included in the mobile terminal executes a computer program stored on a memory.
Any reference to memory, storage, database, or other medium used herein may include non-volatile and/or volatile memory. Suitable non-volatile memory can include read-only memory (ROM), Programmable ROM (PROM), Electrically Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), or flash memory. Volatile memory can include Random Access Memory (RAM), which acts as external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms, such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), synchronous Link (Synchlink) DRAM (SLDRAM), Rambus Direct RAM (RDRAM), direct bus dynamic RAM (DRDRAM), and bus dynamic RAM (RDRAM).
The above-mentioned embodiments only express several embodiments of the present application, and the description thereof is more specific and detailed, but not construed as limiting the scope of the present application. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the concept of the present application, which falls within the scope of protection of the present application. Therefore, the protection scope of the present patent shall be subject to the appended claims.

Claims (10)

1. An image processing method, comprising:
when an image clustering request is received, acquiring a target face image corresponding to the image clustering request;
acquiring first face characteristic information in the target face image;
clustering the target face images according to the first face feature information;
the receiving an image clustering request comprises:
when the feature recognition model is updated, receiving a first clustering request;
when first clustering information sent by a server is received, receiving a second clustering request, wherein the first clustering information is clustering information, issued by the server to the mobile terminal, of a second face image set uploaded to the server; when a newly added image is detected, receiving a third clustering request;
when the image clustering request is the first clustering request, the target face image comprises a first face image set, the first face image set being the set of face images stored on the SD card and in the memory;
when the image clustering request is the second clustering request, the target face image comprises the face images in the first face image set other than those in the second face image set, the second face image set being the set of face images in the images stored in the memory;
and when the image clustering request is the third clustering request, the target face image is a set of face images of the newly added images.
2. The method according to claim 1, wherein when receiving the first clustering request, the obtaining first facial feature information in the target facial image comprises:
performing face recognition on the target face image according to the updated feature recognition model to acquire first face feature information;
or converting the stored face feature information of the target face image into the first face feature information according to a feature information conversion model.
3. The method according to claim 1, wherein when receiving the second clustering request, the obtaining the target face image corresponding to the image clustering request comprises:
acquiring the face images in the first face image set except the second face image set as the target face images;
the clustering the target face image according to the first face feature information includes:
and matching the first face feature information with second face feature information in a second image set, if the similarity exceeds a preset value, acquiring a first image corresponding to the second face feature information, and dividing the target face image into cluster groups of the first image corresponding to the first cluster information.
4. The method according to claim 1, wherein when receiving the third clustering request, the obtaining the target face image corresponding to the image clustering request comprises:
and carrying out face recognition on the newly added image, and if the newly added image contains a face, taking the newly added image containing the face as the target face image.
5. The method of claim 1, further comprising:
acquiring second clustering information obtained by the mobile terminal for the second face image set;
when a comparison of the first clustering information and the second clustering information shows a difference, processing the second clustering information according to the type of the comparison result;
wherein the processing of the second clustering information according to the type of the comparison result comprises:
detecting whether the second clustering information carries a user operation identifier; if so, keeping the second clustering information; and if not, overwriting the second clustering information with the first clustering information.
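The reconciliation rule of claim 5 can be sketched as a merge in which user-edited groups win; the ClusterGroup type and the map-based representation of clustering information are assumptions made only for the example.

```kotlin
// A cluster group plus the "user operation identifier" of claim 5.
data class ClusterGroup(
    val members: Set<String>,        // image ids in this cluster
    val userEdited: Boolean = false  // true if the user has manually adjusted this group
)

// Groups the user has edited are kept; all other local (second) clustering
// information is overwritten by the server's (first) clustering information.
fun reconcile(
    firstInfo: Map<String, ClusterGroup>,   // first clustering information (from the server)
    secondInfo: Map<String, ClusterGroup>   // second clustering information (on the terminal)
): Map<String, ClusterGroup> =
    (firstInfo.keys + secondInfo.keys).associateWith { id ->
        val local = secondInfo[id]
        when {
            local != null && local.userEdited -> local                  // keep user-edited result
            firstInfo[id] != null             -> firstInfo.getValue(id) // overwrite with server result
            else                              -> local!!                // group exists only locally
        }
    }
```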
6. The method according to any one of claims 1 to 5, wherein before the acquiring of the target face image corresponding to the image clustering request, the method further comprises:
if the time difference between the current time and the last time the target face image was acquired exceeds a preset duration, acquiring the target face image corresponding to the image clustering request;
or, if the current time is a preset time, acquiring the target face image corresponding to the image clustering request;
or, if the mobile terminal is in a charging state, acquiring the target face image corresponding to the image clustering request.
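The trigger conditions of claim 6 amount to a simple predicate over elapsed time, time of day and charging state; the concrete interval and hour below are illustrative assumptions, since the claim only speaks of a preset duration, a preset time and a charging state.

```kotlin
// Start clustering only when at least one of the claim-6 conditions holds.
fun shouldAcquireTargetImages(
    nowMillis: Long,
    lastAcquisitionMillis: Long,
    currentHourOfDay: Int,
    isCharging: Boolean,
    presetDurationMillis: Long = 24 * 60 * 60 * 1000L,  // assumed: one day
    presetHourOfDay: Int = 2                             // assumed: 02:00
): Boolean =
    (nowMillis - lastAcquisitionMillis) > presetDurationMillis ||  // preset duration exceeded
    currentHourOfDay == presetHourOfDay ||                         // preset time reached
    isCharging                                                     // terminal is in a charging state
```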
7. The method according to any one of claims 1 to 5, further comprising:
when it is detected that the face information in the target face image has changed, re-acquiring the first face feature information in the target face image;
and clustering the target face image according to the first face feature information.
8. An image processing apparatus characterized by comprising:
the first acquisition module is used for acquiring a target face image corresponding to an image clustering request when the image clustering request is received;
the second acquisition module is used for acquiring first face characteristic information in the target face image;
the clustering module is used for clustering the target face image according to the first face characteristic information;
wherein the receiving of the image clustering request comprises:
when the feature recognition model is updated, receiving a first clustering request;
when first clustering information sent by a server is received, receiving a second clustering request, wherein the first clustering information is clustering information of a second face image set uploaded by the mobile terminal to the server;
when a newly added image is detected, receiving a third clustering request;
wherein, when the image clustering request is the first clustering request, the target face image comprises a first face image set, and the first face image set is a set of face images stored in an SD card and a memory;
when the image clustering request is the second clustering request, the target face image comprises the face images in the first face image set other than those in the second face image set, and the second face image set is a set of face images among the images stored in the memory;
and when the image clustering request is the third clustering request, the target face image is a set of face images in the newly added images.
9. A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the method according to any one of claims 1 to 7.
10. A mobile terminal comprising a memory and a processor, the memory having stored therein computer readable instructions which, when executed by the processor, cause the processor to perform the steps of the method of any one of claims 1 to 7.
CN201710850225.XA 2017-09-15 2017-09-15 Image processing method, image processing device, computer-readable storage medium and mobile terminal Expired - Fee Related CN107679559B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201710850225.XA CN107679559B (en) 2017-09-15 2017-09-15 Image processing method, image processing device, computer-readable storage medium and mobile terminal
PCT/CN2018/101502 WO2019052316A1 (en) 2017-09-15 2018-08-21 Image processing method and apparatus, computer-readable storage medium and mobile terminal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710850225.XA CN107679559B (en) 2017-09-15 2017-09-15 Image processing method, image processing device, computer-readable storage medium and mobile terminal

Publications (2)

Publication Number Publication Date
CN107679559A CN107679559A (en) 2018-02-09
CN107679559B (en) 2020-01-10

Family

ID=61136878

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710850225.XA Expired - Fee Related CN107679559B (en) 2017-09-15 2017-09-15 Image processing method, image processing device, computer-readable storage medium and mobile terminal

Country Status (2)

Country Link
CN (1) CN107679559B (en)
WO (1) WO2019052316A1 (en)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107679559B (en) * 2017-09-15 2020-01-10 Oppo广东移动通信有限公司 Image processing method, image processing device, computer-readable storage medium and mobile terminal
CN109101542B (en) * 2018-07-02 2021-02-02 深圳市商汤科技有限公司 Image recognition result output method and device, electronic device and storage medium
CN111222358B (en) * 2018-11-23 2024-02-13 杭州海康威视数字技术股份有限公司 Face static detection method and system
CN109886239B (en) * 2019-02-28 2021-05-04 北京旷视科技有限公司 Portrait clustering method, device and system
CN110266953B (en) * 2019-06-28 2021-05-07 Oppo广东移动通信有限公司 Image processing method, image processing apparatus, server, and storage medium
CN110866443B (en) * 2019-10-11 2023-06-16 厦门身份宝网络科技有限公司 Portrait storage method, face recognition equipment and storage medium
CN111027406B (en) * 2019-11-18 2024-02-09 惠州Tcl移动通信有限公司 Picture identification method and device, storage medium and electronic equipment
CN111091106B (en) * 2019-12-23 2023-10-10 浙江大华技术股份有限公司 Image clustering method and device, storage medium and electronic device
CN113128305A (en) * 2019-12-31 2021-07-16 深圳云天励飞技术有限公司 Portrait archive accumulation evaluation method and device, electronic equipment and storage medium
CN112131999B (en) * 2020-09-17 2023-11-28 浙江商汤科技开发有限公司 Identity determination method and device, electronic equipment and storage medium
CN113920283B (en) * 2021-12-13 2022-03-08 中国海洋大学 Infrared image trail detection and extraction method based on cluster analysis and feature filtering
CN115880745B (en) * 2022-09-07 2023-07-04 以萨技术股份有限公司 Data processing system for acquiring facial image characteristics

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108073948A (en) * 2012-01-17 2018-05-25 华为技术有限公司 A kind of photo sort management, server, apparatus and system
JP6472279B2 (en) * 2015-03-09 2019-02-20 キヤノン株式会社 Image processing apparatus and image processing method
CN105404863B (en) * 2015-11-13 2018-11-02 小米科技有限责任公司 Character features recognition methods and system
CN105760533B (en) * 2016-03-08 2019-05-03 Oppo广东移动通信有限公司 A kind of photo management method and device
CN106202392A (en) * 2016-07-07 2016-12-07 珠海市魅族科技有限公司 A kind of photo classification method and terminal
CN106980696B (en) * 2017-04-06 2023-06-20 腾讯科技(深圳)有限公司 Photo file classification method and device
CN107679559B (en) * 2017-09-15 2020-01-10 Oppo广东移动通信有限公司 Image processing method, image processing device, computer-readable storage medium and mobile terminal

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9014514B2 (en) * 2000-11-06 2015-04-21 Nant Holdings Ip, Llc Image capture and identification system and process
CN102804154A (en) * 2009-06-23 2012-11-28 佳能株式会社 Image processing apparatus, control method for image processing apparatus, and program
CN102542299A (en) * 2011-12-07 2012-07-04 惠州Tcl移动通信有限公司 Face recognition method, device and mobile terminal capable of recognizing face
CN104252628A (en) * 2013-06-28 2014-12-31 广州华多网络科技有限公司 Human face image marking method and system
CN104615233A (en) * 2013-11-01 2015-05-13 索尼电脑娱乐公司 Information processing device and information processing method

Also Published As

Publication number Publication date
CN107679559A (en) 2018-02-09
WO2019052316A1 (en) 2019-03-21

Similar Documents

Publication Publication Date Title
CN107679559B (en) Image processing method, image processing device, computer-readable storage medium and mobile terminal
CN107729815B (en) Image processing method, image processing device, mobile terminal and computer readable storage medium
CN108320744B (en) Voice processing method and device, electronic equipment and computer readable storage medium
CN107679560B (en) Data transmission method and device, mobile terminal and computer readable storage medium
CN107729889B (en) Image processing method and device, electronic equipment and computer readable storage medium
CN107992822B (en) Image processing method and apparatus, computer device, computer-readable storage medium
CN109144232B (en) Process processing method and device, electronic equipment and computer readable storage medium
CN108022274B (en) Image processing method, image processing device, computer equipment and computer readable storage medium
WO2019051795A1 (en) Image processing method and device, terminal, server, and computer-readable storage medium
CN108549698B (en) File processing method and device, mobile terminal and computer readable storage medium
CN112703714A (en) Application program processing method and device, computer equipment and computer readable storage medium
CN108984066B (en) Application icon display method and mobile terminal
CN107729391B (en) Image processing method, image processing device, computer-readable storage medium and mobile terminal
CN107194223B (en) Fingerprint identification area display method and related product
CN108256466B (en) Data processing method and device, electronic equipment and computer readable storage medium
CN116486833B (en) Audio gain adjustment method and device, storage medium and electronic equipment
CN107632985B (en) Webpage preloading method and device
CN110753914A (en) Information processing method, storage medium and mobile terminal
CN110717486B (en) Text detection method and device, electronic equipment and storage medium
CN112913267B (en) Resource processing method, device, terminal, server and readable storage medium
CN108228357B (en) Memory cleaning method and mobile terminal
CN108513005B (en) Contact person information processing method and device, electronic equipment and storage medium
CN109992395B (en) Application freezing method and device, terminal and computer readable storage medium
CN108021669B (en) Image classification method and device, electronic equipment and computer-readable storage medium
WO2019051799A1 (en) Image processing method and apparatus, mobile terminal, server, and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 523860 No. 18, Wu Sha Beach Road, Changan Town, Dongguan, Guangdong
Applicant after: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS Corp.,Ltd.
Address before: 523860 No. 18, Wu Sha Beach Road, Changan Town, Dongguan, Guangdong
Applicant before: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS Corp.,Ltd.

GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20200110