WO2019114508A1 - Image processing method and apparatus, computer-readable storage medium, and electronic device - Google Patents


Info

Publication number
WO2019114508A1
WO2019114508A1 · PCT/CN2018/116592 · CN2018116592W
Authority
WO
WIPO (PCT)
Prior art keywords
face
image
user identity
images
master
Application number
PCT/CN2018/116592
Other languages
English (en)
Chinese (zh)
Inventor
达剑
Original Assignee
Oppo广东移动通信有限公司
Application filed by Oppo广东移动通信有限公司
Publication of WO2019114508A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50: Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/51: Indexing; Data structures therefor; Storage structures
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50: Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/58: Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/583: Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/178: Human faces, e.g. facial parts, sketches or expressions estimating age from face image; using age information for improving recognition

Definitions

  • the present application relates to the field of computer technology, and in particular, to an image processing method, a storage medium, and an electronic device.
  • the photo album function has become one of the most commonly used features of electronic devices, with extremely high usage frequency.
  • a large number of images are stored in the album of an electronic device, and a traditional electronic device album provides various image browsing and classification functions.
  • Embodiments of the present application provide an image processing method, apparatus, storage medium, and electronic device.
  • An image processing method comprising:
  • an album having a time sequence of the images in the image set is generated.
  • An image processing apparatus comprising:
  • An image collection obtaining module configured to acquire a human face in the image; and acquire an image collection having the same human face in the image;
  • An age value identification module configured to identify an age value corresponding to a face of the image in the image set
  • a sorting information generating module configured to generate, according to the age value, sorting information of an image in the image set
  • an album generating module configured to generate, according to the sorting information and the image set, a photo album having a sequence of images in the image set.
  • a computer readable storage medium having stored thereon a computer program that, when executed by a processor, implements the operations of the image processing method described in any of the embodiments herein.
  • An electronic device comprising a memory, a processor, and a computer program stored on the memory and operable on the processor, the processor executing the computer program to implement the operations of the image processing method according to any of the embodiments of the present application.
  • the above image processing method, apparatus, storage medium, and electronic device acquire a face in an image; acquire an image set whose images contain the same face; identify an age value corresponding to the face in each image of the image set; generate sorting information for the images according to the age values; and generate, according to the sorting information and the image set, an album with a time sequence containing the images in the image set. The images in the album can thus be displayed in time order, which improves user stickiness and increases the flexibility of classifying and displaying images.
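The overall flow just summarized can be sketched in Python. The `Image` record, its `face_id` and `age` fields, and the helper names are illustrative assumptions, standing in for the face recognition and age estimation steps the patent describes:

```python
from dataclasses import dataclass

@dataclass
class Image:
    name: str
    face_id: str   # identity of the face recognized in the image (assumed precomputed)
    age: int       # age value recognized for that face

def build_albums(images, ascending=True):
    """Group images by face identity, then order each group by the
    recognized age value to form a time-sequenced album."""
    sets = {}
    for img in images:                       # acquire image sets with the same face
        sets.setdefault(img.face_id, []).append(img)
    albums = {}
    for face_id, group in sets.items():      # generate sorting information per set
        albums[face_id] = sorted(group, key=lambda i: i.age, reverse=not ascending)
    return albums

# The FIG. 4A example: one user's face recognized at ages 8, 16, 25, and 40.
albums = build_albums([
    Image("image 1", "user_a", 8),
    Image("image 4", "user_a", 40),
    Image("image 2", "user_a", 16),
    Image("image 3", "user_a", 25),
])
```

With ascending order, the album replays the face from youngest to oldest; `ascending=False` gives the reverse-ordered album.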
  • FIG. 1 is an application scenario diagram of an image processing method in an embodiment.
  • FIG. 2 is a schematic diagram of the internal structure of an electronic device in an embodiment.
  • FIG. 3 is a flowchart of an image processing method in one embodiment.
  • FIG. 4A is a schematic diagram of displaying images in an album in one embodiment.
  • FIG. 4B is a schematic diagram of displaying images in an album in another embodiment.
  • FIG. 4C is a schematic diagram of dividing images and displaying images in an album in one embodiment.
  • FIG. 5 is a flowchart of forming an image set in one embodiment.
  • FIG. 6 is a flowchart of determining a master face from the face area and/or face clarity of each face in an image in one embodiment.
  • FIG. 7 is a flowchart of detecting, when an image is recognized to contain a plurality of faces, whether the remaining faces in the image correspond to existing user identities and, if so, classifying the remaining faces under the corresponding existing user identities to obtain the user identities of the remaining faces, in one embodiment.
  • FIG. 8 is a flowchart of processing a suspected passerby face in one embodiment.
  • FIG. 9 is a flowchart of dividing new images into image sets in one embodiment.
  • FIG. 10 is a block diagram of the structure of an image processing apparatus in an embodiment.
  • FIG. 11 is a block diagram of the structure of an image processing apparatus in another embodiment.
  • FIG. 12 is a schematic diagram of an image processing circuit in one embodiment.
  • an application scenario diagram of an image processing method is provided, where the application environment includes an electronic device 110 and a server 120.
  • the electronic device 110 and the server 120 are connected through a network.
  • an image is stored in the electronic device 110, and the image may be stored in the internal memory of the electronic device 110 or in an SD (Secure Digital) card installed in the electronic device 110.
  • the server 120 may also store an image, and the image stored on the server 120 may be an image stored by the electronic device in the cloud.
  • the electronic device can process images stored locally or in the cloud, such as forming a collection of images from the image, or sorting and displaying the images in the collection of images.
  • the electronic device includes a processor, a memory, and a display screen connected by a system bus.
  • the processor is used to provide computing and control capabilities to support the operation of the entire electronic device.
  • the memory is used to store data, programs, etc., and at least one computer program is stored on the memory, and the computer program can be executed by the processor to implement an image processing method suitable for an electronic device provided in the embodiments of the present application.
  • the memory may include a non-volatile storage medium such as a magnetic disk, an optical disk, or a read-only memory (ROM), or a random access memory (RAM).
  • the memory includes a non-volatile storage medium and an internal memory.
  • the non-volatile storage medium stores an operating system and a computer program.
  • the computer program can be executed by a processor for implementing an image processing method provided by the various embodiments below.
  • the internal memory provides a cached operating environment for operating system computer programs in a non-volatile storage medium.
  • the display screen can be a touch screen, such as a capacitive screen or an electronic screen, for displaying visual information such as an image, and can also be used to detect a touch operation applied to the display screen to generate a corresponding command.
  • the electronic device can be a cell phone, a tablet or a personal digital assistant or a wearable device.
  • FIG. 2 is only a block diagram of part of the structure related to the solution of the present application, and does not constitute a limitation on the computer device to which the solution is applied. A specific computer device may include more or fewer components than shown in the figure, combine some components, or have a different arrangement of components.
  • the electronic device can further include a camera through which an image can be generated and processed.
  • an image processing method is provided. This embodiment is described by taking its application to the electronic device shown in FIG. 1 as an example.
  • the method includes:
  • Operation 302 obtaining a face in the image.
  • the image is an image including a human face
  • the electronic device may acquire images from a multi-user family shared album on the local device and/or other devices; an image may also be one shared by another device, where the shared image is an image in a multi-user family shared album to which the electronic device has access rights, and the electronic device recognizes and acquires the face in the image.
  • the electronic device may receive an image division instruction that specifies one or more corresponding images, and acquire the images specified by the division instruction. Images may also be acquired automatically, for example when the device is charging with the screen off, or when the resource usage of the electronic device is low and/or the battery level exceeds a preset threshold.
  • the resource usage includes, but is not limited to, one or more of CPU usage and memory usage. For the acquired image, the face in the image can be further obtained.
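The automatic-acquisition gating described above might be sketched as follows. The specific threshold values and the `cpu_usage`/`battery_level` inputs are illustrative assumptions, not values from the patent:

```python
def should_auto_acquire(cpu_usage, memory_usage, battery_level,
                        charging, screen_off,
                        cpu_max=0.3, mem_max=0.5, battery_min=0.5):
    """Acquire images automatically when the device is charging with the
    screen off, or when resource usage (CPU and memory) is low and/or the
    battery level exceeds the preset threshold."""
    if charging and screen_off:
        return True
    low_usage = cpu_usage <= cpu_max and memory_usage <= mem_max
    return low_usage and battery_level > battery_min
```

The check keeps image scanning off the foreground path: it only runs when the user is unlikely to notice the extra load.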
  • the face in the image is automatically acquired.
  • Operation 304 acquiring a set of images having the same face in the image.
  • the image set is a set of images formed according to a face, and the image in each image set is an image containing the same face.
  • the electronic device presets a correspondence between the face and the image set. According to the acquired face, the image set having the same face can be acquired from the image according to the preset correspondence.
  • when the images change, the corresponding image set may be automatically acquired again.
  • a change of the images includes the addition and/or deletion of images, and may also include modification of pixels in an existing image, such as applying beautification processing to an existing image.
  • Operation 306 identifying an age value corresponding to a face of the image in the image collection.
  • the age value indicates the apparent age presented by the face in the image as recognized by the electronic device.
  • the electronic device may identify, for each image in the image set, the face region of the face in the image, and identify feature data of the age-related parts of the face.
  • the parts may include a plurality of parts; analysis is performed on the feature data of each part, and the age value corresponding to the face is calculated.
  • the parts include those that change markedly with age, such as the pupils, the eye corners, the mouth corners, and the nose.
  • the age value corresponding to the face in each of images 1 to 4 can be identified from the above parts; for example, the age value of the face in image 1 of FIG. 4A is recognized as 8 years old, the face in image 2 as 16 years old, the face in image 3 as 25 years old, and the face in image 4 as 40 years old.
  • Operation 308 generating ranking information for the images in the image collection based on the age value.
  • the sorting information represents information for sorting and displaying the images in the image set.
  • the sorting information includes an order of each image within the image set, so that the images can be displayed in that order. The sorting information may order the images by age value from small to large, or from large to small. Taking ascending order by age value as an example, the smaller the age value of the face in an image, the earlier the image is displayed when the images in the image set are played or shown.
  • Operation 310: generating, based on the sorting information and the image set, an album with a time sequence containing the images in the image set.
  • the electronic device can gather the images having the same face in the image set in one place to form a corresponding album, and generate, based on the sorting information, the order in which the images are displayed in the album, thereby forming an album with a time sequence.
  • the images in the image set may be sorted from small to large according to the age value of the same face, or from large to small, and the generated time sequence yields an album ordered by age in forward or reverse order.
  • in the image processing method described above, a face in an image is acquired; an image set whose images have the same face is acquired; an age value corresponding to the face in each image of the image set is identified; sorting information for the images is generated according to the age values; and an album with a time sequence containing the images in the image set is generated according to the sorting information and the image set. The images in the album can thus be displayed in time order, providing a brand-new way of classifying images and more ways to classify and view them, which improves user stickiness and the flexibility of classifying and displaying images.
  • the time sequence includes forward ordering and/or reverse ordering according to age values.
  • the album with a time sequence includes a file of any form among a slide show, a movie, or an album, which may be sorted from large to small or from small to large according to the age values.
  • the electronic device may receive a play instruction for the image in the album with time series, obtain the timing of the corresponding album according to the play instruction, and sequentially display the images in the album according to the timing.
  • the electronic device can provide a virtual button for playing the images in the album, and the play command can be triggered when a click operation on the virtual button is received. A trigger voice message for the play command may also be preset; the voice receiving device is invoked to receive voice information, and when the received voice information is detected to match the preset voice message, the play command is also triggered.
  • the timing of the pre-generated album may be acquired, and the images in the album are sequentially displayed according to the timing.
  • the image in the album may be played in a slide show according to the play command, or a short film or album may be generated for the user to view.
  • the album 1 includes images 1 to 4; and the album 2 includes images 5 to 7.
  • Each album is also set with a corresponding timing.
  • the timing of the images in the album 1 is sorted in the order of the images 1 to 4, and the album 2 is sorted in the order of the images 5 to 7.
  • the electronic device can play the images in the album with time series according to the sorting to show the evolution of the characters in an album from small to large or from large to small.
  • operation 308 includes generating sorting information for a plurality of images based on the generation times of the plurality of images when the age values of the faces in the plurality of images of the same image set are the same.
  • the generation time of the multiple images may be further compared.
  • the plurality of images are sorted according to the generation time. For example, it can be sorted according to the generation time from small to large, or from large to small.
  • when the sorting information orders the images by age value from small to large, images with the same age value are sorted by generation time from earliest to latest; when the sorting information orders the images by age value from large to small, images with the same age value are sorted by generation time from latest to earliest.
  • taking ascending order by age value as an example: suppose there are three images in which the face is that of user A, and the age value of user A's face is determined to be 25 in each. The generation times of the three images can then be compared, and sorting information for the three images is generated, with the image generated earlier sorted first.
  • the orderliness of image sorting is further improved by sorting images with the same age value according to their generation time.
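The combined ordering rule (age value as the primary key, generation time as the tie-breaker, with both directions reversed together) can be sketched as below; the `(age_value, generation_time)` tuples are an illustrative representation:

```python
def sort_images(images, ascending=True):
    """Sort (age_value, generation_time) pairs: the primary key is the
    age value; images with equal age values are ordered by generation
    time, earliest first when ascending and latest first otherwise."""
    return sorted(images, key=lambda img: (img[0], img[1]), reverse=not ascending)

# Three images of user A, all recognized as 25 years old:
photos = [(25, "2018-05-01"), (25, "2018-01-01"), (25, "2018-03-01")]
```

Because a single tuple key is used and `reverse` flips the whole comparison, the descending-by-age album automatically shows same-age images from latest to earliest, matching the rule above.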
  • the above method further includes an operation of forming an image set, the operation being performed prior to operation 304, as shown in FIG. 5, comprising:
  • Operation 502 performing face recognition on the face in the image, and determining a user identity corresponding to the face in the image.
  • the electronic device can perform face recognition on the image, extract face feature information therein, and identify the user identity of the face according to the face feature information. For example, the electronic device may identify, for the image A, a face in which the face belongs to the user A, and the user identity is the user A.
  • it may further be detected whether the facial feature information matches the facial feature information of user identities that have already been identified; when it matches, the user identity corresponding to the face in the image is determined to be the matched user identity. When no user identity matches the facial feature information, the face corresponds to a newly appearing user identity.
  • a new user identity can then be created as the user identity corresponding to the face in the image.
  • the facial feature information in the image can be directly used as the facial feature information corresponding to the new user identity.
  • Operation 504 dividing the image into an image collection corresponding to the user identity to form an image collection having the same face.
  • the electronic device sets a corresponding image set for each user identity, and the images in the image set are images having the same user identity.
  • the electronic device can establish a mapping relationship between different user identities and different image sets, so that images of the same user identity are divided into the same image set according to the corresponding relationship to form an image set having the same face.
  • a user identity tag can be set on the image to mark the user identity of the face in it, so that in subsequent processing the tag indicates that the image has already had its user identity recognized, avoiding duplicate recognition. According to the user identity tag, the image is divided into the image set corresponding to that tag.
  • when the identified user identity is an existing user identity, the image set corresponding to the existing user identity may be acquired, and the image is divided into the corresponding image set.
  • when the identified user identity is a new user identity, an image set corresponding to the new user identity is created, and the image is divided into the created image set.
  • for example, when user A corresponds to an existing user identity, the image set A corresponding to user A is directly acquired, and the image is divided into image set A; when user A is a new user identity, an image set A corresponding to user A is created, and the image is divided into image set A, so that the images in image set A all contain the face of user A.
  • for example, for an image 8 as shown in FIG. 4C, image 8 may be divided into the image set corresponding to FIG. 4B.
  • sorting information for the images is generated from the age value corresponding to the face in each image, and an album with a time sequence is generated and displayed.
  • the order may follow the age values; for example, when the age value corresponding to the face in image 8 is recognized to fall between those of image 6 and image 7, the sort position of image 8 lies between image 6 and image 7, and an album with the time sequence image 5 - image 6 - image 8 - image 7, as shown in FIG. 4C, is generated.
  • the user to whom a face belongs is determined, and the image is then divided into the image set corresponding to that user, thereby dividing images by the user identity of the faces they contain; images containing the same user's face are divided into the same image set, which improves the flexibility of image division.
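Operations 502 and 504 can be sketched as follows. The feature vectors, the distance metric and threshold, and the helper names are illustrative assumptions standing in for a real face recognition model:

```python
def assign_identity(face_feature, known_identities, threshold=0.6):
    """Match a face feature vector against known identities (operation 502);
    create a new identity when nothing matches."""
    def distance(a, b):  # squared Euclidean distance as a stand-in metric
        return sum((x - y) ** 2 for x, y in zip(a, b))
    for identity, feature in known_identities.items():
        if distance(face_feature, feature) < threshold:
            return identity
    new_id = f"user_{len(known_identities) + 1}"
    known_identities[new_id] = face_feature  # the image's feature becomes the identity's feature
    return new_id

def divide_into_sets(images, image_sets, known_identities):
    """Divide each image into the image set of its identified user (operation 504)."""
    for name, feature in images:
        identity = assign_identity(feature, known_identities)
        image_sets.setdefault(identity, []).append(name)
    return image_sets
```

Note how an unmatched face both creates a new identity and seeds that identity's feature data, mirroring the bullet above about directly reusing the image's facial feature information.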
  • operation 302 includes determining a master face of the plurality of faces when the image includes a plurality of faces.
  • the image may include a plurality of faces, and when it is recognized that a plurality of faces are included, it may further be identified which of the faces is the master face in the image.
  • the master face of an image refers to the face of the core person in the image whose clarity reaches the threshold at which the identity corresponding to the face can be recognized.
  • the core person refers to the person corresponding to the master face in a single-person image, or the person corresponding to the master face in a multi-person group image, or a person whose face is not a master face but appears in the same image as master faces belonging to different user identities more than a set threshold number of times.
  • only one face is extracted as the master face for each image. Face recognition is performed on the master face of each image using a face recognition algorithm, and the face category corresponding to the master face of each image is obtained.
  • the face category refers to the identity of the person obtained as the result of master face recognition.
  • the electronic device obtains a single-person photo from the album; the photo is an image, and the master face is extracted from it. Because it is a single-person photo, there is only one face in the image; when the clarity of that face reaches the threshold at which the face can be recognized, the face in the single-person photo is marked as the master face.
  • for example, the identity corresponding to the master face is Zhang San; that is, the user identity corresponding to the master face is Zhang San.
  • when the face of the core person is extracted and its clarity reaches the threshold, the identity corresponding to the face can be recognized; the face is marked as the master face, face recognition is performed on the master face, and the identity corresponding to the master face, i.e., the face category corresponding to the master face, is obtained.
  • when the clarity of the core person's face in an image does not reach the clarity threshold for recognizing the corresponding identity, continue to check whether other images in the album contain similar faces that have already been marked as master faces; when such a face exists, mark this face and that master face as the same class of master face. Conversely, if a preset number of faces similar to the core person's face exist in other images in the album, the clearest of the similar faces is selected as the master face, the similar faces are marked as one class of master face, and face recognition is performed on the master face to obtain the corresponding identity, which is the user identity corresponding to the master face and also the user identity of the core person's face in the image.
  • the preset number can be set to 5. Of course, in other embodiments, other reasonable numbers can be set, for example 3, 4, 6, or 10.
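A sketch of the fallback above: when a face is too blurry for identity recognition, reuse an already-marked similar master face if one exists; otherwise, once at least the preset number of similar faces exist, promote the clearest to master face. The similarity predicate and the dict-based face records are illustrative assumptions:

```python
def resolve_blurry_face(face, album_faces, is_similar, preset_count=5):
    """face and album entries are dicts with 'clarity' and 'is_master' keys.
    Returns the face chosen as master, or None when too few similar faces exist."""
    similar = [f for f in album_faces if is_similar(face, f)]
    masters = [f for f in similar if f["is_master"]]
    if masters:
        return masters[0]                 # same class as an existing master face
    if len(similar) >= preset_count:
        best = max(similar + [face], key=lambda f: f["clarity"])
        best["is_master"] = True          # the clearest similar face becomes the master
        return best
    return None
```

The `preset_count=5` default mirrors the example value in the text; any of the other suggested values would work the same way.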
  • the master face may also include a plurality of, when a plurality of faces are included, one of the master faces in the image may be acquired.
  • the obtained master face may be determined according to a user's selection operation.
  • for each master face, the identity of the user to whom it belongs is recognized, and the same photo may be divided into the image sets corresponding to the user identities of the respective master faces.
  • face detection can be performed in the above manner; when face A and face B are both master faces, the user identity A corresponding to face A and the user identity B corresponding to face B can be further recognized, and the image is simultaneously divided into the image set A corresponding to user A and the image set B corresponding to user B.
  • determining a master face of the plurality of faces includes determining the master face based on the face area and/or face clarity of each face in the image.
  • the electronic device obtains images from a local or cloud album that contains images without human faces (pure natural scenery), single-person images, and multi-person group images; the single-person images and multi-person group images contain human faces. For a single-person photo, there is only one face in the image; when the clarity of that face reaches the threshold at which the face can be recognized, the face in the single-person photo is marked as the master face.
  • the master face can be determined according to any one or a combination of the face area and the face clarity of the faces in the image.
  • the face with the largest face area is obtained from the multi-person group image, and it is determined whether the clarity of that face meets the clarity threshold for face recognition and whether the face meets the preset condition. If so, the face with the largest face area is used as the master face of the multi-person group image. If not, the face with the second-largest face area is obtained from the multi-person group image and judged against the preset condition; this is repeated until the face area reaches the minimum threshold or the master face is obtained. Obtaining candidate master faces in descending order of face area improves the accuracy of master face determination.
  • the master face can also be obtained in descending order of face clarity.
  • the face with the highest face clarity is obtained from the multi-person group image, and it is determined whether the face area of that face meets the face area threshold for face recognition and whether the face meets the preset condition. If so, the face with the highest clarity is used as the master face of the multi-person group image; if not, the face with the second-highest clarity is obtained from the multi-person group image and judged against the preset condition; this is repeated until the face clarity reaches the minimum clarity threshold or the master face is obtained.
  • the master face is determined based on the face area and/or face clarity occupied by each face in the image, including:
  • Operation 602: obtaining the face with the largest face area from the image and determining whether the face meets the preset condition; if so, performing operation 604; otherwise, performing operation 606.
  • the face of the core person, whose clarity reaches the threshold at which the corresponding identity can be recognized, is obtained from the multi-person group image as the master face.
  • the face with the largest face area is obtained from the multi-person group image, and it is then determined whether the face meets the preset condition.
  • the preset conditions include, but are not limited to, the following: determining whether the clarity of the face reaches the threshold at which the identity corresponding to the face can be recognized, and further determining the shooting angle and focus of the face with the largest face area, i.e., judging whether the shooting angle is frontal, whether the shooting focus is on the face with the largest face area, and whether the face with the largest face area is closest to the lens.
  • Operation 604: the face with the largest face area is taken as the master face of the image.
  • Operation 606: the face with the second-largest face area is obtained from the image and judged against the preset condition; this is repeated until the face area reaches the minimum threshold or the master face is obtained.
  • when the judgment result is that the preset condition is met, the face with the largest face area should be the face corresponding to the core person, and the face with the largest face area is used as the master face of the multi-person group image and marked as the master face.
  • when the judgment result is that the preset condition is not met, it indicates that the face with the largest face area is not the face of the core person; the face with the second-largest face area is then obtained from the multi-person group image and judged against the preset condition. When the judgment result satisfies the preset condition, the face with the second-largest face area should be the face corresponding to the core person, so it is used as the master face of the multi-person group image and marked as the master face.
  • when the face with the second-largest face area does not meet the preset condition either, it means that face is not the face corresponding to the core person, and the process continues with the next face in descending order of face area until the face area reaches the minimum threshold or the master face is obtained.
  • that is, a minimum face area threshold is preset; when the candidate face areas fall below this minimum threshold and no face meeting the preset condition has been found, the search for a master face in the image is stopped.
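Operations 602 to 606 amount to scanning candidates in descending area order until one passes the preset condition or the area falls below the minimum threshold. This sketch assumes `(area, clarity)` tuples and uses clarity reaching a recognition threshold as an illustrative preset condition:

```python
def pick_master_by_area(faces, meets_condition, min_area):
    """faces: list of (area, clarity) pairs. Try faces from largest to
    smallest area; stop when the area drops below the minimum threshold."""
    for face in sorted(faces, key=lambda f: f[0], reverse=True):
        if face[0] < min_area:
            break                     # no remaining face can qualify; give up
        if meets_condition(face):
            return face               # operation 604: mark as master face
    return None

# Illustrative preset condition: clarity must reach the recognition threshold.
condition = lambda f: f[1] >= 0.7
```

Returning `None` corresponds to the case above where no face meets the preset condition before the area minimum is reached and the search is stopped.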
• in one embodiment, determining the master face according to the face sharpness of each face in the image comprises: obtaining the face with the highest sharpness from the image and determining whether it satisfies the preset conditions; if yes, taking the face with the highest sharpness as the master face of the image; if not, obtaining the face with the next highest sharpness from the image, determining whether that face satisfies the preset conditions, and repeating until the face sharpness falls below the minimum sharpness threshold or a master face is obtained.
• the purpose is to obtain, from the multi-person group image, the face of the core person whose sharpness reaches the threshold at which the identity corresponding to the face can be recognized, and to take it as the master face.
• the face with the highest sharpness is first obtained from the multi-person group image, and it is then determined whether this face satisfies the preset conditions.
• the preset conditions include, but are not limited to, the following: determining whether the sharpness of the face reaches a threshold at which the identity corresponding to the face can be recognized; and, further, determining the shooting angle and focus of the face, that is, whether the shooting angle is frontal, whether the shooting focus is on this face, and whether this face is closest to the lens.
• when the judgment result satisfies the preset conditions, the face with the highest sharpness should be the face corresponding to the core person, so it is taken as the master face of the multi-person group image and marked as the master face.
• when the judgment result does not satisfy the preset conditions, the face with the highest sharpness is not the face corresponding to the core person; the face with the second highest sharpness is then obtained from the multi-person group image and judged in the same way.
• when the judgment result satisfies the preset conditions, the face with the second highest sharpness should be the face corresponding to the core person, so it is taken as the master face of the multi-person group image and marked as the master face.
• when the face with the second highest sharpness does not satisfy the preset conditions either, it is also not the face corresponding to the core person, and the process continues with the next face.
• the judgment is repeated until the face sharpness falls below the minimum sharpness threshold or a master face is obtained; that is, a minimum threshold for the face sharpness is preset.
• since it is more difficult to determine the master face from a multi-person group image than from a single-person image, the faces are examined in descending order of face area and/or face sharpness, so that the master face will not be missed. Preset conditions are also set, and only a face that satisfies them can become the master face, which makes the result of obtaining the master face more accurate.
• in one embodiment, operation 602 includes: obtaining the face with the largest face area from the image, and determining whether the angle and focus of the face satisfy the preset conditions.
• the purpose is to obtain, from the multi-person group image, the face of the core person whose sharpness reaches the threshold at which the identity corresponding to the face can be recognized, and to take it as the master face.
• the face with the largest face area is first obtained from the multi-person group image, and it is then determined whether this face satisfies the preset conditions.
• the preset conditions include, but are not limited to, the following: determining whether the sharpness of the face reaches a threshold at which the identity corresponding to the face can be recognized; and, further, determining the shooting angle and focus of the face with the largest face area, that is, whether the shooting angle is frontal, whether the shooting focus is on the face with the largest face area, and whether the face with the largest face area is closest to the lens. Only a face that satisfies these preset conditions can become the master face, which makes the result of obtaining the master face more accurate.
• in one embodiment, operation 504 includes: when it is recognized that the image includes multiple faces, detecting whether the remaining faces in the image correspond to existing user identities, and if so, classifying the remaining faces into the corresponding existing user identities to obtain the user identities corresponding to the remaining faces.
• in one embodiment, operation 506 includes: dividing the image into the image sets corresponding to the user identities of the remaining faces, so that the image also appears in the image set of each user identity into which one of its remaining faces is classified.
• the remaining faces in each image are acquired.
• the remaining faces include all faces in each image except the faces marked as master faces and the faces marked as suspected passerby faces. A single-person image has no remaining faces, while a multi-person group image does.
• the existing user identities are the user identities corresponding to the master faces, that is, the faces of the core persons of the images, so the remaining faces that can be classified into existing user identities are naturally also faces of core persons; faces corresponding to non-core persons are thus prevented from being assigned user identities of their own.
• for example, an album contains 1000 images, and each image contributes at most one face as a master face. At most 1000 master faces are therefore generated from the 1000 images; even if many of the 1000 images are multi-person group images, no more than 1000 master faces are produced. Face recognition is then performed on these master faces to obtain the identity corresponding to each master face, which is the user identity of that master face. For example, after recognition the 1000 master faces may belong to 10 user identities. The remaining faces are classified into the existing user identities to obtain the user identities corresponding to the remaining faces; remaining faces that cannot be classified into any existing user identity are marked as suspected passerby faces.
• the master face and the remaining faces in each image are classified by the above operations, and those that can be classified are assigned the corresponding user identities. The image is then divided into the image sets of all the user identities into which its master face and remaining faces were classified. Specifically, if the master face and the remaining faces of an image are classified into three identities, the image will appear in all three image sets at the same time. For example, an image contains three faces: Zhang San, Li Si, and Wang Wu. The master face of this image is Zhang San, and the two remaining faces match existing master faces, so they are classified into the user identities of Li Si and Wang Wu. The image will then be displayed in Zhang San's image set, in Li Si's image set, and also in Wang Wu's image set, because the persons corresponding to these three identities are all core persons with master faces.
• in summary, the master face of the image is acquired, face recognition is performed on it, and the user identity corresponding to the master face is obtained. Since the master face is the face of the core person in each image and the faces of non-core persons never serve as master faces, the user identities only cover the core persons in the images. The remaining faces are then obtained from the image, face recognition is performed on them, and they are classified into the existing user identities to obtain their corresponding user identities. According to the user identities into which the master face and the remaining faces of the image are classified, the image is divided into the image sets corresponding to those user identities.
• the image sets are distinguished by the user identities into which the master faces and the remaining faces of the images are classified. Because the user identities only cover the core persons in the images, a large number of image sets of non-core persons will naturally not appear.
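The many-to-many division described above — one image appearing in the image set of every user identity its faces were classified into — can be sketched as follows. This is a simplified illustration; the `(image_id, identities)` input shape is an assumption, not part of the patent.

```python
from collections import defaultdict

def build_image_sets(classified_images):
    """classified_images: iterable of (image_id, identities) pairs, where
    identities lists the user identities into which the image's master face
    and classifiable remaining faces were divided.
    Returns a mapping user_identity -> [image_id, ...]."""
    image_sets = defaultdict(list)
    for image_id, identities in classified_images:
        for identity in sorted(set(identities)):  # one entry per identity
            image_sets[identity].append(image_id)
    return dict(image_sets)
```

With the Zhang San / Li Si / Wang Wu example, the image carrying all three identities lands in all three image sets.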
• in one embodiment, detecting whether the remaining faces in the image correspond to existing user identities and, if so, classifying the remaining faces into the corresponding existing user identities to obtain the user identities corresponding to the remaining faces includes:
• the remaining faces in the image are acquired, and face recognition is performed on them in turn.
• the remaining faces in each image are acquired.
• the remaining faces include all faces in each image except the faces marked as master faces and the faces marked as suspected passerby faces.
• face recognition is performed on the remaining faces in each image in turn to obtain the identity corresponding to each remaining face.
• the remaining faces are classified into the existing user identities to obtain the user identities corresponding to the remaining faces; remaining faces that cannot be classified into any existing user identity are marked as suspected passerby faces.
• Operation 704: determining whether the remaining face belongs to an existing user identity; if so, performing operation 706; otherwise, performing operation 708.
• Operation 706: classifying the remaining face into the matching user identity; that user identity is the user identity corresponding to the remaining face.
• Operation 708: marking the remaining face as a suspected passerby face.
• according to the recognized identities of the remaining faces, it is determined whether the identity corresponding to each remaining face is the same as the identity of an existing master face. When it is the same, the remaining face is assigned to the corresponding user identity, and that user identity is the user identity corresponding to the remaining face.
• when the identity corresponding to a remaining face matches no identity among the existing master faces, the remaining face does not belong to any existing user identity and is marked as a suspected passerby face; suspected passerby faces with the same identity are grouped into one class.
• the remaining faces are classified into the existing user identities according to the identity results obtained by performing face recognition on them, and no new user identities are added for remaining faces, so a face corresponding to a non-core person will naturally not form a user identity of its own.
• as a result, only the image sets of core persons are displayed after the images are classified, which avoids the large number of image sets of non-core persons produced by the traditional method — a result that neither meets users' actual needs nor conveys effective information.
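Operations 704 to 708 can be sketched as a single pass over the remaining faces. A minimal illustration only: `recognize` is a hypothetical callback standing in for the face-recognition step, and faces are represented by opaque labels.

```python
def classify_remaining_faces(remaining_faces, existing_identities, recognize):
    """recognize(face) returns an identity label. Faces whose identity matches
    an existing user identity (i.e. an existing master face's identity) are
    assigned to it; the rest are marked as suspected passerby faces. No new
    user identities are created for remaining faces."""
    assigned, suspected = {}, []
    for face in remaining_faces:
        identity = recognize(face)
        if identity in existing_identities:            # operation 706
            assigned.setdefault(identity, []).append(face)
        else:                                          # operation 708
            suspected.append(face)
    return assigned, suspected
```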
• in one embodiment, the above method further includes an operation of processing suspected passerby faces, performed after the remaining faces are marked as suspected passerby faces, including:
• Operation 802: counting, over all images in the album, the number of times each suspected passerby face appears on the same image as master faces belonging to different user identities.
• when the identity corresponding to a remaining face matches no identity among the existing master faces, the remaining face does not belong to any existing user identity and is marked as a suspected passerby face.
• for each suspected passerby face, the number of times it appears on the same image as master faces belonging to different user identities is counted over all images in the album; that is, the number of times the suspected passerby faces of the same identity are photographed together with master faces of different identities is calculated.
• for example, the number of times the suspected passerby face of identity A is photographed together with master faces of different identities is counted, where the master faces cover 10 identities such as Zhang San, Li Si, and Wang Wu.
• Operation 804: determining whether the count has reached a set threshold; if so, performing operation 806; otherwise, performing operation 808.
• the threshold for the number of times a suspected passerby face and master faces belonging to different user identities appear on the same image in the album may be set to a minimum of 5; of course, in other embodiments, other reasonable values may be set, for example 6, 7, 8, 9, or 10 times.
• Operation 806: adding the suspected passerby face as a master face and obtaining the user identity corresponding to the suspected passerby face.
• Operation 808: keeping the face as a suspected passerby face.
• when the count reaches the set threshold (for example, 5 times), the suspected passerby face is added as a master face.
• for example, when the suspected passerby face of identity A has been photographed together with master faces of different identities 5 times, it is added as a master face and the user identity corresponding to it is obtained. When the count has not reached 5, it remains a suspected passerby face, and the number of times it appears on the same image as master faces belonging to different user identities is recalculated when new images are added to the album.
• by counting, for each suspected passerby face, the number of co-occurrences with master faces belonging to different user identities, and adding the suspected passerby face as a master face once the threshold is reached, the suspected passerby faces in the album are also processed so that they can be promoted to master faces. This effectively avoids missing core persons as a result of extracting only one face per image as the master face.
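Operations 802 to 808 amount to counting co-occurrences and comparing against the threshold. A simplified sketch, assuming co-occurrences have already been extracted as `(passerby_identity, master_identity)` pairs, one per image on which they appear together:

```python
from collections import Counter

def promote_passersby(cooccurrences, threshold=5):
    """cooccurrences: one (passerby_identity, master_identity) pair per image
    on which a suspected passerby face appears together with a master face of
    a different user identity. Passersby whose total count reaches the
    threshold are promoted to master faces (operation 806); the rest remain
    suspected passerby faces (operation 808)."""
    counts = Counter(passerby for passerby, _master in cooccurrences)
    promoted = {p for p, n in counts.items() if n >= threshold}
    kept = set(counts) - promoted
    return promoted, kept
```

Rerunning this function as images are added corresponds to the recalculation described for operation 906 below.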
• in one embodiment, the above method further includes an operation of dividing newly added images into image sets, which may be performed after the images have been
• divided into the image sets corresponding to the user identities into which their master faces and remaining faces were classified, including:
• Operation 902: obtaining a newly added image.
• the images in the album are not static but change dynamically; for example, images are added as the user keeps taking photos, photos are downloaded from the cloud into the album, or the album itself is in the cloud and images are added to the cloud album. These newly added images are obtained.
• Operation 904: classifying the newly added image according to the same image processing method applied before it was added, and dividing the newly added image into the image sets corresponding to the user identities into which its master face and remaining faces are classified.
• specifically, the master face is first acquired from the newly added image. When the newly added image is a single-person image, its only face is acquired and it is determined whether the sharpness of this face reaches the threshold at which the identity corresponding to the face can be recognized. When it does, the face in the single-person image is marked as a master face, face recognition is performed on the master face, and the user identity corresponding to the master face is obtained.
• when the sharpness of the single face does not reach the threshold at which the identity corresponding to the face can be recognized, it is further checked whether other images in the album contain a similar face that has already been marked as a master face; when one exists, the single face and that master face are marked as the same class of master face. Otherwise, if a preset number of faces similar to the single face exist in other images in the album, the sharpest one among the similar faces is selected as the master face, the similar faces are marked as one class of master face, face recognition is performed on the master face, and the identity obtained is the user identity corresponding to the single face.
• when the newly added image is a multi-person group image, the master face is acquired in descending order of face area, and the judgment is repeated until the face area falls below the minimum threshold or a master face is obtained.
• the remaining faces in the newly added image are then obtained, face recognition is performed on them, and they are classified into the existing user identities to obtain the user identities corresponding to the remaining faces.
• remaining faces that cannot be classified are marked as suspected passerby faces.
• the newly added image is divided into the image sets corresponding to the user identities obtained above.
• Operation 906: recalculating the number of times each suspected passerby face appears on the same image as master faces belonging to different user identities.
• Operation 908: determining whether the count has reached the set threshold; if so, performing operation 910; otherwise, performing operation 912.
• Operation 910: adding the suspected passerby face as a master face, obtaining the user identity corresponding to the suspected passerby face, and dividing the images containing the suspected passerby face into the image set corresponding to that user identity.
• Operation 912: keeping the face as a suspected passerby face.
• when the set threshold is reached, the suspected passerby face is added as a master face, the user identity corresponding to the suspected passerby face
• is obtained, and the images containing the suspected passerby face are divided into the image set corresponding to that user identity.
• when the set threshold is not reached, the face remains a suspected passerby face,
• and the number of times it appears on the same image as master faces belonging to different user identities is recalculated when further images are added.
• when images are added, the newly added images are classified according to the same image processing method, and, according to the user identities into which the master face and the remaining faces of each newly added image are classified,
• the newly added image is divided into the image sets corresponding to those user identities. The number of times each suspected passerby face appears on the same image as master faces belonging to different user identities is then recalculated. Because of the added images, these counts may have been updated, that is, a suspected passerby face may now qualify as a master face; recalculating therefore makes it possible to promote suspected passerby faces to master faces in real time.
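The per-image part of operations 902 to 904 can be sketched as one function. This is a heavily simplified illustration: `recognize` and `satisfies_preset` are hypothetical callbacks standing in for face recognition and the preset-condition check, the input faces are assumed pre-sorted in descending face area, and the single-image fallback branches (similar-face lookup) are omitted.

```python
def process_new_image(faces, user_identities, recognize, satisfies_preset):
    """Pick a master face from a newly added image's faces, classify the
    remaining faces into existing user identities, and mark the rest as
    suspected passerby faces. Returns the identities whose image sets should
    receive this image, plus the suspected passerby faces."""
    identities, suspected = set(), []
    master = next((f for f in faces if satisfies_preset(f)), None)
    if master is not None:
        identities.add(recognize(master))  # the master face's user identity
    for face in faces:
        if face is master:
            continue
        identity = recognize(face)
        if identity in user_identities:
            identities.add(identity)       # remaining face joins an existing identity
        else:
            suspected.append(face)         # marked as a suspected passerby face
    return identities, suspected
```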
• in one embodiment, an image processing method is also provided, described by taking its application to the electronic device in FIG. 1 as an example, and including:
• the electronic device obtains the faces in the images from a local or cloud album.
• the master face is obtained in descending order of face area.
• the face with the largest face area is obtained from the multi-person group image, and it is determined whether the face satisfies the preset conditions.
• if yes, the face with the largest face area is taken as the master face of the multi-person group image; if not, the face with the next largest face area is obtained from the multi-person group image and judged against the preset conditions, and this is repeated until the face area falls below the minimum threshold or a master face is obtained.
  • the image is divided into image sets corresponding to the user identity.
• the above image processing method classifies images according to the user identities to which their faces belong and divides each image into the image sets corresponding to those user identities. It further acquires an image set containing the same face, identifies the age value corresponding to the face in each image of the image set, generates sorting information for the images in the image set according to the age values, and generates a time-series album containing the images of the image set according to the sorting information and the image set.
• the images thus have an ordered sequence in the album, and the time-series album is played in chronological order, which helps users record image stories of different ages and increases user stickiness.
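The sorting-information step can be sketched as a two-key sort: primarily by the recognized age value, and by the image generation time when age values are equal (the tie-break described for module 1006 below). The `(image_id, age_value, generation_time)` tuple shape is a hypothetical representation, not part of the patent.

```python
def generate_sorting_information(image_set):
    """Order the images of an image set for a time-series album: sort by the
    recognized age value of the face, and by the image generation time when
    the age values are equal."""
    return sorted(image_set, key=lambda img: (img[1], img[2]))
```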
  • an image processing apparatus comprising:
• the image set obtaining module 1002 is configured to acquire the faces in the images, and to acquire, from the images, an image set having the same face.
  • the age value identification module 1004 is configured to identify an age value corresponding to a face of the image in the image collection.
  • the sorting information generating module 1006 is configured to generate sorting information for the images in the image set according to the age value.
  • the album generating module 1008 is configured to generate a time-series album including images in the image collection according to the sorting information and the image set.
• the image set obtaining module 1002 is further configured to perform face recognition on a face in the image, determine the user identity corresponding to the face, and divide the image into the image set corresponding to that user identity, so as to form an image set having the same face.
  • the image set obtaining module 1002 is further configured to determine a master face of the plurality of faces when the image includes multiple faces.
  • the image set acquisition module 1002 is further configured to determine a master face based on a face area and/or a face definition occupied by each face in the image.
• the image set obtaining module 1002 is further configured to obtain the face with the largest face area from the image and determine whether it satisfies the preset conditions; if yes, take the face with the largest face area as the master face of the image; if not, continue to obtain the face with the next largest face area from the image, determine whether it satisfies the preset conditions, and repeat until the face area falls below the minimum area threshold or a master face is obtained.
• the image set obtaining module 1002 is further configured to obtain the face with the highest sharpness from the image and determine whether it satisfies the preset conditions; if yes, take the face with the highest sharpness as the master face of the image; if not, continue to obtain the face with the next highest sharpness from the image, determine whether it satisfies the preset conditions, and repeat until the face sharpness falls below the minimum sharpness threshold or a master face is obtained.
• the sorting information generating module 1006 is further configured to generate, when the age values of the faces in a plurality of images in the same image set are the same, the sorting information for the plurality of images according to the generation times of the plurality of images.
  • yet another image processing apparatus is provided, the apparatus further comprising:
• the playing module 1010 is configured to play the time-series album in time sequence, where the sequence includes sorting by age value in ascending and/or descending order, and the time-series album includes any one of a slideshow, a movie, or an album.
• the image set obtaining module 1002 is further configured to acquire images from a multi-user family shared album on the local machine and/or on other devices.
• the division of each module in the above image processing apparatus is for illustrative purposes only. In other embodiments, the image processing apparatus may be divided into different modules as needed to complete all or part of the functions of the image processing apparatus.
  • a computer readable storage medium having stored thereon a computer program that, when executed by a processor, implements the operations of the image processing methods provided by the various embodiments described above.
  • An electronic device comprising a memory, a processor, and a computer program stored on the memory and operable on the processor, the processor performing the operations of the image processing methods provided by the various embodiments described above when executing the computer program.
  • the embodiment of the present application also provides a computer program product.
  • a computer program product comprising instructions which, when run on a computer, cause the computer to perform the operations of the image processing methods provided by the various embodiments described above.
  • An embodiment of the present application further provides an electronic device.
  • the above electronic device includes an image processing circuit, and the image processing circuit can be implemented by using hardware and/or software components, and can include various processing units defining an ISP (Image Signal Processing) pipeline.
  • Figure 12 is a schematic illustration of an image processing circuit in one embodiment. As shown in FIG. 12, for convenience of explanation, only various aspects of the image processing technique related to the embodiment of the present application are shown.
  • the image processing circuit includes an ISP processor 1240 and a control logic 1250.
  • the image data captured by imaging device 1210 is first processed by ISP processor 1240, which analyzes the image data to capture image statistics that can be used to determine and/or control one or more control parameters of imaging device 1210.
  • Imaging device 1210 can include a camera having one or more lenses 1212 and image sensors 1214.
• Image sensor 1214 may include a color filter array (such as a Bayer filter) that may acquire the light intensity and wavelength information captured by each imaging pixel of image sensor 1214 and provide a set of raw image data that can be processed by the ISP processor 1240.
  • a sensor 1220 such as a gyroscope, can provide acquired image processing parameters, such as anti-shake parameters, to the ISP processor 1240 based on the sensor 1220 interface type.
  • the sensor 1220 interface may utilize a SMIA (Standard Mobile Imaging Architecture) interface, other serial or parallel camera interfaces, or a combination of the above.
  • image sensor 1214 can also transmit raw image data to sensor 1220, sensor 1220 can provide raw image data to ISP processor 1240 based on sensor 1220 interface type, or sensor 1220 can store raw image data into image memory 1230.
  • the ISP processor 1240 processes the raw image data pixel by pixel in a variety of formats.
• each image pixel may have a bit depth of 8, 10, 12, or 14 bits, and the ISP processor 1240 may perform one or more image processing operations on the raw image data and collect statistical information about the image data, wherein the image processing operations may be performed with the same or different bit depth precision.
  • ISP processor 1240 can also receive image data from image memory 1230.
  • sensor 1220 interface transmits raw image data to image memory 1230, which is then provided to ISP processor 1240 for processing.
  • Image memory 1230 can be part of a memory device, a storage device, or a separate dedicated memory within an electronic device, and can include DMA (Direct Memory Access) features.
  • the ISP processor 1240 can perform one or more image processing operations, such as time domain filtering.
  • the processed image data can be sent to image memory 1230 for additional processing prior to being displayed.
• the ISP processor 1240 can also receive processed data from the image memory 1230 and process the image data in the raw domain and in the RGB and YCbCr color spaces.
  • the processed image data can be output to display 1280 for viewing by a user and/or further processed by a graphics engine or GPU (Graphics Processing Unit). Additionally, the output of ISP processor 1240 can also be sent to image memory 1230, and display 1280 can read image data from image memory 1230.
  • image memory 1230 can be configured to implement one or more frame buffers. Additionally, the output of ISP processor 1240 can be sent to encoder/decoder 1270 to encode/decode image data. The encoded image data can be saved and decompressed before being displayed on the display 1280 device.
  • the ISP processor 1240 processes the image data by performing VFE (Video Front End) processing and CPP (Camera Post Processing) processing on the image data.
• VFE processing of the image data may include correcting the contrast or brightness of the image data, modifying digitally recorded illumination state data, performing compensation processing on the image data (such as white balance, automatic gain control, and gamma correction), performing filtering processing on the image data, and the like.
  • CPP processing of image data may include scaling the image, providing a preview frame and a recording frame to each path. Among them, CPP can use different codecs to process preview frames and record frames.
  • the image data processed by the ISP processor 1240 can be sent to the beauty module 1260 to perform a cosmetic process on the image before being displayed.
• the beauty module 1260 can perform beautification processing on the image data, including: whitening, freckle removal, skin smoothing, face slimming, acne removal, eye enlargement, and the like.
  • the beauty module 1260 can be a CPU (Central Processing Unit), a GPU, a coprocessor, or the like in the mobile terminal.
  • the processed data of the beauty module 1260 can be sent to the encoder/decoder 1270 to encode/decode the image data.
  • the encoded image data can be saved and decompressed before being displayed on the display 1280 device.
  • the beauty module 1260 can also be located between the encoder/decoder 1270 and the display 1280, that is, the beauty module performs cosmetic processing on the imaged image.
  • the encoder/decoder 1270 described above may be a CPU, a GPU, a coprocessor, or the like in a mobile terminal.
  • the statistics determined by the ISP processor 1240 can be sent to the control logic 1250 unit.
  • the statistical data may include image sensor 1214 statistical information such as auto exposure, auto white balance, auto focus, flicker detection, black level compensation, lens 1212 shading correction, and the like.
• Control logic 1250 can include a processor and/or a microcontroller that executes one or more routines (such as firmware) that can determine control parameters of the imaging device 1210 and control parameters of the ISP processor 1240 based on the received statistical data. The control parameters of the imaging device 1210 may include sensor 1220 control parameters (e.g., gain, integration time for exposure control), camera flash control parameters, lens 1212 control parameters (e.g., focal length for focusing or zooming), or a combination of these parameters.
  • the ISP control parameters may include gain levels and color correction matrices for automatic white balance and color adjustment (eg, during RGB processing), as well as lens 1212 shading correction parameters.
  • The image processing method described above can be implemented using the image processing technique of FIG.
  • Non-volatile memory can include read only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory.
  • Volatile memory can include random access memory (RAM), which acts as an external cache.
  • RAM is available in a variety of forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
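The ISP control parameters mentioned above include gain levels and a color correction matrix for automatic white balance and color adjustment during RGB processing. As a minimal sketch of how such parameters might act on pixel data — the gain values and matrix coefficients below are illustrative assumptions, not values from the patent:

```python
def apply_wb_and_ccm(pixel, wb_gains, ccm):
    """Apply per-channel white-balance gains, then a 3x3 color
    correction matrix, to one RGB pixel with components in [0, 1]."""
    # Step 1: white balance — scale each channel by its gain level.
    balanced = [c * g for c, g in zip(pixel, wb_gains)]
    # Step 2: color correction — multiply the pixel by the 3x3 matrix.
    corrected = [sum(m * c for m, c in zip(row, balanced)) for row in ccm]
    # Clamp back into the displayable range.
    return [min(1.0, max(0.0, c)) for c in corrected]

# Illustrative (assumed) parameters: slight red/blue boost, near-identity CCM.
wb_gains = [1.2, 1.0, 1.1]
ccm = [[1.05, -0.03, -0.02],
       [-0.02, 1.04, -0.02],
       [-0.01, -0.05, 1.06]]

pixel = [0.5, 0.5, 0.5]  # mid-gray input pixel
out = apply_wb_and_ccm(pixel, wb_gains, ccm)
```

In a real pipeline these parameters would be chosen by the control logic from the collected statistics rather than hard-coded.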

Abstract

Disclosed is an image processing method, comprising the steps of: acquiring faces in images; acquiring, from among the images, a set of images containing the same face; recognizing age values corresponding to the faces in the images of the image set; generating, according to the age values, sorting information for the images in the image set; and generating, according to the sorting information and the image set, a time-sequence photo album comprising the images in the image set.
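The steps in the abstract — grouping images by face identity, then ordering each group by recognized age to form a time-sequence album — can be sketched as follows. The dictionary keys (`path`, `face_id`, `age`) are assumed names for illustration, not identifiers from the patent:

```python
def build_time_sequence_album(images):
    """Group images by face identity, then order each group's images
    by the recognized age value to produce a time-sequence album.

    `images` is a list of dicts with assumed keys: 'path' (image file),
    'face_id' (same value for the same face), 'age' (recognized age).
    """
    # Step 1: collect images containing the same face into one set.
    albums = {}
    for img in images:
        albums.setdefault(img["face_id"], []).append(img)
    # Step 2: the sorting information is ascending recognized age.
    return {
        face_id: sorted(group, key=lambda img: img["age"])
        for face_id, group in albums.items()
    }

photos = [
    {"path": "a.jpg", "face_id": 1, "age": 30},
    {"path": "b.jpg", "face_id": 1, "age": 25},
    {"path": "c.jpg", "face_id": 2, "age": 8},
]
album = build_time_sequence_album(photos)
```

Here the album for face 1 orders `b.jpg` (age 25) before `a.jpg` (age 30), matching the time-sequence ordering the abstract describes.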
PCT/CN2018/116592 2017-12-13 2018-11-21 Image processing method, apparatus, computer-readable storage medium and electronic device WO2019114508A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201711331605.9A CN108108415B (zh) 2017-12-13 2017-12-13 Image processing method, apparatus, storage medium and electronic device
CN201711331605.9 2017-12-13

Publications (1)

Publication Number Publication Date
WO2019114508A1 true WO2019114508A1 (fr) 2019-06-20

Family

ID=62215835

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/116592 WO2019114508A1 (fr) 2017-12-13 2018-11-21 Image processing method, apparatus, computer-readable storage medium and electronic device

Country Status (2)

Country Link
CN (1) CN108108415B (fr)
WO (1) WO2019114508A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112035685A (zh) * 2020-08-17 2020-12-04 中移(杭州)信息技术有限公司 Album video generation method, electronic device and storage medium

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108108415B (zh) * 2017-12-13 2020-07-21 Oppo广东移动通信有限公司 Image processing method, apparatus, storage medium and electronic device
CN109582811B (zh) * 2018-12-17 2021-08-31 Oppo广东移动通信有限公司 Image processing method, apparatus, electronic device and computer-readable storage medium
CN110009646B (zh) * 2019-04-15 2023-08-18 天意有福科技股份有限公司 Electronic photo album generation method, apparatus, electronic device and storage medium
CN110147461A (zh) * 2019-04-30 2019-08-20 维沃移动通信有限公司 Image display method, apparatus, terminal device and computer-readable storage medium
CN112131915B (zh) * 2019-06-25 2023-03-24 杭州海康威视数字技术股份有限公司 Face attendance system, camera and code stream device
CN110490162A (zh) * 2019-08-23 2019-11-22 北京搜狐新时代信息技术有限公司 Method, apparatus and system for displaying face changes based on a face recognition unlocking function
CN111835987A (zh) * 2020-06-08 2020-10-27 广东以诺通讯有限公司 Video generation method based on face recognition
CN112163121B (zh) * 2020-11-03 2021-03-23 万得信息技术股份有限公司 Intelligent analysis and processing method for video content information based on big data

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009201041A (ja) * 2008-02-25 2009-09-03 Oki Electric Ind Co Ltd Content search device and display method thereof
CN105117207A (zh) * 2015-07-27 2015-12-02 小米科技有限责任公司 Photo album creation method and apparatus
CN105531741A (zh) * 2013-09-26 2016-04-27 富士胶片株式会社 Main face image determination device for captured images, control method thereof, and control program thereof
CN108108415A (zh) * 2017-12-13 2018-06-01 广东欧珀移动通信有限公司 Image processing method, apparatus, storage medium and electronic device

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101751901B (zh) * 2009-12-18 2014-09-24 康佳集团股份有限公司 Dynamic electronic photo album playback method and apparatus

Also Published As

Publication number Publication date
CN108108415B (zh) 2020-07-21
CN108108415A (zh) 2018-06-01

Similar Documents

Publication Publication Date Title
WO2019114508A1 (fr) Image processing method, apparatus, computer-readable storage medium and electronic device
CN107766831B (zh) 图像处理方法、装置、移动终端和计算机可读存储介质
CN107730444B (zh) 图像处理方法、装置、可读存储介质和计算机设备
CN107886484B (zh) 美颜方法、装置、计算机可读存储介质和电子设备
CN110334635B (zh) 主体追踪方法、装置、电子设备和计算机可读存储介质
CN107945135B (zh) 图像处理方法、装置、存储介质和电子设备
CN107862653B (zh) 图像显示方法、装置、存储介质和电子设备
CN107862658B (zh) 图像处理方法、装置、计算机可读存储介质和电子设备
US20190362171A1 (en) Living body detection method, electronic device and computer readable medium
WO2019233392A1 (fr) Procédé et appareil de traitement d'image, dispositif électronique et support d'informations lisible par ordinateur
CN110580428A (zh) 图像处理方法、装置、计算机可读存储介质和电子设备
US20050200722A1 (en) Image capturing apparatus, image capturing method, and machine readable medium storing thereon image capturing program
US8009204B2 (en) Image capturing apparatus, image capturing method, image processing apparatus, image processing method and computer-readable medium
CN107368806B (zh) 图像矫正方法、装置、计算机可读存储介质和计算机设备
CN107820017B (zh) 图像拍摄方法、装置、计算机可读存储介质和电子设备
CN107993209B (zh) 图像处理方法、装置、计算机可读存储介质和电子设备
CN109712177B (zh) 图像处理方法、装置、电子设备和计算机可读存储介质
WO2019233260A1 (fr) Procédé et appareil d'envoi d'informations de publicité, support d'informations, et dispositif électronique
CN107622497B (zh) 图像裁剪方法、装置、计算机可读存储介质和计算机设备
CN107424117B (zh) 图像美颜方法、装置、计算机可读存储介质和计算机设备
WO2019223513A1 (fr) Procédé de reconnaissance d'image, dispositif électronique et support de stockage
CN109035147B (zh) 图像处理方法及装置、电子装置、存储介质和计算机设备
CN109242794B (zh) 图像处理方法、装置、电子设备及计算机可读存储介质
WO2017193796A1 (fr) Procédé et appareil de traitement de photographie pour montre intelligente
CN107844764B (zh) 图像处理方法、装置、电子设备和计算机可读存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18889182

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18889182

Country of ref document: EP

Kind code of ref document: A1