WO2019105237A1 - Procédé de traitement d'image, dispositif informatique et support d'informations lisible par ordinateur - Google Patents


Info

Publication number
WO2019105237A1
WO2019105237A1 (PCT/CN2018/115675)
Authority
WO
WIPO (PCT)
Prior art keywords
face
images
image
face images
generating
Prior art date
Application number
PCT/CN2018/115675
Other languages
English (en)
Chinese (zh)
Inventor
陈德银
Original Assignee
Oppo广东移动通信有限公司
Priority date
Filing date
Publication date
Application filed by Oppo广东移动通信有限公司
Publication of WO2019105237A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 2D [Two Dimensional] image generation
    • G06T11/60 Editing figures and text; Combining figures or text
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/23 Clustering techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G06V40/164 Detection; Localisation; Normalisation using holistic features

Definitions

  • the present application relates to the field of computer technology, and in particular, to an image processing method, a computer device, and a computer readable storage medium.
  • With the rapid development of smart computer devices, more and more users take pictures with them.
  • The image processing functions of smart computer devices are becoming more comprehensive and diverse.
  • For example, smart computer devices can perform color, brightness, contrast, and saturation adjustment on images, and remove some areas from an image.
  • An embodiment of the present application provides an image processing method, a computer device, and a computer readable storage medium.
  • An image processing method, comprising:
  • Performing face recognition on an image to be processed to obtain a face image;
  • Acquiring face information of the face image, the face information including a face area, a face position, and a face angle;
  • Comparing the face information of a plurality of face images to determine whether the plurality of face images are continuous shooting images; and
  • If the plurality of face images are continuous shooting images, generating a moving image according to the plurality of face images.
  • A computer device comprising a memory and a processor, the memory storing computer readable instructions which, when executed by the processor, cause the processor to perform the following operations:
  • Performing face recognition on an image to be processed to obtain a face image;
  • Acquiring face information of the face image, the face information including a face area, a face position, and a face angle;
  • Comparing the face information of a plurality of face images to determine whether the plurality of face images are continuous shooting images; and if the plurality of face images are continuous shooting images, generating a moving image according to the plurality of face images.
  • A computer readable storage medium having stored thereon a computer program which, when executed by a processor, performs the following operations:
  • Performing face recognition on an image to be processed to obtain a face image;
  • Acquiring face information of the face image, the face information including a face area, a face position, and a face angle;
  • Comparing the face information of a plurality of face images to determine whether the plurality of face images are continuous shooting images; and if the plurality of face images are continuous shooting images, generating a moving image according to the plurality of face images.
  • FIG. 1 is a flow chart of an image processing method in an embodiment;
  • FIG. 2 is a flow chart of an image processing method in another embodiment
  • FIG. 3 is a flow chart of an image processing method in another embodiment
  • FIG. 5 is a schematic diagram of generating a moving image from a plurality of face images in an embodiment;
  • Figure 6 is a block diagram showing the structure of an image processing apparatus in an embodiment
  • Figure 7 is a block diagram showing the structure of an image processing apparatus in another embodiment
  • Figure 8 is a block diagram showing the structure of an image processing apparatus in another embodiment
  • FIG. 9 is a block diagram showing a part of a structure of a mobile phone related to a computer device according to an embodiment of the present application.
  • an image processing method includes:
  • face recognition is performed on the image to be processed, and a face image is acquired.
  • the computer device may use a face recognition algorithm to perform face recognition on the image to be processed and detect whether there is a face in the image. If there is a face in the image to be processed, the image to be processed is a face image.
  • the image to be processed may be an image captured by the computer device, an image stored by the computer device, or an image downloaded by the computer device via a data network or a wireless local area network.
  • Operation 104 Acquire face information of a face image, and the face information includes a face area, a face position, and a face angle.
  • the computer device can obtain the face information of the face image.
  • the above face information includes: face area, face position, and face angle.
  • the face area is the area of the image occupied by the face region, and can be represented by the number of pixels in the face region or by the proportion of the image that the face region occupies.
  • the face position is the position of the face region in the image, and can be represented by the pixel coordinates of the face region, for example, the pixel at the third row and third column.
  • the face angle refers to the angle of rotation of the face relative to a standard face in three-dimensional space; this rotation can be represented by angles in three mutually perpendicular planes of a spatial rectangular coordinate system.
  • the computer device can obtain the offset angle of the face relative to the standard face in each of these planes; the offset angles in the three planes constitute the face angle.
  • the standard face mentioned above is a face that is pre-stored by the computer device, and can be a face that is synthesized by the computer device or a face set by the user.
  • the standard face may also be the face in an image captured with the user directly facing the camera.
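As a minimal sketch of this representation (the function name and the yaw/pitch/roll tuple layout are illustrative assumptions, not details from the application), the face angle can be held as a triple of rotations and compared against the stored standard face:

```python
def angle_offsets(face_angle, standard_angle=(0.0, 0.0, 0.0)):
    """Per-plane offsets of a face relative to the standard face.

    Each angle is a (yaw, pitch, roll) triple: rotations in the three
    mutually perpendicular planes of a rectangular coordinate system.
    """
    return tuple(f - s for f, s in zip(face_angle, standard_angle))

# A face turned 15 degrees sideways and tilted 5 degrees down:
offsets = angle_offsets((15.0, -5.0, 0.0))
```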
  • the computer device may filter the plurality of faces by using face information of the plurality of faces.
  • the method for the computer device to screen the plurality of faces may include at least one of the following methods:
  • the computer device separately obtains the face area of the plurality of faces, obtains the maximum face area, and selects the face corresponding to the face area within the preset ratio range based on the maximum face area. For example, a face whose face area is larger than 80% of the maximum face area is selected.
  • the computer device detects whether the face position of the face is within the preset position, and the preset position may be a central area of the image.
  • the computer device can detect whether the face area is within the central area, and then select a face whose face area is within the central area. If a part of the area in the face area is in the central area, the computer device may calculate an area ratio of the face area in the central area to the total face area. If the ratio is greater than a preset threshold, the face area is selected. For example, if the area of the face region falling into the center area accounts for more than 60% of the total area of the face area, the above-mentioned face area is selected.
  • the computer device can receive a user's selection instruction for the face in the face image, and select a face from the face image according to the above selection instruction.
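The area and central-region filters described above can be sketched as follows; the dictionary keys and the pre-computed `area_in_center` value are illustrative assumptions, not details given in the application:

```python
def filter_by_area(faces, ratio=0.8):
    # Keep faces whose area is at least `ratio` of the largest face area,
    # e.g. faces larger than 80% of the maximum face area.
    max_area = max(f["area"] for f in faces)
    return [f for f in faces if f["area"] >= ratio * max_area]

def filter_by_center(faces, min_overlap=0.6):
    # Keep faces where the part falling inside the image's central region
    # accounts for at least `min_overlap` of the whole face area.
    return [f for f in faces if f["area_in_center"] / f["area"] >= min_overlap]
```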
  • the face information of the plurality of face images is compared to determine whether the plurality of face images are continuous shooting images.
  • the computer device may separately compare the face information of the plurality of face images to detect whether the plurality of face images are continuous shooting images.
  • a continuous shooting image refers to an image obtained by continuously capturing the same scene from the same direction and at the same angle.
  • the operation of comparing the face information of the plurality of face images by the computer device includes:
  • the computer device acquires the face identifier corresponding to the face in the face image.
  • the above-mentioned face identifier is a character string for uniquely identifying a face, and may be a number, a letter, a symbol, or the like.
  • the computer device can compare the face information of two images among the plurality of face images, including:
  • the computer device can intercept a rectangular frame containing the face; after the two images are overlapped and aligned, if the face in the other image also falls into the rectangular frame, the face positions in the two images are the same.
  • the size of the rectangular frame can be adjusted according to the size of the face.
  • If the difference between the face areas of the faces corresponding to the same face identifier in the two images is within a first threshold and the face positions are the same, it is detected whether the difference between the face angles is within a second threshold.
  • the time difference of the continuous shooting image is usually very short, for example, 0.2 seconds. Therefore, the change value of the face angle in the two adjacent images of the continuous shooting image taken when the face is rotated is small.
  • the computer device can separately obtain the face angles of the two images, and the face angles are the angles of the three planes in the space rectangular coordinate system.
  • the computer device compares whether the angular differences of the above three planes are within a second threshold, for example 10°. If the face angle is within the second threshold, it can be determined that the two face images are continuous shooting images.
  • based on these pairwise comparisons, the computer device can recognize whether the plurality of images are continuous shooting images. For example, if the A image and the B image are continuous shooting images, and the B image and the C image are continuous shooting images, then the A image, the B image, and the C image are continuous shooting images.
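The pairwise check and the chaining rule above can be sketched as follows, assuming each image is summarized by its face's area, bounding rectangle, centre, and angle triple (all field names and the relative-area comparison are illustrative):

```python
def is_burst_pair(a, b, area_thresh=0.1, angle_thresh=10.0):
    # Face area difference within a first threshold (here relative).
    if abs(a["area"] - b["area"]) > area_thresh * max(a["area"], b["area"]):
        return False
    # Same position: the other face's centre falls inside this face's rectangle.
    x0, y0, x1, y1 = a["rect"]
    cx, cy = b["center"]
    if not (x0 <= cx <= x1 and y0 <= cy <= y1):
        return False
    # Angle difference within a second threshold (e.g. 10 degrees) in all planes.
    return all(abs(p - q) <= angle_thresh for p, q in zip(a["angle"], b["angle"]))

def burst_groups(images):
    # Chain pairwise results: if A/B and B/C are burst pairs,
    # then A, B and C form one continuous-shooting group.
    groups, current = [], [images[0]]
    for prev, cur in zip(images, images[1:]):
        if is_burst_pair(prev, cur):
            current.append(cur)
        else:
            groups.append(current)
            current = [cur]
    groups.append(current)
    return groups
```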
  • the computer device can generate the motion map according to the plurality of face images.
  • the above method for generating an animation is to continuously play a plurality of face images to obtain a motion picture.
  • a computer device can generate a GIF (Graphics Interchange Format) image from a plurality of face images.
  • the computer device can store the obtained moving image in a folder under a preset path; the preset path can be set by default on the computer device or set by the user, for example, the system album folder or a third-party application's folder, such as QQ's emoticon folder.
  • the computer device can generate a moving image from continuously shot face images without manual operation by the user, which simplifies the user's operations and improves the efficiency of generating moving images.
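The GIF generation mentioned above can be sketched with the Pillow imaging library; this is a sketch under the assumption that Pillow is used, and the function and file names are illustrative:

```python
from PIL import Image

def frames_to_gif(frames, out_path, interval_ms=500):
    # Write the face images in sequence as GIF frames; `interval_ms`
    # plays the role of the preset playing time interval per frame,
    # and loop=0 makes the GIF repeat indefinitely.
    first, rest = frames[0], frames[1:]
    first.save(out_path, save_all=True, append_images=rest,
               duration=interval_ms, loop=0)
```

For example, `frames_to_gif([Image.open(p) for p in some_paths], "motion.gif")` would combine already-loaded face images into one animated GIF.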
  • after determining that the face areas of the faces corresponding to the same face identifier in the two images are within the first threshold, the face positions are the same, and the face angles are within the second threshold, it is also possible to detect whether the difference between the brightness values of each two of the plurality of face images is within a third threshold range. If the difference between the brightness values of the two images is within the third threshold range, it is determined that the two images are continuous shooting images. Since continuous shooting takes little time and the brightness value of the image changes little, the difference between image brightness values can further confirm whether the images are continuous shooting images.
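The brightness comparison can be sketched as follows, assuming each image is reduced to a grid of grayscale pixel values (0-255); the helper names and the use of the mean are illustrative assumptions:

```python
def mean_brightness(img):
    # `img` is a grayscale image given as a list of pixel rows.
    return sum(sum(row) for row in img) / sum(len(row) for row in img)

def brightness_close(img_a, img_b, third_threshold=10.0):
    # Continuous shots are taken moments apart, so their mean brightness
    # should differ by no more than the third threshold.
    return abs(mean_brightness(img_a) - mean_brightness(img_b)) <= third_threshold
```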
  • generating a motion map from the plurality of face images includes at least one of the following:
  • a moving image is generated from the plurality of face images in accordance with the storage order of the plurality of face images.
  • a moving image is generated from the plurality of face images in accordance with the shooting time sequence of the plurality of face images.
  • a moving image is generated from the plurality of face images in accordance with the user's selection order.
  • the plurality of face images can be continuously played in a specified order.
  • the order specified above may be a storage order of a plurality of face images, that is, the computer device generates a moving image by continuously playing a plurality of face images in a storage order from first to last.
  • the computer device can also acquire the shooting time of each of the plurality of face images and continuously play them in order from the earliest to the latest shooting time, thereby generating the moving image.
  • the computer device can also receive the user's selection order for the plurality of face images and continuously play them in that order to obtain the moving image.
  • the computer device can also obtain the face angle in each of the plurality of face images, select the image with the earliest shooting time as the starting image, arrange the other images in order of increasing face-angle difference relative to the starting image, and play the starting image and the other images in sequence to generate the moving image.
  • the computer device can generate the moving image according to any of several orderings, so the ways of generating moving images are more diversified, which is beneficial to improving user stickiness.
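The angle-based ordering above can be sketched as follows, assuming each image carries a shooting time and a face-angle triple (field names and the sum-of-absolute-differences metric are illustrative):

```python
def order_for_playback(images):
    # Take the earliest-shot image as the starting image, then append the
    # others in order of increasing face-angle difference from that start.
    start = min(images, key=lambda im: im["time"])
    rest = [im for im in images if im is not start]

    def angle_diff(im):
        return sum(abs(a - b) for a, b in zip(im["angle"], start["angle"]))

    return [start] + sorted(rest, key=angle_diff)
```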
  • generating a motion map from the plurality of face images includes at least one of the following:
  • the computer device is pre-configured with a first time interval, and the first time interval may be a value set by the computer device or a preset value of the user, and the computer device may sequentially play multiple face images to generate a motion image according to the first time interval.
  • For example, a computer device may sequentially play a plurality of face images at intervals of 0.5 seconds.
  • the computer device can also pre-store the correspondence between the number of images and the playing time interval, and the computer device can determine the second time interval according to the number of the plurality of face images.
  • the above correspondence may be a linear relationship or a non-linear relationship.
  • For example, the second time interval is 0.5 seconds; when the number of the plurality of face images is five, the second time interval is 1 second.
  • the computer device can also receive a playing time interval set by the user for each face image; that is, the user can set a corresponding third time interval for each face image, where the third time intervals corresponding to different images can be the same or different.
  • the computer device then sequentially plays the plurality of face images according to the third time interval corresponding to each face image to obtain the moving image.
  • the computer device can play multiple face images according to multiple time intervals, that is, the manner of generating the motion picture can be diversified, which is beneficial to improve user stickiness.
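The three interval strategies (a preset first interval, a count-based second interval, and user-set third intervals) can be sketched together; the priority among them and the container shapes are illustrative assumptions:

```python
def playback_intervals(n_images, first=None, count_map=None, per_image=None):
    # User-set per-image (third) intervals take priority in this sketch,
    # then a count-based (second) interval, then the preset (first) interval.
    if per_image is not None:
        return list(per_image)
    if count_map is not None:
        # e.g. count_map = {5: 1.0} maps image counts to intervals (seconds).
        return [count_map[n_images]] * n_images
    return [first if first is not None else 0.5] * n_images
```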
  • generating a motion image according to the plurality of face images includes: extracting a face region image from the plurality of face images respectively, and continuously playing the face region image in the plurality of face images to obtain a motion map.
  • the computer device may further identify the face region in the face image, and extract the face region image according to the face contour of the face region.
  • the computer device can acquire the face region through the skin color recognition, and determine the face contour by the color difference between the face region and the background region.
  • the computer device can also recognize the face region in the face image through the machine learning model, recognize the face contour of the face region, and extract the face region image according to the face contour.
  • the computer device may extract the face region images from the face images and then sequentially play the face region images to obtain a moving image; that is, when generating the moving image, the computer device may remove the background region of each image and generate the moving image only from the face regions, a method that better fits the user's needs.
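As an illustration of the skin-colour recognition step, a crude per-pixel rule is sketched below; the thresholds are a common heuristic and a stand-in for the application's actual method, which is not specified:

```python
def skin_mask(pixels):
    # `pixels` is an RGB image as rows of (r, g, b) tuples. The rule marks
    # a pixel as skin when it is bright, reddish, and not strongly blue or
    # green; the face contour could then be traced along the mask boundary.
    def is_skin(r, g, b):
        return r > 95 and g > 40 and b > 20 and r > g and r > b

    return [[is_skin(*p) for p in row] for row in pixels]
```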
  • the method further includes:
  • the face identifier corresponding to the face image is identified, and the face image is clustered according to the face identifier to generate a face map corresponding to the face identifier.
  • comparing the face information of the plurality of face images to determine whether the plurality of face images are continuous shooting images comprises: comparing the face information of multiple face images in the same image set to determine whether the multiple face images are continuous shooting images.
  • the computer device can obtain the face identifier corresponding to the owner face in the face image, the owner face being the main face in the image.
  • the owner face may include a face in a preset area of the image, for example, a face in the center of the image; the owner face may also include a face whose area is larger than a specified value, for example, a face whose area is larger than 10% of the image area.
  • the computer device may cluster the plurality of face images according to the face identifier corresponding to the owner face, and generate the face map corresponding to the face identifier.
  • the computer device may select multiple images from one image set to detect whether they are continuous shooting images; that is, the computer device compares the face information of multiple face images in the same image set to determine whether the multiple face images are continuous shooting images.
  • the method for detecting whether a plurality of images are continuous shooting images is the same as the method in operation 206, and details are not described herein again.
  • the computer device selects multiple face images from the face map set corresponding to the same face identifier and determines whether they are continuous shooting images; that is, the computer device can select multiple face images from the set of the same face, which makes the detected continuous shooting images more accurate.
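The clustering by face identifier can be sketched as a simple grouping step; the field name `face_id` is an illustrative assumption:

```python
from collections import defaultdict

def cluster_by_face_id(images):
    # Group face images into one set ("face map") per owner-face identifier,
    # so that burst detection can run within a single person's set.
    sets = defaultdict(list)
    for im in images:
        sets[im["face_id"]].append(im)
    return dict(sets)
```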
  • the method further includes:
  • if the contact information corresponding to the face identifier is stored in the computer device, the face images of the face map are sent to the computer device corresponding to the contact.
  • the computer device can find contact information corresponding to the face identifier, including:
  • the computer device obtains tag information, input by the user, for the face in the face image; the tag information may be the name of the person corresponding to the face. The computer device searches the stored contacts for that person name; if a stored contact has the person name corresponding to the face, the computer device acquires the contact corresponding to the face.
  • the computer device can also obtain the avatars of the stored contacts and match the face corresponding to the face identifier against them; if the matching succeeds, that contact is the contact corresponding to the face.
  • after obtaining the contact corresponding to the face identifier, the computer device can check whether contact information has been stored for that contact.
  • the above contact information may be a mobile phone number, a landline number, a social account number, and the like.
  • the computer device sends the face map corresponding to the face identifier to the computer device corresponding to the contact.
  • the computer device can send the face map corresponding to a face to the corresponding contact without the user manually sending the images one by one, which simplifies the user's operations and makes image sharing more intelligent and better suited to the user's needs.
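The tag-based contact lookup can be sketched as follows; the container shapes are illustrative assumptions, and the avatar-matching route is not shown:

```python
def find_contact(face_id, face_tags, contacts):
    # Resolve the face identifier to a person name via the user-entered tag,
    # then look that name up among the stored contacts. Returns the contact
    # record (e.g. phone number, social account) or None if nothing matches.
    name = face_tags.get(face_id)
    if name is None:
        return None
    return contacts.get(name)
```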
  • an image processing method includes:
  • Operation 402 performing face recognition on the image to be processed, and acquiring a face image.
  • Operation 404 acquiring a plurality of face images selected by the user, and generating a motion image of the plurality of face images according to the user's selection order.
  • the computer device can also acquire a plurality of face images selected by the user and sequentially play them in the order in which the user selected them; that is, the computer device can generate a moving image from the plurality of face images according to the user's selection order.
  • the plurality of face images selected by the user may be continuous shooting images or non-continuous shooting images; they may be face images corresponding to the same face identifier, or face images corresponding to different face identifiers.
  • the computer device can directly acquire multiple images selected by the user, and generate a motion image according to multiple images selected by the user, and the method for generating the motion image is simple and fast, and fits the user's needs.
  • FIG. 5 is a schematic diagram of a plurality of face image generation motion diagrams in one embodiment.
  • the image 502, the image 504, the image 506, the image 508, and the image 510 are all face images, and the above five face images are continuous shooting images.
  • the computer device can separately acquire the shooting times of the above five face images: image 502 was taken at 10:25:06 on October 28, 2017; image 504 at 10:25:07; image 506 at 10:25:08; image 508 at 10:25:09; and image 510 at 10:25:10.
  • the computer device sorts the five face images into the image 502, the image 504, the image 506, the image 508, and the image 510 in the order of the shooting time.
  • the computer device can continuously play the above five face images in the order of sorting, and generate an animation.
  • the computer device can continuously play the five face images at the preset first time interval, for example 0.5 seconds; that is, the time interval t1 between image 502 and image 504 is 0.5 seconds, the time interval t2 between image 504 and image 506 is 0.5 seconds, the time interval t3 between image 506 and image 508 is 0.5 seconds, and the time interval t4 between image 508 and image 510 is 0.5 seconds.
  • FIG. 6 is a block diagram showing the structure of an image processing apparatus in an embodiment. As shown in FIG. 6, an image processing apparatus includes:
  • An identification module 602 configured to perform face recognition on the image to be processed, and obtain a face image
  • the obtaining module 604 is configured to obtain face information of the face image, where the face information includes a face area, a face position, and a face angle;
  • the comparison module 606 is configured to compare the face information of the plurality of face images to determine whether the plurality of face images are continuous shooting images;
  • the processing module 608 is configured to generate a motion map according to the plurality of face images if the plurality of face images are continuous shooting images.
  • the processing module 608 generates a motion map according to the plurality of facial images, including at least one of the following:
  • a moving image is generated from the plurality of face images in accordance with the user's selection order.
  • the processing module 608 generates a motion map according to the plurality of facial images, including at least one of the following:
  • the processing module 608 generates a moving image according to the plurality of face images, including: extracting face region images from the plurality of face images respectively, and continuously playing the face region images of the plurality of face images to obtain the moving image.
  • the processing module 608 is further configured to acquire a plurality of face images selected by the user, and generate a motion image by using the plurality of face images according to the user's selection order.
  • FIG. 7 is a block diagram showing the structure of an image processing apparatus in another embodiment.
  • an image processing apparatus includes: an identification module 702, an acquisition module 704, a comparison module 706, a processing module 708, and a clustering module 710.
  • the identification module 702, the acquisition module 704, the comparison module 706, and the processing module 708 have the same functions as the corresponding modules in FIG. 6.
  • the identification module 702 is further configured to identify a face identifier corresponding to the face image
  • the clustering module 710 is configured to cluster the face image according to the face identifier, and generate a face map corresponding to the face identifier;
  • the comparison module 706 compares the face information of the plurality of face images, and determining whether the plurality of face images are continuous shooting images comprises: comparing face information of the plurality of face images in the same image set to determine multiple faces Whether the image is a continuous shooting image.
  • FIG. 8 is a block diagram showing the structure of an image processing apparatus in another embodiment.
  • an image processing apparatus includes: an identification module 802, an acquisition module 804, a comparison module 806, a processing module 808, and a transmission module 810.
  • the identification module 802, the acquisition module 804, the comparison module 806, and the processing module 808 have the same functions as the corresponding modules in FIG. 6.
  • the obtaining module 804 is further configured to: if the contact information corresponding to the face identifier is stored in the computer device, obtain a face map corresponding to the face identifier;
  • the sending module 810 is configured to send the face image of the face map to the computer device corresponding to the contact.
  • each module in the above image processing apparatus is for illustrative purposes only. In other embodiments, the image processing apparatus may be divided into different modules as needed to complete all or part of the functions of the image processing apparatus.
  • the embodiment of the present application also provides a computer readable storage medium.
  • One or more non-transitory computer readable storage media containing computer executable instructions that, when executed by one or more processors, cause the processor to:
  • performing face recognition on the image to be processed to obtain a face image;
  • acquiring face information of the face image, the face information including a face area, a face position, and a face angle;
  • comparing the face information of a plurality of face images to determine whether the plurality of face images are continuous shooting images; and if the plurality of face images are continuous shooting images, generating a moving image according to the plurality of face images.
  • generating a motion map from the plurality of face images includes at least one of the following:
  • a moving image is generated from the plurality of face images in accordance with the storage order of the plurality of face images.
  • a moving image is generated from the plurality of face images in accordance with the shooting time sequence of the plurality of face images.
  • a moving image is generated from the plurality of face images in accordance with the user's selection order.
  • generating a motion map from the plurality of face images includes at least one of the following:
  • generating a motion image according to the plurality of face images includes: extracting a face region image from the plurality of face images respectively, and continuously playing the face region image in the plurality of face images to obtain a motion map.
  • the method further includes: identifying a face identifier corresponding to the face image, and clustering the face image according to the face identifier to generate a face map corresponding to the face identifier.
  • Comparing the face information of the plurality of face images, determining whether the plurality of face images are continuous shooting images comprises: comparing face information of the plurality of face images in the same image set to determine whether the plurality of face images are Continuous shooting images.
  • the method further includes: if the contact information corresponding to the face identifier is stored in the computer device, obtaining a face map corresponding to the face identifier. Send the face image of the face map to the computer device corresponding to the contact.
  • the method further includes: acquiring a plurality of face images selected by the user, and generating a plurality of face images according to the user's selection order.
  • a computer program product comprising instructions that, when run on a computer, cause the computer to:
  • performing face recognition on the image to be processed to obtain a face image;
  • acquiring face information of the face image, the face information including a face area, a face position, and a face angle;
  • comparing the face information of a plurality of face images to determine whether the plurality of face images are continuous shooting images; and if the plurality of face images are continuous shooting images, generating a moving image according to the plurality of face images.
  • generating a motion map from the plurality of face images includes at least one of the following:
  • a moving image is generated from the plurality of face images in accordance with the storage order of the plurality of face images.
  • a moving image is generated from the plurality of face images in accordance with the shooting time sequence of the plurality of face images.
  • a moving image is generated from the plurality of face images in accordance with the user's selection order.
  • generating a motion map from the plurality of face images includes at least one of the following:
  • generating a motion image according to the plurality of face images includes: extracting a face region image from the plurality of face images respectively, and continuously playing the face region image in the plurality of face images to obtain a motion map.
  • the method further includes: identifying a face identifier corresponding to the face image, and clustering the face image according to the face identifier to generate a face map corresponding to the face identifier.
  • Comparing the face information of the plurality of face images, determining whether the plurality of face images are continuous shooting images comprises: comparing face information of the plurality of face images in the same image set to determine whether the plurality of face images are Continuous shooting images.
  • the method further includes: if contact information corresponding to the face identifier is stored in the computer device, obtaining the face map corresponding to the face identifier and sending the face images in the face map to the computer device corresponding to the contact.
  • the method further includes: acquiring a plurality of face images selected by the user, and generating a motion map from the plurality of face images according to the user's selection order.
  • the embodiment of the present application also provides a computer device. As shown in FIG. 9, for convenience of description, only the parts related to the embodiments of the present application are shown; for specific technical details that are not disclosed, refer to the method part of the embodiments of the present application.
  • the computer device may be any terminal device, including a mobile phone, a tablet computer, a PDA (Personal Digital Assistant), a POS (Point of Sales) terminal, a vehicle-mounted computer, a wearable device, and the like. The following description takes a mobile phone as an example of the computer device:
  • FIG. 9 is a block diagram showing a part of a structure of a mobile phone related to a computer device according to an embodiment of the present application.
  • the mobile phone includes: a radio frequency (RF) circuit 910, a memory 920, an input unit 930, a display unit 940, a sensor 950, an audio circuit 960, a wireless fidelity (WiFi) module 970, a processor 980, a power supply 990, and other components.
  • the RF circuit 910 can be used to receive and transmit signals during information transmission and reception or during a call.
  • in particular, downlink information from the base station can be received and passed to the processor 980 for processing.
  • uplink data can also be sent to the base station.
  • RF circuits include, but are not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a Low Noise Amplifier (LNA), a duplexer, and the like.
  • RF circuitry 910 can also communicate with the network and other devices via wireless communication.
  • the above wireless communication may use any communication standard or protocol, including but not limited to Global System for Mobile communication (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), e-mail, Short Messaging Service (SMS), and the like.
  • the memory 920 can be used to store software programs and modules, and the processor 980 executes various functional applications and data processing of the mobile phone by running software programs and modules stored in the memory 920.
  • the memory 920 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application required for at least one function (such as an application of a sound playing function, an application of an image playing function, etc.);
  • the data storage area can store data (such as audio data, address book, etc.) created according to the use of the mobile phone.
  • the memory 920 can include high-speed random access memory, and can also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device.
  • the input unit 930 can be configured to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the handset 900.
  • the input unit 930 may include a touch panel 931 and other input devices 932.
  • the touch panel 931, also referred to as a touch screen, can collect touch operations performed by the user on or near it (such as operations performed by the user on or near the touch panel 931 using a finger, a stylus, or any suitable object or accessory) and drive the corresponding connection device according to a preset program.
  • the touch panel 931 can include two parts: a touch detection device and a touch controller.
  • the touch detection device detects the user's touch orientation, detects the signal generated by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into contact coordinates, and sends the coordinates to the processor 980; it can also receive commands from the processor 980 and execute them.
  • the touch panel 931 can be implemented in various types, such as resistive, capacitive, infrared, and surface acoustic wave types.
  • the input unit 930 may also include other input devices 932.
  • other input devices 932 may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control buttons, switch buttons, etc.).
  • the display unit 940 can be used to display information input by the user or information provided to the user as well as various menus of the mobile phone.
  • the display unit 940 can include a display panel 941.
  • the display panel 941 may be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED), or the like.
  • the touch panel 931 can cover the display panel 941. When the touch panel 931 detects a touch operation on or near it, it transmits the operation to the processor 980 to determine the type of the touch event, and the processor 980 then provides a corresponding visual output on the display panel 941 according to the type of the touch event.
  • although the touch panel 931 and the display panel 941 are shown in FIG. 9 as two independent components implementing the input and output functions of the mobile phone, in some embodiments the touch panel 931 and the display panel 941 may be integrated to implement the input and output functions of the phone.
  • the handset 900 can also include at least one type of sensor 950, such as a light sensor, motion sensor, and other sensors.
  • the light sensor may include an ambient light sensor and a proximity sensor; the ambient light sensor may adjust the brightness of the display panel 941 according to the brightness of the ambient light, and the proximity sensor may turn off the display panel 941 and/or the backlight when the mobile phone is moved to the ear.
  • the motion sensor may include an acceleration sensor, which can detect the magnitude of acceleration in each direction and, at rest, the magnitude and direction of gravity; it can be used to identify phone gestures (such as switching between horizontal and vertical screens) and vibration-recognition related functions (such as a pedometer or tapping). In addition, the phone can also be equipped with a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, and other sensors.
  • Audio circuitry 960, speaker 961, and microphone 962 can provide an audio interface between the user and the handset.
  • the audio circuit 960 can convert received audio data into an electrical signal and transmit it to the speaker 961, which converts it into a sound signal for output.
  • the microphone 962 converts collected sound signals into electrical signals, which the audio circuit 960 receives and converts into audio data; after the audio data is processed by the processor 980, it can be sent to another mobile phone via the RF circuit 910, or output to the memory 920 for further processing.
  • WiFi is a short-range wireless transmission technology
  • the mobile phone can help users to send and receive emails, browse web pages, and access streaming media through the WiFi module 970, which provides users with wireless broadband Internet access.
  • although FIG. 9 shows the WiFi module 970, it can be understood that it is not an essential part of the mobile phone 900 and can be omitted as needed.
  • the processor 980 is the control center of the handset; it connects the various parts of the entire handset using various interfaces and lines, and performs the phone's various functions and processes its data by running or executing software programs and/or modules stored in the memory 920 and invoking data stored in the memory 920, thereby monitoring the phone as a whole.
  • processor 980 can include one or more processing units.
  • the processor 980 can integrate an application processor and a modem processor; the application processor primarily handles the operating system, user interface, applications, and the like, while the modem processor primarily handles wireless communications. It will be appreciated that the modem processor may also not be integrated into the processor 980.
  • the handset 900 also includes a power source 990 (such as a battery) that supplies power to the various components.
  • the power source can be logically coupled to the processor 980 via a power management system, so that functions such as charging, discharging, and power consumption management are implemented through the power management system.
  • the handset 900 can also include a camera, a Bluetooth module, and the like.
  • the processor 980 included in the mobile terminal performs the following operations when executing the computer program stored in the memory:
  • the image to be processed is subjected to face recognition to obtain a face image.
  • the face information includes a face area, a face position, and a face angle.
  • generating a motion map from the plurality of face images includes at least one of the following:
  • a motion map is generated from the plurality of face images in accordance with the storage order of the plurality of face images.
  • a motion map is generated from the plurality of face images in accordance with the shooting time sequence of the plurality of face images.
  • a motion map is generated from the plurality of face images in accordance with the user's selection order.
  • generating a motion map from the plurality of face images includes: extracting a face region image from each of the plurality of face images, and continuously playing the extracted face region images to obtain the motion map.
  • the method further includes: identifying a face identifier corresponding to each face image, and clustering the face images according to their face identifiers to generate a face map corresponding to each face identifier.
  • comparing the face information of the plurality of face images to determine whether the plurality of face images are continuous shooting images comprises: comparing the face information of the plurality of face images in the same image set to determine whether the plurality of face images are continuous shooting images.
  • the method further includes: if contact information corresponding to the face identifier is stored in the computer device, obtaining the face map corresponding to the face identifier and sending the face images in the face map to the computer device corresponding to the contact.
  • the method further includes: acquiring a plurality of face images selected by the user, and generating a motion map from the plurality of face images according to the user's selection order.
  • Non-volatile memory can include read only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory.
  • Volatile memory can include random access memory (RAM), which acts as an external cache.
  • RAM is available in a variety of forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)
  • Studio Devices (AREA)

Abstract

An image processing method is provided, comprising: performing face recognition on an image to be processed to obtain face images; obtaining face information of the face images, the face information comprising a face area, a face position, and a face angle; comparing the face information of a plurality of face images to determine whether the plurality of face images are continuously captured images; and, if the plurality of face images are continuously captured images, generating a motion image from the plurality of face images.
PCT/CN2018/115675 2017-11-29 2018-11-15 Image processing method, computer device, and computer-readable storage medium WO2019105237A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201711225296.7A CN108022274B (zh) 2017-11-29 2017-11-29 图像处理方法、装置、计算机设备和计算机可读存储介质
CN201711225296.7 2017-11-29

Publications (1)

Publication Number Publication Date
WO2019105237A1 (fr) 2019-06-06

Family

ID=62077521

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/115675 WO2019105237A1 (fr) 2017-11-29 2018-11-15 Image processing method, computer device, and computer-readable storage medium

Country Status (2)

Country Link
CN (1) CN108022274B (fr)
WO (1) WO2019105237A1 (fr)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111339811A (zh) * 2019-08-27 2020-06-26 杭州海康威视系统技术有限公司 图像处理方法、装置、设备及存储介质
CN111353368A (zh) * 2019-08-19 2020-06-30 深圳市鸿合创新信息技术有限责任公司 云台摄像机、人脸特征处理方法及装置、电子设备
CN114153342A (zh) * 2020-08-18 2022-03-08 深圳市万普拉斯科技有限公司 视觉信息展示方法、装置、计算机设备和存储介质

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108022274B (zh) * 2017-11-29 2021-10-01 Oppo广东移动通信有限公司 图像处理方法、装置、计算机设备和计算机可读存储介质
CN110008673B (zh) * 2019-03-06 2022-02-18 创新先进技术有限公司 一种基于人脸识别的身份鉴权方法和装置
CN109948586B (zh) * 2019-03-29 2021-06-25 北京三快在线科技有限公司 人脸验证的方法、装置、设备及存储介质
CN110490162A (zh) * 2019-08-23 2019-11-22 北京搜狐新时代信息技术有限公司 基于人脸识别解锁功能显示人脸变化的方法、装置和系统
CN110990088B (zh) * 2019-12-09 2023-08-11 Oppo广东移动通信有限公司 数据处理方法及相关设备
CN111754612A (zh) * 2020-06-01 2020-10-09 Oppo(重庆)智能科技有限公司 动图生成方法和装置

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070254640A1 (en) * 2006-04-27 2007-11-01 Bliss Stephen J Remote control and viewfinder for mobile camera phone
CN104113682A (zh) * 2013-04-22 2014-10-22 联想(北京)有限公司 一种图像获取方法及电子设备
CN105069426A (zh) * 2015-07-31 2015-11-18 小米科技有限责任公司 相似图片判断方法以及装置
CN105630954A (zh) * 2015-12-23 2016-06-01 北京奇虎科技有限公司 一种基于照片合成动态图片的方法和装置
CN108022274A (zh) * 2017-11-29 2018-05-11 广东欧珀移动通信有限公司 图像处理方法、装置、计算机设备和计算机可读存储介质

Family Cites Families (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4092059B2 (ja) * 2000-03-03 2008-05-28 日本放送協会 画像認識装置
JP2009088687A (ja) * 2007-09-27 2009-04-23 Fujifilm Corp アルバム作成装置
JP5028225B2 (ja) * 2007-11-06 2012-09-19 オリンパスイメージング株式会社 画像合成装置、画像合成方法、およびプログラム
JP5176572B2 (ja) * 2008-02-05 2013-04-03 ソニー株式会社 画像処理装置および方法、並びにプログラム
JP4911165B2 (ja) * 2008-12-12 2012-04-04 カシオ計算機株式会社 撮像装置、顔検出方法及びプログラム
JP5821625B2 (ja) * 2011-08-29 2015-11-24 カシオ計算機株式会社 画像編集装置およびプログラム
CN104980641B (zh) * 2014-04-04 2019-01-08 宏碁股份有限公司 电子装置及其图像检视方法
CN104408402B (zh) * 2014-10-29 2018-04-24 小米科技有限责任公司 人脸识别方法及装置
JP6332864B2 (ja) * 2014-12-25 2018-05-30 カシオ計算機株式会社 画像処理装置、画像処理方法及びプログラム
CN104767933B (zh) * 2015-03-13 2018-12-21 北京畅游天下网络技术有限公司 一种具备拍照功能的便携数码设备及筛选照片的方法
CN104820675B (zh) * 2015-04-08 2018-11-06 小米科技有限责任公司 相册显示方法及装置
CN106375531A (zh) * 2016-08-29 2017-02-01 捷开通讯(深圳)有限公司 一种图片分享方法及终端
CN106980840A (zh) * 2017-03-31 2017-07-25 北京小米移动软件有限公司 脸型匹配方法、装置及存储介质
CN107358219B (zh) * 2017-07-24 2020-06-09 艾普柯微电子(上海)有限公司 人脸识别方法及装置

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070254640A1 (en) * 2006-04-27 2007-11-01 Bliss Stephen J Remote control and viewfinder for mobile camera phone
CN104113682A (zh) * 2013-04-22 2014-10-22 联想(北京)有限公司 一种图像获取方法及电子设备
CN105069426A (zh) * 2015-07-31 2015-11-18 小米科技有限责任公司 相似图片判断方法以及装置
CN105630954A (zh) * 2015-12-23 2016-06-01 北京奇虎科技有限公司 一种基于照片合成动态图片的方法和装置
CN108022274A (zh) * 2017-11-29 2018-05-11 广东欧珀移动通信有限公司 图像处理方法、装置、计算机设备和计算机可读存储介质

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111353368A (zh) * 2019-08-19 2020-06-30 深圳市鸿合创新信息技术有限责任公司 云台摄像机、人脸特征处理方法及装置、电子设备
CN111339811A (zh) * 2019-08-27 2020-06-26 杭州海康威视系统技术有限公司 图像处理方法、装置、设备及存储介质
CN111339811B (zh) * 2019-08-27 2024-02-20 杭州海康威视系统技术有限公司 图像处理方法、装置、设备及存储介质
CN114153342A (zh) * 2020-08-18 2022-03-08 深圳市万普拉斯科技有限公司 视觉信息展示方法、装置、计算机设备和存储介质

Also Published As

Publication number Publication date
CN108022274A (zh) 2018-05-11
CN108022274B (zh) 2021-10-01

Similar Documents

Publication Publication Date Title
WO2019105237A1 (fr) Image processing method, computer device, and computer-readable storage medium
CN111417028B (zh) 信息处理方法、装置、存储介质及电子设备
CN107977674B (zh) 图像处理方法、装置、移动终端及计算机可读存储介质
US10769464B2 (en) Facial recognition method and related product
WO2018010512A1 (fr) Procédé et dispositif de téléchargement ascendant d'un fichier de photographie
CN107729815B (zh) 图像处理方法、装置、移动终端及计算机可读存储介质
WO2019052316A1 (fr) Procédé et appareil de traitement d'images, support d'informations lisible par ordinateur et terminal mobile
JP2021524957A (ja) 画像処理方法およびその、装置、端末並びにコンピュータプログラム
CN106203254B (zh) 一种调整拍照方向的方法及装置
WO2019105457A1 (fr) Procédé de traitement d'image, dispositif informatique et support d'informations lisible par ordinateur
WO2020048392A1 (fr) Procédé, appareil, dispositif informatique et support de stockage de détection de virus d'application
WO2021073478A1 (fr) Procédé de reconnaissance d'informations de commentaires sur écran, procédé d'affichage, serveur et dispositif électronique
CN108683850B (zh) 一种拍摄提示方法及移动终端
JP7467667B2 (ja) 検出結果出力方法、電子機器及び媒体
CN108460817B (zh) 一种拼图方法及移动终端
CN109684277B (zh) 一种图像显示方法及终端
WO2019076377A1 (fr) Procédé de visualisation d'image et terminal mobile
CN109495638B (zh) 一种信息显示方法及终端
WO2019011108A1 (fr) Procédé de reconnaissance d'iris et produit associé
WO2019120190A1 (fr) Procédé de composition de numéro et terminal mobile
CN105335714A (zh) 照片处理方法、装置和设备
CN107864086B (zh) 信息快速分享方法、移动终端及计算机可读存储介质
US10970522B2 (en) Data processing method, electronic device, and computer-readable storage medium
WO2019052436A1 (fr) Procédé de traitement d'image, support de stockage lisible par ordinateur et terminal mobile
CN108848270B (zh) 一种截屏图像的处理方法和移动终端

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18883772

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18883772

Country of ref document: EP

Kind code of ref document: A1