US20220164920A1 - Method for processing image, electronic equipment, and storage medium - Google Patents

Method for processing image, electronic equipment, and storage medium Download PDF

Info

Publication number
US20220164920A1
Authority
US
United States
Prior art keywords
image
frame image
image sequence
transform
sequence
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
US17/334,926
Other versions
US11532069B2 (en)
Inventor
Xian HU
Wei Deng
Jun Yi
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xiaomi Technology Wuhan Co Ltd
Beijing Xiaomi Pinecone Electronic Co Ltd
Original Assignee
Xiaomi Technology Wuhan Co Ltd
Beijing Xiaomi Pinecone Electronic Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xiaomi Technology Wuhan Co Ltd, Beijing Xiaomi Pinecone Electronic Co Ltd filed Critical Xiaomi Technology Wuhan Co Ltd
Assigned to Beijing Xiaomi Pinecone Electronics Co., Ltd., Xiaomi Technology (Wuhan) Co., Ltd. reassignment Beijing Xiaomi Pinecone Electronics Co., Ltd. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: DENG, WEI, HU, Xian, YI, JUN
Publication of US20220164920A1 publication Critical patent/US20220164920A1/en
Application granted granted Critical
Publication of US11532069B2 publication Critical patent/US11532069B2/en
Active legal-status Critical Current
Adjusted expiration legal-status Critical

Links

Images

Classifications

    • G06T3/04
    • G06T3/18
    • G06T3/10
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformation in the plane of the image
    • G06T3/0056Geometric image transformation in the plane of the image the transformation method being selected according to the characteristics of the input image
    • G06K9/00268
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00Animation
    • G06T13/203D [Three Dimensional] animation
    • G06T13/403D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00Animation
    • G06T13/802D [Two Dimensional] animation, e.g. using sprites
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/38Registration of image sequences
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • G06V40/171Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30196Human being; Person
    • G06T2207/30201Face
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2210/00Indexing scheme for image generation or computer graphics
    • G06T2210/44Morphing

Definitions

  • Some electronic equipment support face transformation.
  • a user may provide several images with faces to generate a high-quality face transform video automatically, providing a novel and fast face special effect experience, with a controllable number of video frames and a controllable speed of face transformation.
  • geometric transformation tends to lead to a great change in the location of a feature point, which may lead to phenomena such as overlap, misalignment, etc., resulting in an unstable region of a frame image generated by the transformation, as well as subtle jitters in a synthesized video, impacting user experience greatly.
  • the present disclosure may relate to the field of image transform.
  • the present disclosure provides a method for processing an image, electronic equipment, and a storage medium.
  • a method for processing an image including:
  • the processor may be configured to implement acquiring at least two images; acquiring at least two crop images by cropping the at least two images for face-containing images; of the at least two crop images, performing triangular patch deformation on a first image and a second image that neighbour each other, generating a first triangular patch deformation frame image sequence from the first image to the second image, and a second triangular patch deformation frame image sequence from the second image to the first image; performing similarity transformation on each image sequence of the first triangular patch deformation frame image sequence, acquiring a first transform frame image sequence; performing similarity transformation on each image sequence of the second triangular patch deformation frame image sequence, acquiring a second transform frame image sequence; fusing the first transform frame image sequence and the second transform frame image sequence, acquiring a video frame sequence corresponding to the first image and the second image; and of the at least two crop images, coding a video frame sequence generated by all neighbour images, acquiring a destined video.
  • a non-transitory computer-readable storage medium having stored therein instructions which, when executed by a processor of electronic equipment, cause the electronic equipment to implement acquiring at least two images; acquiring at least two crop images by cropping the at least two images for face-containing images; of the at least two crop images, performing triangular patch deformation on a first image and a second image that neighbour each other, generating a first triangular patch deformation frame image sequence from the first image to the second image, and a second triangular patch deformation frame image sequence from the second image to the first image; performing similarity transformation on each image sequence of the first triangular patch deformation frame image sequence, acquiring a first transform frame image sequence; performing similarity transformation on each image sequence of the second triangular patch deformation frame image sequence, acquiring a second transform frame image sequence; fusing the first transform frame image sequence and the second transform frame image sequence, acquiring a video frame sequence corresponding to the first image and the second image; and of the at least two crop images, coding a video frame sequence generated by all neighbour images, acquiring a destined video.
  • FIG. 1 is a flowchart of a method for processing an image according to examples of the present disclosure.
  • FIG. 2 is a schematic diagram of a structure of a device for processing an image according to examples of the present disclosure.
  • FIG. 3 is a block diagram of electronic equipment according to examples of the present disclosure.
  • Although terms such as first, second, and third may be adopted in an example herein to describe various kinds of information, such information should not be limited to such a term. Such a term is merely for distinguishing information of the same type.
  • first information may also be referred to as the second information.
  • second information may also be referred to as the first information.
  • the term “if” as used herein may be interpreted as “when”, “while”, or “in response to determining that”.
  • a block diagram shown in the accompanying drawings may be a functional entity which may not necessarily correspond to a physically or logically independent entity.
  • Such a functional entity may be implemented in form of software, in one or more hardware modules or integrated circuits, or in different networks and/or processor devices and/or microcontroller devices.
  • a terminal may sometimes be referred to as a smart terminal.
  • the terminal may be a mobile terminal.
  • the terminal may also be referred to as User Equipment (UE), a Mobile Station (MS), etc.
  • a terminal may be equipment or a chip provided therein that provides a user with a voice and/or data connection, such as handheld equipment, onboard equipment, etc., with a wireless connection function.
  • Examples of a terminal may include a mobile phone, a tablet computer, a notebook computer, a palm computer, a Mobile Internet Device (MID), wearable equipment, Virtual Reality (VR) equipment, Augmented Reality (AR) equipment, a wireless terminal in industrial control, a wireless terminal in unmanned drive, a wireless terminal in remote surgery, a wireless terminal in a smart grid, a wireless terminal in transportation safety, a wireless terminal in smart city, a wireless terminal in smart home, etc.
  • FIG. 1 is a flowchart of a method for processing an image according to examples of the present disclosure. As shown in FIG. 1 , a method for processing an image according to examples of the present disclosure includes steps as follows.
  • multiple face images to be transformed may be input. After acquiring the images, face recognition is performed on at least two images acquired to determine a face in the at least two images.
  • an input face image is identified in order to detect whether there is a face in the image, and to determine whether the face in the image meets a corresponding requirement, for example, so as to select an image with a clear and complete face.
  • the requirement may be, for example, whether a face detection frame output by face recognition intersects an image boundary, whether the size of a recognized face is too small, etc.
  • a face image meeting a corresponding requirement is processed. That is, an image that does not meet the requirement is excluded.
  • Examples of an image that does not meet the requirement include: an image that does not include a face, an image with a face detection frame output by face recognition intersecting an image boundary, an image in which the size of a recognized face is too small, etc.
  • a face in an image may be determined via the face recognition frame technology. Since the face is to be transformed, the image content unrelated to the face may be removed. That is, a face in an image may be recognized and cropped via the face detection frame technology. In examples of the present disclosure, it is also possible to recognize a remaining face in the image. When it is determined that the ratio of the region of a face to the entire image is too small, that is, when the face is too small, the small face is removed.
  • a CenterFace network may be used to detect a face in the at least two images to determine whether a face image is included, whether an included face image meets a processing requirement, etc.
  • a face located at the center of the image or a face located with a minimum deviation from the center is taken as an effective face.
  • the effective face is determined as a face to be processed.
  • only one face image is kept in an image by re-cropping the image including multiple faces.
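The screening logic described in the bullets above may be sketched as follows. This is a minimal illustration in Python; the detector itself (e.g., a CenterFace network) is assumed and not shown, the box format and the minimum face-area ratio are assumptions.

```python
import numpy as np

def select_effective_face(image, detections, min_face_ratio=0.05):
    """Screen face detections for one image.

    detections: list of (x1, y1, x2, y2) boxes from a face detector such
    as CenterFace (assumed, not shown). Returns the effective face box,
    or None if the image should be excluded.
    """
    h, w = image.shape[:2]
    valid = []
    for (x1, y1, x2, y2) in detections:
        # Exclude boxes that intersect the image boundary.
        if x1 <= 0 or y1 <= 0 or x2 >= w - 1 or y2 >= h - 1:
            continue
        # Exclude faces whose area ratio to the whole image is too small.
        if (x2 - x1) * (y2 - y1) / float(w * h) < min_face_ratio:
            continue
        valid.append((x1, y1, x2, y2))
    if not valid:
        return None  # no usable face: the image does not meet the requirement
    # Keep the face with the minimum deviation from the image center.
    cx, cy = w / 2.0, h / 2.0
    return min(valid, key=lambda b: np.hypot((b[0] + b[2]) / 2.0 - cx,
                                             (b[1] + b[3]) / 2.0 - cy))
```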
  • At least two crop images are acquired by cropping the at least two images for face-containing images.
  • feature points of a face contained in the at least two images may be determined.
  • a first region may be determined based on the feature points for the face-containing images in the at least two images.
  • the at least two crop images may be acquired by cropping the face-containing images based on the first region.
  • after a face image in an image input by a user has been determined, feature points in the face are to be identified.
  • processing is to be focused on the feature points in the face when face transformation is performed.
  • the effect of display of the feature points in the face determines the effect of display of the face.
  • Related transformation is to be performed on the feature points in the face to render a transform video more stable and with improved transform effect.
  • feature points in a face may include a front feature point such as an eye, a nose tip, a mouth corner point, an eyebrow, a cheek, etc., and may also include a contour point such as an eye, a nose, lips, an eyebrow, a cheek, etc.
  • the ear and a contour point thereof may be determined as a feature point of the face.
  • the first region may be determined based on the feature points.
  • the face-containing images may be cropped based on the first region and the size of the destined object.
  • the crop image may be scaled to the size of the destined object.
  • the first region may be determined based on the feature points as follows.
  • a circumscription frame circumscribing the feature points may be determined according to location information of the feature points.
  • a width of the first region may be determined according to a center point of the circumscription frame and an image width boundary of the face image to be processed.
  • a height of the first region may be determined according to a preset aspect ratio and the width of the first region.
  • the circumscription frame may be a rectangular circumscription frame, a circular circumscription frame, a polygonal circumscription frame, etc., as long as a clear face image may be acquired. The face is located at the center of the image as far as possible in a non-deformable manner.
  • the specific shape of the circumscription frame is not limited in examples of the present disclosure.
  • the rectangular circumscription frame circumscribing the feature points of the face is determined according to coordinates of the feature points in the face image.
  • the width and the height of the rectangular frame of the face are denoted by w and h, respectively.
  • a Practical Facial Landmark Detector (PFLD) network may be used to locate landmarks of a crop face image to determine the feature points of the face. Coordinates of the center of the rectangular frame of the face are denoted by (x_c, y_c).
  • the width and the height of the source image are w_src and h_src, respectively.
  • the destined width and height of the ultimate generated video are denoted by w_dst and h_dst, respectively.
  • the bottom left vertex of the rectangular frame of the face is taken as the origin.
  • the height of the crop image (corresponding to the first region) is computed as h_crop = w_crop / r_dst, where w_crop denotes the width of the crop image and r_dst = w_dst / h_dst denotes the destined aspect ratio.
  • the image may be cropped first in the height direction. That is, of the distances from the center to the upper boundary and to the lower boundary of the image, the smaller one is kept, and the image is cropped at the opposite side.
  • the to-be-processed crop image is scaled using a scaling ratio computed from its height and the height of the destined image.
  • the image acquired may be cropped via the first region, and scaled to the width w dst and the height h dst .
  • the face may be made to be located as close to the center of the image as possible without distorting and deforming the face, which meets popular aesthetics.
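A minimal sketch of this cropping step follows, assuming landmarks from a PFLD-style model and a symmetric width rule around the face center (the exact width rule in the text is underspecified, so treat it as an assumption), with OpenCV used for resizing:

```python
import cv2
import numpy as np

def crop_face_region(image, landmarks, w_dst, h_dst):
    """Crop around the face and scale to the destined size.

    landmarks: (K, 2) array of facial feature points, e.g. from a
    PFLD-style landmark model (assumed, not shown).
    """
    h_src, w_src = image.shape[:2]
    x_min, y_min = landmarks.min(axis=0)
    x_max, y_max = landmarks.max(axis=0)
    x_c, y_c = (x_min + x_max) / 2.0, (y_min + y_max) / 2.0  # frame center

    # Assumed width rule: the widest window symmetric about the face
    # center that still fits inside the source image.
    w_crop = 2.0 * min(x_c, w_src - x_c)
    r_dst = w_dst / float(h_dst)
    h_crop = w_crop / r_dst  # height from the destined aspect ratio
    if h_crop > h_src:       # keep the ratio by shrinking the width instead
        h_crop = float(h_src)
        w_crop = h_crop * r_dst

    # Crop in the height direction on the side farther from the center.
    top = min(max(0.0, y_c - h_crop / 2.0), h_src - h_crop)
    left = min(max(0.0, x_c - w_crop / 2.0), w_src - w_crop)
    crop = image[int(top):int(top + h_crop), int(left):int(left + w_crop)]
    return cv2.resize(crop, (w_dst, h_dst))  # scale to the destined size
```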
  • triangular patch deformation is performed on a first image and a second image that neighbour each other, generating a first triangular patch deformation frame image sequence from the first image to the second image, and a second triangular patch deformation frame image sequence from the second image to the first image.
  • a first coordinate set of feature points of the first image and a second coordinate set of feature points of the second image may be determined.
  • a second similarity transformation matrix and a first similarity transformation matrix between the first coordinate set and the second coordinate set may be computed.
  • a first location difference between a feature point of the first image transformed by the first similarity transformation matrix and a corresponding feature point of the second image may be determined.
  • a second location difference between a feature point of the second image transformed by the second similarity transformation matrix and a corresponding feature point of the first image may be determined.
  • a first feature point location track from the first image to the second image may be computed according to the first location difference.
  • a second feature point location track from the second image to the first image may be computed according to the second location difference.
  • the first triangular patch deformation frame image sequence may be acquired according to the first feature point location track.
  • the second triangular patch deformation frame image sequence may be acquired according to the second feature point location track.
  • the first feature point location track from the first image to the second image may be computed according to the first location difference as follows.
  • the first feature point location track may be computed as: s_{i:A→B} = (i / N) · D_{A→B} + s_A.
  • the s_{i:A→B} may be the first feature point location track.
  • the N may be a number of transform frames transforming the first image into the second image.
  • the i may be an ith frame image in transform image frames.
  • the i may be an integer greater than 0 and less than or equal to the N.
  • the s_A may be the first coordinate set.
  • the D_{A→B} may be the first location difference.
  • the second feature point location track from the second image to the first image may be computed according to the second location difference as follows.
  • the second feature point location track may be computed as: s_{i:B→A} = (i / N) · D_{B→A} + s_B.
  • the s_{i:B→A} may be the second feature point location track.
  • the s_B may be the second coordinate set.
  • the D_{B→A} may be the second location difference.
  • in the following, the first image and the second image that neighbour each other are denoted as images A and B, respectively, merely to illustrate the nature of the technical solution of examples of the present disclosure, instead of limiting the technical means thereof.
  • triangular patch deformation is performed on neighbour images among the crop images to generate, for all neighbour images such as the neighbour images A and B, a triangular patch deformation frame image sequence from image A to image B and a triangular patch deformation frame image sequence from image B to image A, specifically as follows.
  • the similarity transformation matrices between s_A and s_B, i.e., between the two neighbour face images A and B, including the similarity transformation matrix from A to B (corresponding to the first similarity transformation matrix) and the similarity transformation matrix from B to A (corresponding to the second similarity transformation matrix), denoted by T_{A→B} and T_{B→A}, respectively, may be solved using a similarity transformation solution such as the estimateAffinePartial2D function (as an implementation). The s_A and the s_B denote the coordinate sets of the feature points of the images A and B, respectively.
  • the difference between the location of a feature point on image A, subject to similarity transformation, and the location of a corresponding feature point on image B, denoted by D_{A→B} (corresponding to the first location difference), is computed.
  • the difference between the location of a feature point on image B, subject to similarity transformation, and the location of a corresponding feature point on image A, denoted by D_{B→A} (corresponding to the second location difference), is computed.
  • the destined location track of the triangle change of the feature points is acquired by breaking down the difference in the location by frame. Assume that a set number of N transform image frames (corresponding to the number of transform frames) are generated between A and B. As an implementation, the N may take values 25, 30, 35, 40, etc. Then, the feature point location track (corresponding to the first feature point location track) transforming the face image A to the face image B may be computed according to D_{A→B}, as: s_{i:A→B} = (i / N) · D_{A→B} + s_A.
  • the s_{i:A→B} denotes the location of the feature points of the ith frame image transforming A into B by triangular deformation.
  • the i is an integer greater than 0 and less than or equal to the N.
  • the feature point location track (corresponding to the second feature point location track) transforming the face image B to the face image A may be computed according to D_{B→A}, as: s_{i:B→A} = (i / N) · D_{B→A} + s_B.
  • the s_{i:B→A} denotes the location of the feature points of the ith frame image transforming B into A by triangular deformation.
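The bullets above may be sketched as follows, assuming float32 landmark arrays of shape (K, 2). estimateAffinePartial2D is the OpenCV similarity solver the text names; the sign convention of the location differences is an assumption:

```python
import cv2
import numpy as np

def location_tracks(s_a, s_b, n_frames):
    """Per-frame feature point tracks between two landmark sets.

    s_a, s_b: float32 arrays of shape (K, 2) holding corresponding
    landmarks of images A and B.
    """
    m_ab, _ = cv2.estimateAffinePartial2D(s_a, s_b)  # 2x3 similarity, A -> B
    m_ba, _ = cv2.estimateAffinePartial2D(s_b, s_a)  # 2x3 similarity, B -> A
    # Location differences after similarity alignment (sign assumed).
    d_ab = cv2.transform(s_a[None], m_ab)[0] - s_b
    d_ba = cv2.transform(s_b[None], m_ba)[0] - s_a
    # s_{i:A->B} = (i/N) * D_{A->B} + s_A, and symmetrically for B -> A.
    tracks_ab = [(i / n_frames) * d_ab + s_a for i in range(1, n_frames + 1)]
    tracks_ba = [(i / n_frames) * d_ba + s_b for i in range(1, n_frames + 1)]
    return tracks_ab, tracks_ba
```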
  • Triangular patch partitioning is performed using a triangular patch partitioning method according to the feature point location track s_{i:A→B} transforming the image A into the image B by triangular patch deformation.
  • a Delaunay triangulation algorithm may be used to triangulate the image.
  • the midpoints of the four sides and the four vertices of the image A are included together with the feature points, giving a total of N+8 points, to triangulate the image A (including the face and the background).
  • the f_i denotes the ith frame image of the deformation frame image sequence F_{A→B} = {f_1, f_2, . . . , f_N} acquired by performing triangle deformation from the image A.
  • the h_i denotes the ith frame image of the deformation frame image sequence H_{B→A} = {h_1, h_2, . . . , h_N} acquired by performing triangle deformation from the image B.
  • the N denotes the number of frame images generated by transforming the image A into the image B or transforming the image B into the image A, as sketched below.
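A minimal sketch of this triangulation-and-warp step follows, using scipy's Delaunay in place of whichever triangulation the implementation uses, and warping each triangle with an affine map; the helper names and the fixed border points are illustrative assumptions:

```python
import cv2
import numpy as np
from scipy.spatial import Delaunay

def add_border_points(points, w, h):
    # Append the four corners and four side midpoints (the "+8" points),
    # so the triangulation covers the face and the background alike.
    extra = np.array([[0, 0], [w - 1, 0], [0, h - 1], [w - 1, h - 1],
                      [w // 2, 0], [w // 2, h - 1], [0, h // 2], [w - 1, h // 2]],
                     dtype=np.float32)
    return np.vstack([points.astype(np.float32), extra])

def warp_triangles(src_img, src_pts, dst_pts, triangles):
    # Piecewise-affine warp: map every source triangle onto its destination.
    out = np.zeros_like(src_img)
    for tri in triangles:
        s = src_pts[tri].astype(np.float32)
        d = dst_pts[tri].astype(np.float32)
        m = cv2.getAffineTransform(s, d)
        warped = cv2.warpAffine(src_img, m, (src_img.shape[1], src_img.shape[0]))
        mask = np.zeros(src_img.shape[:2], dtype=np.uint8)
        cv2.fillConvexPoly(mask, d.astype(np.int32), 255)
        out[mask > 0] = warped[mask > 0]
    return out

def deformation_sequence(img_a, s_a, d_ab, n_frames):
    # Frames f_1..f_N deforming image A along the track s_{i:A->B}.
    h, w = img_a.shape[:2]
    pts0 = add_border_points(s_a, w, h)
    diff = np.vstack([d_ab, np.zeros((8, 2), np.float32)])  # borders stay put
    tris = Delaunay(pts0).simplices  # triangulate once, on the source layout
    frames = []
    for i in range(1, n_frames + 1):
        pts_i = (i / n_frames) * diff + pts0  # frame-i feature point layout
        frames.append(warp_triangles(img_a, pts0, pts_i, tris))
    return frames
```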
  • Similarity transformation may be performed on each image sequence of the first triangular patch deformation frame image sequence, acquiring the first transform frame image sequence, as follows.
  • a first similarity transformation matrix transforming the first image into the second image may be determined.
  • a step interval of a first transform parameter of the first similarity transformation matrix may be determined according to a number of transform frames transforming the first image into the second image.
  • a first similarity transformation sub-matrix for each transform frame image sequence of the first transform frame image sequence may be computed based on the first similarity transformation matrix and the step interval of the first transform parameter.
  • the first transform frame image sequence may be acquired by performing multiplication on each triangular patch deformation frame image in the first triangular patch deformation frame image sequence according to the first similarity transformation sub-matrix.
  • Similarity transformation may be performed on each image sequence of the second triangular patch deformation frame image sequence, acquiring the second transform frame image sequence, as follows.
  • a second similarity transformation sub-matrix may be acquired by performing inverse transformation on the first similarity transformation sub-matrix.
  • the second transformation frame image sequence may be acquired by performing multiplication on each triangular patch deformation frame image in the second triangular patch deformation frame image sequence according to the second similarity transformation sub-matrix.
  • the second transform frame image sequence may be acquired as follows.
  • a second similarity transformation matrix transforming the second image into the first image may be determined.
  • a step interval of a second transform parameter of the second similarity transformation matrix may be determined according to a number of transform frames transforming the second image into the first image.
  • a second similarity transformation sub-matrix for each transform frame image sequence of the second transform frame image sequence may be computed based on the second similarity transformation matrix and the step interval of the second transform parameter.
  • the second transformation frame image sequence may be acquired by performing multiplication on each triangular patch deformation frame image in the second triangular patch deformation frame image sequence according to the second similarity transformation sub-matrix.
  • the number of transform frames transforming the second image into the first image is the same as the number of transform frames transforming the first image into the second image.
  • the similarity transformation matrix T_{B→A} (corresponding to the second similarity transformation matrix) transforming the image B into the image A may be expressed as:
  • T_{B→A} = [ s·r_11, s·r_12, t_x ; s·r_21, s·r_22, t_y ; 0, 0, 1 ]
  • the s denotes a scaling factor.
  • the r_11, r_12, r_21, and r_22 denote rotation factors.
  • the t_x and the t_y denote translation factors.
  • a matrix parameter is decomposed, acquiring a parameter step interval for each frame transformation.
  • the parameter (corresponding to the second transformation parameter) may include a scaling parameter, a rotation parameter, a translation parameter, etc.
  • the computation formulas are as follows: Δs = ln(s) / N; Δθ = θ / N; Δt_x = t_x / N; Δt_y = t_y / N.
  • the Δs denotes the step interval of the scaling parameter; the scaling is stepped in the logarithmic domain.
  • the e is the constant (base) in the logarithm operation, so that the scaling of the ith frame is recovered as e^{i·Δs}.
  • the Δθ denotes the parameter step interval of the rotation parameter.
  • the θ denotes a rotation angle corresponding to the rotation factors.
  • the Δt_x and the Δt_y denote parameter step intervals of the translation parameters in the x direction and the y direction, respectively.
  • the similarity transformation matrix corresponding to each frame similarity transformation transforming the image B into the image A is solved according to the parameter step intervals of each frame transformation described above, with a specific construction as follows: T_{i:B→A} = [ e^{i·Δs}·cos(i·Δθ), −e^{i·Δs}·sin(i·Δθ), i·Δt_x ; e^{i·Δs}·sin(i·Δθ), e^{i·Δs}·cos(i·Δθ), i·Δt_y ; 0, 0, 1 ].
  • the T_{i:B→A} denotes the similarity transformation matrix (corresponding to the second similarity transformation sub-matrix) corresponding to similarity transformation transforming the image B into the ith frame of A.
  • the similarity transformation matrix (corresponding to the first similarity transformation sub-matrix) corresponding to similarity transformation transforming the image A into the ith frame of B may further be acquired based on T_{i:B→A}, as follows: T_{i:A→B} = T_{i:B→A} · T_{B→A}^{−1}.
  • the T_{i:A→B} denotes the similarity transformation matrix (corresponding to the first similarity transformation sub-matrix) corresponding to transforming the image A into the ith frame of the image B.
  • the T_{B→A}^{−1} denotes the inverse matrix of the similarity transformation matrix T_{B→A} transforming the image B into the image A. It should be noted that the similarity transformation matrix corresponding to transforming the image B into the ith frame of the image A may be determined based on the inverse matrix of the similarity transformation matrix corresponding to transforming the image A into the ith frame of the image B.
  • that is, the similarity transformation matrix corresponding to transforming the image B into the ith frame of the image A may be computed in the mode of computing the inverse matrix of the similarity transformation matrix corresponding to transforming the image A into the ith frame of the image B, details of which are not repeated.
  • a similarity transformation frame image sequence FT_{A→B} = {ft_1, ft_2, . . . , ft_N} is acquired by performing similarity transformation on the ith frame of the deformation frame image sequence F_{A→B} according to T_{i:A→B}. The ft_i denotes the ith frame image acquired by transforming the image A.
  • a similarity transformation frame image sequence HT_{B→A} = {ht_1, ht_2, . . . , ht_N} is acquired by performing similarity transformation on the ith frame of the deformation frame image sequence H_{B→A} according to T_{i:B→A}. The ht_i denotes the ith frame image acquired by transforming the image B.
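The decomposition and per-frame reconstruction described above can be sketched like this; the cos/sin layout of the rotation factors and the matrix product for the reverse direction follow the formulas as reconstructed here, so treat it as an assumption-laden illustration rather than the patent's exact implementation:

```python
import numpy as np

def per_frame_similarity(t_ba, n_frames):
    """Break T_{B->A} into per-frame similarity sub-matrices.

    t_ba: 3x3 homogeneous similarity matrix (pad a 2x3 OpenCV result
    with the row [0, 0, 1]).
    """
    a, b = t_ba[0, 0], t_ba[1, 0]
    scale = np.hypot(a, b)            # scaling factor s
    theta = np.arctan2(b, a)          # rotation angle theta
    tx, ty = t_ba[0, 2], t_ba[1, 2]
    d_s, d_th = np.log(scale) / n_frames, theta / n_frames  # step intervals
    d_tx, d_ty = tx / n_frames, ty / n_frames
    mats = []
    for i in range(1, n_frames + 1):
        s_i, th_i = np.exp(i * d_s), i * d_th
        mats.append(np.array([[s_i * np.cos(th_i), -s_i * np.sin(th_i), i * d_tx],
                              [s_i * np.sin(th_i),  s_i * np.cos(th_i), i * d_ty],
                              [0.0, 0.0, 1.0]]))
    return mats

def reverse_per_frame(mats_ba, t_ba):
    # T_{i:A->B} = T_{i:B->A} . T_{B->A}^{-1}, per the formula above.
    t_ba_inv = np.linalg.inv(t_ba)
    return [m @ t_ba_inv for m in mats_ba]
```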
  • the first transform frame image sequence and the second transform frame image sequence are fused, acquiring a video frame sequence corresponding to the first image and the second image.
  • the first transform frame image sequence and the second transform frame image sequence may be fused as follows.
  • a first weight and a second weight may be determined.
  • the first weight may be a weight of an ith frame image of the first transform frame image sequence during fusion.
  • the second weight may be a weight of an ith frame image of the second transform frame image sequence during fusion.
  • the i may be greater than 0 and less than or equal to a number of frames of the first transform frame image sequence.
  • Each pixel of the ith frame image of the first transform frame image sequence may be multiplied by the first weight, acquiring a first to-be-fused image.
  • the ith frame image of the second transform frame image sequence may be multiplied by the second weight, acquiring a second to-be-fused image.
  • Each pixel in the first to-be-fused image and each pixel in the second to-be-fused image may be superimposed, respectively, acquiring an ith frame fusion image corresponding to the ith frame image of the first transform frame image sequence and the ith frame image of the second transform frame image sequence.
  • All fusion images corresponding to the first transform frame image sequence and the second transform frame image sequence may form the video frame sequence.
  • a fusion formula may be expressed as: q_i = (1 − w_i) · ft_i + w_i · ht_i.
  • the q_i denotes the ith frame image of the video frame sequence Q = {q_1, q_2, . . . , q_N}.
  • the w_i denotes the weight of the weighted fusion.
  • the non-linear design of the w i may make the fusion process uneven, and the generated transform video more rhythmic.
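A minimal sketch of this weighted fusion follows; the smoothstep-style curve is an assumed non-linear weight design (the text does not spell out the exact curve), and the blend direction matches the formula as reconstructed above:

```python
import numpy as np

def fuse_sequences(ft_frames, ht_frames):
    # Cross-fade the two transform frame sequences with a non-linear
    # weight so the transition feels more rhythmic than a linear fade.
    n = len(ft_frames)
    fused = []
    for i in range(1, n + 1):
        t = i / n
        w = t * t * (3.0 - 2.0 * t)  # assumed non-linear weight w_i in [0, 1]
        q = (1.0 - w) * ft_frames[i - 1].astype(np.float32) \
            + w * ht_frames[i - 1].astype(np.float32)
        fused.append(np.clip(q, 0, 255).astype(np.uint8))
    return fused
```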
  • of the at least two crop images, a video frame sequence generated by all neighbour images is coded, acquiring a destined video.
  • the video frame sequences generated by all neighbour images, such as the image sequence Q generated by the images A and B, are coded according to a set frame rate to synthesize a final face transform special-effect video, as sketched below.
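Encoding may then be sketched with OpenCV's VideoWriter; the container/codec choice (mp4v) and the frame rate are assumptions:

```python
import cv2

def encode_video(frame_sequences, path, fps=30):
    # Concatenate the per-pair fused sequences and encode at a set frame rate.
    frames = [f for seq in frame_sequences for f in seq]
    h, w = frames[0].shape[:2]
    writer = cv2.VideoWriter(path, cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
    for f in frames:
        writer.write(f)
    writer.release()

# e.g. encode_video([frames_ab, frames_bc], "morph.mp4", fps=30)
```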
  • a user may input an image of the user's own face and an image of a transform object such as a star.
  • the profile image of the user may be transformed gradually into the profile image of the star by video transformation.
  • the user may also input any profile image.
  • the user may input profile images of the user from infancy, childhood, teenage years, adulthood, and so on, up to the current period.
  • by transplanting the technical solution of examples of the present disclosure to electronic equipment such as a mobile phone, a notebook computer, a game machine, a tablet computer, a personal digital assistant, a television, etc., the user may view a video of the user gradually transforming from infancy to the current image.
  • FIG. 2 is a schematic diagram of a structure of a device for processing an image according to examples of the present disclosure. As shown in FIG. 2, the device for processing an image according to examples of the present disclosure includes units as follows.
  • An acquiring unit 20 is configured to acquire at least two images.
  • a cropping unit 21 is configured to acquire at least two crop images by cropping the at least two images for face-containing images.
  • a triangular patch deformation unit 22 is configured to, of the at least two crop images, perform triangular patch deformation on a first image and a second image that neighbour each other, generating a first triangular patch deformation frame image sequence from the first image to the second image, and a second triangular patch deformation frame image sequence from the second image to the first image.
  • a similarity transformation unit 23 is configured to perform similarity transformation on each image sequence of the first triangular patch deformation frame image sequence, acquiring a first transform frame image sequence; and perform similarity transformation on each image sequence of the second triangular patch deformation frame image sequence, acquiring a second transform frame image sequence.
  • a fusing unit 24 is configured to fuse the first transform frame image sequence and the second transform frame image sequence, acquiring a video frame sequence corresponding to the first image and the second image.
  • a video generating unit 25 is configured to, of the at least two crop images, code a video frame sequence generated by all neighbour images, acquiring a destined video.
  • the cropping unit 21 is further configured to:
  • the cropping unit 21 is further configured to:
  • the triangular patch deformation unit 22 is further configured to:
  • the triangular patch deformation unit 22 is further configured to:
  • s_{i:A→B} = (i / N) · D_{A→B} + s_A.
  • the s_{i:A→B} is the first feature point location track.
  • the N is a number of transform frames transforming the first image into the second image.
  • the i is an ith frame image in transform image frames.
  • the i is an integer greater than 0 and less than or equal to the N.
  • the s_A is the first coordinate set.
  • the D_{A→B} is the first location difference.
  • the triangular patch deformation unit is further configured to compute the second feature point location track as: s_{i:B→A} = (i / N) · D_{B→A} + s_B.
  • the s_{i:B→A} is the second feature point location track.
  • the s_B is the second coordinate set.
  • the D_{B→A} is the second location difference.
  • the similarity transformation unit 23 is further configured to:
  • the similarity transformation unit 23 may be further configured to:
  • the similarity transformation unit may be further configured to:
  • the fusing unit 24 is further configured to:
  • the first weight being a weight of an ith frame image of the first transform frame image sequence during fusion
  • the second weight being a weight of an ith frame image of the second transform frame image sequence during fusion
  • the i being greater than 0 and less than or equal to a number of frames of the first transform frame image sequence
  • All fusion images corresponding to the first transform frame image sequence and the second transform frame image sequence may form the video frame sequence.
  • the feature points include at least one of:
  • an eye, a nose tip, a mouth corner point, an eyebrow, a cheek, and a contour point of an eye, a nose, lips, an eyebrow, and a cheek.
  • the acquiring unit 20 , the cropping unit 21 , the triangular patch deformation unit 22 , the similarity transformation unit 23 , the fusing unit 24 , and the video generating unit 25 , etc. may be implemented by one or more Central Processing Units (CPU), Graphics Processing Units (GPU), base processors (BP), Application Specific Integrated Circuits (ASIC), Digital Signal Processors (DSP), Programmable Logic Devices (PLD), Complex Programmable Logic Devices (CPLD), Field-Programmable Gate Arrays (FPGA), general purpose processors, controllers, Micro Controller Units (MCU), microprocessors, or other electronic components, or may be implemented in conjunction with one or more radio frequency (RF) antennas, for performing the foregoing method for processing an image.
  • a module as well as unit of the device for processing an image according to an aforementioned example herein may perform an operation in a mode elaborated in an aforementioned example of the device herein, which will not be repeated here.
  • FIG. 3 is a block diagram of electronic equipment 800 according to an illustrative example. As shown in FIG. 3 , the electronic equipment 800 supports multi-screen output.
  • the electronic equipment 800 may include one or more components as follows: a processing component 802 , a memory 804 , a power component 806 , a multimedia component 808 , an audio component 810 , an Input/Output (I/O) interface 812 , a sensor component 814 , and a communication component 816 .
  • the processing component 802 generally controls an overall operation of the display equipment, such as operations associated with display, a telephone call, data communication, a camera operation, a recording operation, etc.
  • the processing component 802 may include one or more processors 820 to execute instructions so as to complete all or some steps of the method.
  • the processing component 802 may include one or more modules to facilitate interaction between the processing component 802 and other components.
  • the processing component 802 may include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802 .
  • the memory 804 is configured to store various types of data to support operation on the electronic equipment 800 . Examples of these data include instructions of any application or method configured to operate on the electronic equipment 800 , contact data, phonebook data, messages, images, videos, and/or the like.
  • the memory 804 may be realized by any type of volatile or non-volatile storage equipment or combination thereof, such as Static Random Access Memory (SRAM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Erasable Programmable Read-Only Memory (EPROM), Programmable Read-Only Memory (PROM), Read-Only Memory (ROM), magnetic memory, flash memory, magnetic disk, or compact disk.
  • the power component 806 supplies electric power to various components of the electronic equipment 800 .
  • the power component 806 may include a power management system, one or more power supplies, and other components related to generating, managing and distributing electric power for the electronic equipment 800 .
  • the multimedia component 808 includes a screen providing an output interface between the electronic equipment 800 and a user.
  • the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a TP, the screen may be realized as a touch screen to receive an input signal from a user.
  • the TP includes one or more touch sensors for sensing touch, slide and gestures on the TP. The touch sensors not only may sense the boundary of a touch or slide move, but also detect the duration and pressure related to the touch or slide move.
  • the multimedia component 808 includes a front camera and/or a rear camera. When the electronic equipment 800 is in an operation mode such as a shooting mode or a video mode, the front camera and/or the rear camera may receive external multimedia data. Each of the front camera and/or the rear camera may be a fixed optical lens system or may have a focal length and be capable of optical zooming.
  • the audio component 810 is configured to output and/or input an audio signal.
  • the audio component 810 includes a microphone (MIC).
  • the MIC is configured to receive an external audio signal.
  • the received audio signal may be further stored in the memory 804 or may be sent via the communication component 816 .
  • the audio component 810 further includes a loudspeaker configured to output the audio signal.
  • the I/O interface 812 provides an interface between the processing component 802 and a peripheral interface module.
  • the peripheral interface module may be a keypad, a click wheel, a button, etc. These buttons may include but are not limited to: a homepage button, a volume button, a start button, and a lock button.
  • the sensor component 814 includes one or more sensors for assessing various states of the electronic equipment 800 .
  • the sensor component 814 may detect an on/off state of the electronic equipment 800 and relative positioning of components such as the display and the keypad of the electronic equipment 800 .
  • the sensor component 814 may further detect a change in the location of the electronic equipment 800 or of a component of the electronic equipment 800 , whether there is contact between the electronic equipment 800 and a user, the orientation or acceleration/deceleration of the electronic equipment 800 , and a change in the temperature of the electronic equipment 800 .
  • the sensor component 814 may include a proximity sensor configured to detect existence of a nearby object without physical contact.
  • the sensor component 814 may further include an optical sensor such as a Complementary Metal-Oxide-Semiconductor (CMOS) or Charge-Coupled-Device (CCD) image sensor used in an imaging application.
  • the sensor component 814 may further include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
  • the communication component 816 is configured to facilitate wired or wireless/radio communication between the electronic equipment 800 and other equipment.
  • the electronic equipment 800 may access a radio network based on a communication standard such as WiFi, 2G, 3G, . . . , or a combination thereof.
  • the communication component 816 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel.
  • the communication component 816 further includes a Near Field Communication (NFC) module for short-range communication.
  • the NFC module may be realized based on Radio Frequency Identification (RFID), Infrared Data Association (IrDA), Ultra-WideBand (UWB) technology, BlueTooth (BT) technology, and other technologies.
  • the electronic equipment 800 may be realized by one or more of Application Specific Integrated Circuits (ASIC), Digital Signal Processors (DSP), Digital Signal Processing Device (DSPD), Programmable Logic Devices (PLD), Field Programmable Gate Arrays (FPGA), controllers, microcontrollers, microprocessors or other electronic components, to implement the method.
  • a non-transitory computer-readable storage medium including instructions, such as the memory 804 including instructions, is further provided.
  • the instructions may be executed by the processor 820 of the electronic equipment 800 to implement a step of the method for processing an image of an example herein.
  • the non-transitory computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a Compact Disc Read-Only Memory (CD-ROM), a magnetic tape, a floppy disk, optical data storage equipment, etc.
  • Examples of the present disclosure further disclose a non-transitory computer-readable storage medium having stored therein instructions which, when executed by a processor of electronic equipment, allow the electronic equipment to implement a control method.
  • the method includes:
  • acquiring the at least two crop images by cropping the at least two images for the face-containing images includes:
  • determining the first region based on the feature points includes:
  • generating the first triangular patch deformation frame image sequence from the first image to the second image, and the second triangular patch deformation frame image sequence from the second image to the first image includes:
  • computing the first feature point location track from the first image to the second image according to the first location difference includes:
  • s_{i:A→B} = (i / N) · D_{A→B} + s_A.
  • the s_{i:A→B} is the first feature point location track
  • the N is a number of transform frames transforming the first image into the second image
  • the i is an ith frame image in transform image frames
  • the i is an integer greater than 0 and less than or equal to the N
  • the s_A is the first coordinate set
  • the D_{A→B} is the first location difference.
  • Computing the second feature point location track from the second image to the first image according to the second location difference may include computing the second feature point location track as: s_{i:B→A} = (i / N) · D_{B→A} + s_B.
  • the s_{i:B→A} is the second feature point location track
  • the s_B is the second coordinate set
  • the D_{B→A} is the second location difference.
  • performing similarity transformation on each image sequence of the first triangular patch deformation frame image sequence includes:
  • performing similarity transformation on each image sequence of the second triangular patch deformation frame image sequence includes:
  • fusing the first transform frame image sequence and the second transform frame image sequence, acquiring the video frame sequence corresponding to the first image and the second image includes:
  • the first weight being a weight of an ith frame image of the first transform frame image sequence during fusion
  • the second weight being a weight of an ith frame image of the second transform frame image sequence during fusion
  • the i being greater than 0 and less than or equal to a number of frames of the first transform frame image sequence
  • All fusion images corresponding to the first transform frame image sequence and the second transform frame image sequence may form the video frame sequence.
  • the feature points include at least one of:
  • an eye, a nose tip, a mouth corner point, an eyebrow, a cheek, and a contour point of an eye, a nose, lips, an eyebrow, and a cheek.
  • a method for processing an image including:
  • acquiring the at least two crop images by cropping the at least two images for the face-containing images includes:
  • determining the first region based on the feature points includes:
  • generating the first triangular patch deformation frame image sequence from the first image to the second image, and the second triangular patch deformation frame image sequence from the second image to the first image includes:
  • computing the first feature point location track from the first image to the second image according to the first location difference includes:
  • s_{i:A→B} = (i / N) · D_{A→B} + s_A.
  • the s_{i:A→B} is the first feature point location track
  • the N is a number of transform frames transforming the first image into the second image
  • the i is an ith frame image in transform image frames
  • the i is an integer greater than 0 and less than or equal to the N
  • the s_A is the first coordinate set
  • the D_{A→B} is the first location difference.
  • Computing the second feature point location track from the second image to the first image according to the second location difference may include computing the second feature point location track as: s_{i:B→A} = (i / N) · D_{B→A} + s_B.
  • the s_{i:B→A} is the second feature point location track
  • the s_B is the second coordinate set
  • the D_{B→A} is the second location difference.
  • performing similarity transformation on each image sequence of the first triangular patch deformation frame image sequence includes:
  • performing similarity transformation on each image sequence of the second triangular patch deformation frame image sequence includes:
  • fusing the first transform frame image sequence and the second transform frame image sequence, acquiring the video frame sequence corresponding to the first image and the second image includes:
  • the first weight being a weight of an ith frame image of the first transform frame image sequence during fusion
  • the second weight being a weight of an ith frame image of the second transform frame image sequence during fusion
  • the i being greater than 0 and less than or equal to a number of frames of the first transform frame image sequence
  • All fusion images corresponding to the first transform frame image sequence and the second transform frame image sequence may form the video frame sequence.
  • a device for processing an image including:
  • an acquiring unit configured to acquire at least two images
  • a cropping unit configured to acquire at least two crop images by cropping the at least two images for face-containing images
  • a triangular patch deformation unit configured to, of the at least two crop images, perform triangular patch deformation on a first image and a second image that neighbour each other, generating a first triangular patch deformation frame image sequence from the first image to the second image, and a second triangular patch deformation frame image sequence from the second image to the first image;
  • a similarity transformation unit configured to perform similarity transformation on each image sequence of the first triangular patch deformation frame image sequence, acquiring a first transform frame image sequence; and perform similarity transformation on each image sequence of the second triangular patch deformation frame image sequence, acquiring a second transform frame image sequence;
  • a fusing unit configured to fuse the first transform frame image sequence and the second transform frame image sequence, acquiring a video frame sequence corresponding to the first image and the second image;
  • a video generating unit configured to, of the at least two crop images, code a video frame sequence generated by all neighbour images, acquiring a destined video.
  • the cropping unit is further configured to:
  • the cropping unit is further configured to:
  • the triangular patch deformation unit is further configured to:
  • the similarity transformation unit is further configured to:
  • the similarity transformation unit is further configured to:
  • the fusing unit is further configured to:
  • the first weight being a weight of an ith frame image of the first transform frame image sequence during fusion
  • the second weight being a weight of an ith frame image of the second transform frame image sequence during fusion
  • the i being greater than 0 and less than or equal to a number of frames of the first transform frame image sequence
  • All fusion images corresponding to the first transform frame image sequence and the second transform frame image sequence may form the video frame sequence.
  • electronic equipment including a processor and a memory for storing processor executable instructions.
  • the processor is configured to implement a step of the method for processing an image by calling the executable instructions in the memory.
  • a non-transitory computer-readable storage medium having stored therein instructions which, when executed by a processor of electronic equipment, allow the electronic equipment to implement a step of the method for processing an image.
  • a technical solution provided by examples of the present disclosure may include beneficial effects as follows.
  • a face of a portrait to be transformed is cropped. Then, triangular patch deformation is performed. Similarity transformation is performed on each image sequence in the deformation frame image sequence, thereby avoiding errors and jitters in image transform frames, improving the quality of the face transform video, improving face transform stability, and greatly improving user experience.
  • the present disclosure may include dedicated hardware implementations such as application specific integrated circuits, programmable logic arrays and other hardware devices.
  • the hardware implementations can be constructed to implement one or more of the methods described herein. Examples that may include the apparatus and systems of various implementations can broadly include a variety of electronic and computing systems.
  • One or more examples described herein may implement functions using two or more specific interconnected hardware modules or devices with related control and data signals that can be communicated between and through the modules, or as portions of an application-specific integrated circuit. Accordingly, the system disclosed may encompass software, firmware, and hardware implementations.
  • module may include memory (shared, dedicated, or group) that stores code or instructions that can be executed by one or more processors.
  • a module referred to herein may include one or more circuits with or without stored code or instructions.
  • the module or circuit may include one or more components that are connected.

Abstract

At least two images are acquired. At least two crop images are acquired by cropping the at least two images for face-containing images. Triangular patch deformation is performed on two neighbour images, generating a first triangular patch deformation frame image sequence and a second triangular patch deformation frame image sequence. Similarity transformation is performed on each image sequence of the first triangular patch deformation frame image sequence, acquiring a first transform frame image sequence. Similarity transformation is performed on each image sequence of the second triangular patch deformation frame image sequence, acquiring a second transform frame image sequence. The first and the second transform frame image sequences are fused, acquiring a video frame sequence corresponding to the two neighbour images. Of the at least two images, a video frame sequence generated by all neighbour images is coded, acquiring a destined video.

Description

    CROSS-REFERENCE TO RELATED APPLICATIONS
  • This application is based on, and claims the priority to, Chinese Application No. 202011314245.3 filed on Nov. 20, 2020. The content of the Chinese Application is hereby incorporated by reference in its entirety for all purposes.
  • BACKGROUND
  • Some electronic equipment support face transformation. A user may provide several images with faces to generate a high-quality face transform video automatically, providing a novel and fast face special effect experience, with a controllable number of video frames and a controllable speed of face transformation. In a face transformation algorithm, when postures of two faces for transformation differ greatly, geometric transformation tends to lead to a great change in the location of a feature point, which may lead to phenomena such as overlap, misalignment, etc., resulting in an unstable region of a frame image generated by the transformation, as well as subtle jitters in a synthesized video, impacting user experience greatly.
  • SUMMARY
  • The present disclosure may relate to the field of image transform. The present disclosure provides a method for processing an image, electronic equipment, and a storage medium.
  • According to an aspect of examples of the present disclosure, there is provided a method for processing an image, including:
  • acquiring at least two images;
  • acquiring at least two crop images by cropping the at least two images for face-containing images;
  • of the at least two crop images, performing triangular patch deformation on a first image and a second image that neighbour each other, generating a first triangular patch deformation frame image sequence from the first image to the second image, and a second triangular patch deformation frame image sequence from the second image to the first image;
  • performing similarity transformation on each image sequence of the first triangular patch deformation frame image sequence, acquiring a first transform frame image sequence;
  • performing similarity transformation on each image sequence of the second triangular patch deformation frame image sequence, acquiring a second transform frame image sequence;
  • fusing the first transform frame image sequence and the second transform frame image sequence, acquiring a video frame sequence corresponding to the first image and the second image; and
  • of the at least two crop images, coding a video frame sequence generated by all neighbour images, acquiring a destined video.
  • According to an aspect of examples of the present disclosure, there is provided electronic equipment including a processor and a memory for storing processor executable instructions. The processor may be configured to implement acquiring at least two images; acquiring at least two crop images by cropping the at least two images for face-containing images; of the at least two crop images, performing triangular patch deformation on a first image and a second image that neighbour each other, generating a first triangular patch deformation frame image sequence from the first image to the second image, and a second triangular patch deformation frame image sequence from the second image to the first image; performing similarity transformation on each image sequence of the first triangular patch deformation frame image sequence, acquiring a first transform frame image sequence; performing similarity transformation on each image sequence of the second triangular patch deformation frame image sequence, acquiring a second transform frame image sequence; fusing the first transform frame image sequence and the second transform frame image sequence, acquiring a video frame sequence corresponding to the first image and the second image; and of the at least two crop images, coding a video frame sequence generated by all neighbour images, acquiring a destined video.
  • According to an aspect of examples of the present disclosure, there is provided a non-transitory computer-readable storage medium having stored therein instructions which, when executed by a processor of electronic equipment, cause the electronic equipment to implement acquiring at least two images; acquiring at least two crop images by cropping the at least two images for face-containing images; of the at least two crop images, performing triangular patch deformation on a first image and a second image that neighbour each other, generating a first triangular patch deformation frame image sequence from the first image to the second image, and a second triangular patch deformation frame image sequence from the second image to the first image; performing similarity transformation on each image sequence of the first triangular patch deformation frame image sequence, acquiring a first transform frame image sequence; performing similarity transformation on each image sequence of the second triangular patch deformation frame image sequence, acquiring a second transform frame image sequence; fusing the first transform frame image sequence and the second transform frame image sequence, acquiring a video frame sequence corresponding to the first image and the second image; and of the at least two crop images, coding a video frame sequence generated by all neighbour images, acquiring a destined video.
  • It should be understood that the general description above and the detailed description below are illustrative and explanatory only, and do not limit the present disclosure.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The accompanying drawings, which are incorporated in and form a part of this specification, illustrate examples consistent with the present disclosure and, together with the description, serve to explain the principles of the present disclosure.
  • FIG. 1 is a flowchart of a method for processing an image according to examples of the present disclosure.
  • FIG. 2 is a schematic diagram of a structure of a device for processing an image according to examples of the present disclosure.
  • FIG. 3 is a block diagram of electronic equipment according to examples of the present disclosure.
  • DETAILED DESCRIPTION
  • Reference will now be made in detail to illustrative examples, examples of which are illustrated in the accompanying drawings. The following description refers to the accompanying drawings, in which the same numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations set forth in the following description of illustrative examples do not represent all implementations consistent with the present disclosure. Instead, they are merely examples of devices and methods consistent with aspects related to the present disclosure. The illustrative implementation modes may take multiple forms and should not be taken as being limited to the examples illustrated herein. Instead, by providing such implementation modes, examples herein may become more comprehensive and complete, and the comprehensive concept of the illustrative implementation modes may be delivered to those skilled in the art.
  • Note that although terms such as "first", "second", and "third" may be adopted in an example herein to describe various kinds of information, such information should not be limited by these terms. These terms are merely for distinguishing information of the same type. For example, without departing from the scope of the examples herein, the first information may also be referred to as the second information. Similarly, the second information may also be referred to as the first information. Depending on the context, the term "if" as used herein may be interpreted as "when", "while", or "in response to determining that".
  • In addition, described characteristics, structures or features may be combined in one or more implementation modes in any proper manner. In the following descriptions, many details are provided to allow a full understanding of examples herein. However, those skilled in the art will know that the technical solutions of examples herein may be carried out without one or more of the details; alternatively, another method, component, device, option, etc., may be adopted. Under other conditions, no detail of a known structure, method, device, implementation, material or operation may be shown or described to avoid obscuring aspects of examples herein.
  • A block diagram shown in the accompanying drawings may be a functional entity which may not necessarily correspond to a physically or logically independent entity. Such a functional entity may be implemented in form of software, in one or more hardware modules or integrated circuits, or in different networks and/or processor devices and/or microcontroller devices.
  • A terminal may sometimes be referred to as a smart terminal. The terminal may be a mobile terminal. The terminal may also be referred to as User Equipment (UE), a Mobile Station (MS), etc. A terminal may be equipment or a chip provided therein that provides a user with a voice and/or data connection, such as handheld equipment, onboard equipment, etc., with a wireless connection function. Examples of a terminal may include a mobile phone, a tablet computer, a notebook computer, a palm computer, a Mobile Internet Device (MID), wearable equipment, Virtual Reality (VR) equipment, Augmented Reality (AR) equipment, a wireless terminal in industrial control, a wireless terminal in unmanned drive, a wireless terminal in remote surgery, a wireless terminal in a smart grid, a wireless terminal in transportation safety, a wireless terminal in smart city, a wireless terminal in smart home, etc.
  • FIG. 1 is a flowchart of a method for processing an image according to examples of the present disclosure. As shown in FIG. 1, a method for processing an image according to examples of the present disclosure includes steps as follows.
  • In S11, at least two images are acquired.
  • In examples of the present disclosure, to perform face transformation, multiple face images to be transformed may be input. After the images have been acquired, face recognition is performed on the at least two images acquired to determine a face in the at least two images. Specifically, in examples of the present disclosure, an input face image is recognized to detect whether there is a face in the image, and to determine whether the face in the image meets a corresponding requirement, for example, so as to select an image with a clear and complete face. The requirement may be, for example, whether a face detection frame output by face recognition intersects an image boundary, whether the size of a recognized face is too small, etc. A face image meeting a corresponding requirement is processed; that is, an image that does not meet the requirement is excluded, such as an image that does not include a face, an image with a face detection frame output by face recognition intersecting an image boundary, or an image in which the size of a recognized face is too small. In examples of the present disclosure, a face in an image may be determined via the face detection frame technology. Since the face is to be transformed, the image content unrelated to the face may be removed; that is, a face in an image may be recognized and cropped via the face detection frame technology. In examples of the present disclosure, a remaining face in the image may further be recognized. When it is determined that the ratio of the region of a face to the entire image is too small, that is, when the face is too small, the small face is removed. When the face is small, its clarity inevitably fails to meet the requirement for viewing, and the rendering effect of a transform video resulting from transforming such a face would be poor. Therefore, when an image is preprocessed, such a small face image is removed.
  • In examples of the present disclosure, a CenterFace network may be used to detect a face in the at least two images to determine whether a face image is included, whether an included face image meets a processing requirement, etc.
  • As an implementation, when two or more faces are included in an image, the face located at the center of the image, or the face with the minimum deviation from the center, is taken as the effective face. The effective face is determined as the face to be processed. In examples of the present disclosure, only one face is kept per image by re-cropping any image that includes multiple faces.
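  • To make the preprocessing and effective-face rules above concrete, the following Python sketch filters detector output accordingly. The function name, the box format, and the min_face_ratio threshold are illustrative assumptions; the detector itself (e.g., CenterFace) is assumed to run upstream.

```python
def select_effective_face(face_boxes, image_shape, min_face_ratio=0.05):
    """Filter detected face boxes per the preprocessing rules and pick
    the effective face (the one with minimum deviation from the center).

    face_boxes: iterable of (x0, y0, x1, y1) boxes from any detector
    (e.g., CenterFace); image_shape: (height, width, ...).
    Returns the chosen box, or None if the image should be excluded.
    """
    h, w = image_shape[:2]
    valid = []
    for (x0, y0, x1, y1) in face_boxes:
        # Exclude a detection frame intersecting the image boundary.
        if x0 <= 0 or y0 <= 0 or x1 >= w or y1 >= h:
            continue
        # Exclude a face whose region is too small relative to the image.
        if (x1 - x0) * (y1 - y0) < min_face_ratio * w * h:
            continue
        valid.append((x0, y0, x1, y1))
    if not valid:
        return None  # no usable face, so this image is excluded

    def center_dist(box):
        bx, by = (box[0] + box[2]) / 2.0, (box[1] + box[3]) / 2.0
        return (bx - w / 2.0) ** 2 + (by - h / 2.0) ** 2

    return min(valid, key=center_dist)
```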
  • In S12, at least two crop images are acquired by cropping the at least two images for face-containing images.
  • Specifically, feature points of a face contained in the at least two images may be determined. A first region may be determined based on the feature points for the face-containing images in the at least two images. The at least two crop images may be acquired by cropping the face-containing images based on the first region.
  • In examples of the present disclosure, after a face image in an image input by a user has been determined, feature points in the face are to be identified. In examples of the present disclosure, processing is to be focused on the feature points in the face when face transformation is performed. For a viewer, the effect of display of the feature points in the face determines the effect of display of the face. Related transformation is to be performed on the feature points in the face to render a transform video more stable and with improved transform effect.
  • In examples of the present disclosure, feature points in a face may include a front feature point such as an eye, a nose tip, a mouth corner point, an eyebrow, a cheek, etc., and may also include a contour point such as an eye, a nose, lips, an eyebrow, a cheek, etc. Of course, if the image includes that of an ear, etc., the ear and a contour point thereof may be determined as a feature point of the face.
  • The first region may be determined based on the feature points. The face-containing images may be cropped based on the first region and the size of the destined object. The crop image may be scaled to the size of the destined object.
  • In examples of the present disclosure, the first region may be determined based on the feature points as follows.
  • A circumscription frame circumscribing the feature points may be determined according to location information of the feature points. A width of the first region may be determined according to a center point of the circumscription frame and an image width boundary of the face to be processed. A height of the first region may be determined according to a preset aspect ratio and the width of the first region. The circumscription frame may be a rectangular circumscription frame, a circular circumscription frame, a polygonal circumscription frame, etc., as long as a clear face image may be acquired. The aim is to locate the face at the center of the image without deforming it. The specific shape of the circumscription frame is not limited in examples of the present disclosure.
  • In examples of the present disclosure, taking a rectangular circumscription frame as an example, the rectangular circumscription frame circumscribing the feature points of the face is determined according to the coordinates of the feature points in the face image. The width and the height of the rectangular frame of the face are denoted by w and h, respectively. In examples of the present disclosure, a Practical Facial Landmark Detector (PFLD) network may be used to locate landmarks of a crop face image to determine the feature points of the face. The coordinates of the center of the rectangular frame of the face are denoted by (xc, yc). The width and the height of the source image are wsrc and hsrc, respectively. The destined width and height of the ultimately generated video are denoted by wdst and hdst. The distances from the center of the rectangular frame of the face to the left boundary and the right boundary of the image are compared. If the center of the rectangular frame of the face is closer to the left boundary, the distance to the left boundary is maintained, and the image is cropped to acquire a width of wcrop=2×xc (corresponding to the first region). If the center of the rectangular frame of the face is closer to the right boundary, the distance to the right boundary is maintained, and the image is cropped to acquire a width of wcrop=2×(wsrc−xc) (corresponding to the first region). In the foregoing example, the bottom left vertex of the image is taken as the origin of the coordinate system.
  • If the destined aspect ratio of the output image is denoted by rdst, then rdst = wdst/hdst. The height of the crop image (corresponding to the first region) is computed as hcrop = wcrop/rdst.
  • As an implementation, the image may be cropped first in the height direction. That is, the smaller of the distances from the center to the upper boundary and the lower boundary of the image is kept, and the image is cropped at the opposite side accordingly. The crop image to be processed is then scaled using a scaling ratio computed from its height and the height of the destined image.
  • In an implementation, if the width and the height of the first region do not meet the destined width and height, the image acquired may be cropped via the first region, and scaled to the width wdst and the height hdst.
  • It should be noted that by cropping a face image according to the above cropping rule, the face may be made to be located as close to the center of the image as possible without distorting and deforming the face, which meets popular aesthetics.
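  • As an illustration of the cropping rule above, a minimal Python/OpenCV sketch follows. The function name and the (n, 2) landmark array format are assumptions; only the geometry follows the rule described above, including the height-direction handling.

```python
import cv2

def crop_face_image(image, landmarks, dst_w, dst_h):
    """Crop so the face stays centered without deformation, then scale.

    image: H x W x 3 source image; landmarks: (n, 2) NumPy array of face
    feature point coordinates (e.g., from a PFLD-style detector).
    """
    h_src, w_src = image.shape[:2]
    # Rectangular frame circumscribing the feature points.
    x_min, y_min = landmarks.min(axis=0)
    x_max, y_max = landmarks.max(axis=0)
    xc = (x_min + x_max) / 2.0
    yc = (y_min + y_max) / 2.0
    # Keep the distance to the nearer left/right boundary: the crop
    # width is twice the smaller distance, so the face stays centered.
    w_crop = 2.0 * min(xc, w_src - xc)
    # Height from the destined aspect ratio r_dst = dst_w / dst_h.
    h_crop = w_crop * dst_h / dst_w
    # Analogously respect the nearer top/bottom boundary.
    half_h = min(h_crop / 2.0, yc, h_src - yc)
    x0 = int(round(xc - w_crop / 2.0))
    x1 = int(round(xc + w_crop / 2.0))
    y0 = int(round(yc - half_h))
    y1 = int(round(yc + half_h))
    crop = image[max(y0, 0):y1, max(x0, 0):x1]
    # Scale the crop to the destined width and height.
    return cv2.resize(crop, (dst_w, dst_h))
```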
  • In S13, of the at least two crop images, triangular patch deformation is performed on a first image and a second image that neighbour each other, generating a first triangular patch deformation frame image sequence from the first image to the second image, and a second triangular patch deformation frame image sequence from the second image to the first image.
  • In examples of the present disclosure, according to a first coordinate set of feature points of a face of the first image and a second coordinate set of feature points of a face of the second image, a second similarity transformation matrix and a first similarity transformation matrix between the first coordinate set and the second coordinate set may be computed.
  • A first location difference between a feature point of the first image transformed by the first similarity transformation matrix and a corresponding feature point of the second image may be determined. A second location difference between a feature point of the second image transformed by the second similarity transformation matrix and a corresponding feature point of the first image may be determined.
  • A first feature point location track from the first image to the second image may be computed according to the first location difference. A second feature point location track from the second image to the first image may be computed according to the second location difference.
  • The first triangular patch deformation frame image sequence may be acquired according to the first feature point location track. The second triangular patch deformation frame image sequence may be acquired according to the second feature point location track.
  • The first feature point location track from the first image to the second image may be computed according to the first location difference as follows.
  • The first feature point location track may be computed as:
  • si:A→B = (i/N) × DA→B + sA.
  • The si:A→B may be the first feature point location track. The N may be a number of transform frames transforming the first image into the second image. The i may denote the ith frame image among the transform image frames, and may be an integer greater than 0 and less than or equal to the N. The sA may be the first coordinate set. The DA→B may be the first location difference.
  • The second feature point location track from the second image to the first image may be computed according to the second location difference as follows.
  • The second feature point location track may be computed as:
  • si:B→A = (i/N) × DB→A + sB.
  • The si:B→A may be the second feature point location track. The sB may be the second coordinate set. The DB→A may be the second location difference.
  • In examples of the present disclosure, the first image and the second image that neighbour each other may be images A and B, respectively, for example, merely to illustrate the nature of the technical solution of examples of the present disclosure, instead of limiting the technical means thereof.
  • Specifically, after the acquired images have been cropped, triangular patch deformation is performed on neighbour images among the crop images, to generate, for each pair of neighbour images such as the neighbour images A and B, a triangular patch deformation frame image sequence from image A to image B and a triangular patch deformation frame image sequence from image B to image A, specifically as follows.
  • According to the coordinate sets of the feature points of the faces of images A and B, denoted by sA={sa1, sa2, . . . , san} (corresponding to the first coordinate set) and sB={sb1, sb2, . . . , sbn} (corresponding to the second coordinate set), respectively, with n denoting the number of the feature points, the similarity transformation matrices between sA and sB, i.e., between the two neighbour face images A and B, including the similarity transformation matrix from A to B (corresponding to the first similarity transformation matrix) and the similarity transformation matrix from B to A (corresponding to the second similarity transformation matrix), denoted by TA→B and TB→A, respectively, may be solved using a similarity transformation solution such as the estimateAffinePartial2D function (as an implementation). The difference between the location of a feature point on image A, subject to similarity transformation, and the location of the corresponding feature point on image B, denoted by DA→B (corresponding to the first location difference), is computed as:
  • DA→B = sA × TA→B − sB.
  • Similarly, the difference between the location of a feature point on image B, subject to similarity transformation, and the location of the corresponding feature point on image A, denoted by DB→A (corresponding to the second location difference), is computed as:
  • DB→A = sB × TB→A − sA.
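  • A minimal sketch of this step, using OpenCV's estimateAffinePartial2D mentioned above as one possible implementation; treating the location difference as a pointwise subtraction after a homogeneous transform, and the helper names, are my assumptions.

```python
import cv2
import numpy as np

def similarity_and_difference(s_a, s_b):
    """Solve T_{A->B}, T_{B->A} and the location differences D.

    s_a, s_b: (n, 2) float arrays of corresponding face feature points.
    """
    # estimateAffinePartial2D returns a 2x3 similarity transform
    # (rotation, uniform scale, translation) plus an inlier mask.
    t_ab, _ = cv2.estimateAffinePartial2D(s_a, s_b)
    t_ba, _ = cv2.estimateAffinePartial2D(s_b, s_a)

    def apply(t, pts):
        # Apply a 2x3 transform to (n, 2) points homogeneously.
        ones = np.ones((pts.shape[0], 1))
        return np.hstack([pts, ones]) @ t.T

    # D_{A->B}: transformed A points minus the corresponding B points.
    d_ab = apply(t_ab, s_a) - s_b
    # D_{B->A}: transformed B points minus the corresponding A points.
    d_ba = apply(t_ba, s_b) - s_a
    return t_ab, t_ba, d_ab, d_ba
```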
  • The destined location track of the triangular change of the feature points is acquired by breaking the location difference down frame by frame. Assume that a set number of N transform image frames (corresponding to the number of transform frames) is generated between A and B. As an implementation, the N may take a value such as 25, 30, 35, or 40. Then, the feature point location track (corresponding to the first feature point location track) transforming the face image A into the face image B may be computed according to DA→B, as:
  • si:A→B = (i/N) × DA→B + sA
  • The si:A→B denotes the location of the feature point of the ith frame image transforming A into B by triangular deformation. The i is an integer greater than 0 and less than or equal to the N. Similarly, the feature point location track (corresponding to the second feature point location track) transforming the face image B to the face image A may be computed according to DB→A, as:
  • si:B→A = (i/N) × DB→A + sB
  • The si:B→A denotes the location of the feature point of the ith frame image transforming B into A by triangular deformation.
  • Triangular patch partitioning is performed using a triangular patch partitioning method according to the feature point location track si:A→B transforming the image A into the image B by triangular patch deformation. In examples of the present disclosure, as the triangular patch partitioning method, a Delaunay triangulation algorithm may be used to triangulate the image. In order to ensure synchronous deformation of the background and the face, the midpoints of the four sides and the four vertices of the image A are included, giving a total of n+8 points, to triangulate the image A (including the face and the background). The feature points on image A are deformed to the destined location track using the triangular patch deformation method, acquiring a deformation frame image sequence from the image A to the image B, denoted by FA→B={f1, f2, . . . , fN}. The fi denotes the ith frame image acquired by performing triangle deformation from the image A.
  • Similarly, a deformation frame image sequence from the image B to the image A, denoted by HB→A={h1, h2, . . . , hN}, is acquired according to the feature point location track si:B→A transforming the image B into the image A. The hi denotes the ith frame image acquired by performing triangle deformation from the image B. The N denotes the number of frame images generated by transforming the image A into the image B or transforming the image B to the image A.
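  • The triangulation and per-frame warp might be sketched as follows, assuming scipy's Delaunay triangulation and per-triangle affine warps in OpenCV; the eight extra boundary points mirror the description above, while the helper structure and the whole-image-per-triangle warp (simple but not the most efficient choice) are illustrative.

```python
import cv2
import numpy as np
from scipy.spatial import Delaunay

def boundary_points(w, h):
    # Four vertices plus the midpoints of the four sides, so that the
    # background deforms in sync with the face.
    return np.array([[0, 0], [w - 1, 0], [0, h - 1], [w - 1, h - 1],
                     [w // 2, 0], [w // 2, h - 1],
                     [0, h // 2], [w - 1, h // 2]], dtype=np.float32)

def warp_frame(image, src_pts, dst_pts, triangles):
    """Warp image so that src_pts move to dst_pts, triangle by triangle."""
    out = np.zeros_like(image)
    h, w = image.shape[:2]
    for simplex in triangles:
        src_tri = src_pts[simplex].astype(np.float32)
        dst_tri = dst_pts[simplex].astype(np.float32)
        m = cv2.getAffineTransform(src_tri, dst_tri)
        warped = cv2.warpAffine(image, m, (w, h))
        mask = np.zeros((h, w), dtype=np.uint8)
        cv2.fillConvexPoly(mask, np.round(dst_tri).astype(np.int32), 1)
        out[mask == 1] = warped[mask == 1]
    return out

def deformation_sequence(image, s_src, d_src, n_frames):
    """Frames f_1..f_N moving the face feature points along the track
    s_i = (i / N) x D + s_src, with the 8 boundary points held fixed."""
    h, w = image.shape[:2]
    extra = boundary_points(w, h)
    src = np.vstack([s_src.astype(np.float32), extra])
    tri = Delaunay(src).simplices  # triangulate once on the source layout
    frames = []
    for i in range(1, n_frames + 1):
        dst_face = (i / n_frames) * d_src + s_src
        dst = np.vstack([dst_face.astype(np.float32), extra])
        frames.append(warp_frame(image, src, dst, tri))
    return frames
```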
  • In S14, similarity transformation is performed on each image sequence of the first triangular patch deformation frame image sequence, acquiring a first transform frame image sequence. Similarity transformation is performed on each image sequence of the second triangular patch deformation frame image sequence, acquiring a second transform frame image sequence.
  • Similarity transformation may be performed on each image sequence of the first triangular patch deformation frame image sequence, acquiring the first transform frame image sequence, as follows.
  • A first similarity transformation matrix transforming the first image into the second image may be determined.
  • A step interval of a first transform parameter of the first similarity transformation matrix may be determined according to a number of transform frames transforming the first image into the second image.
  • A first similarity transformation sub-matrix for each transform frame image sequence of the first transform frame image sequence may be computed based on the first similarity transformation matrix and the step interval of the first transform parameter.
  • The first transform frame image sequence may be acquired by performing multiplication on each triangular patch deformation frame image in the first triangular patch deformation frame image sequence according to the first similarity transformation sub-matrix.
  • Similarity transformation may be performed on each image sequence of the second triangular patch deformation frame image sequence, acquiring the second transform frame image sequence, as follows.
  • A second similarity transformation sub-matrix may be acquired by performing inverse transformation on the first similarity transformation sub-matrix. The second transformation frame image sequence may be acquired by performing multiplication on each triangular patch deformation frame image in the second triangular patch deformation frame image sequence according to the second similarity transformation sub-matrix.
  • Alternatively, the second transform frame image sequence may be acquired as follows.
  • A second similarity transformation matrix transforming the second image into the first image may be determined.
  • A step interval of a second transform parameter of the second similarity transformation matrix may be determined according to a number of transform frames transforming the second image into the first image.
  • A second similarity transformation sub-matrix for each transform frame image sequence of the second transform frame image sequence may be computed based on the second similarity transformation matrix and the step interval of the second transform parameter.
  • The second transformation frame image sequence may be acquired by performing multiplication on each triangular patch deformation frame image in the second triangular patch deformation frame image sequence according to the second similarity transformation sub-matrix.
  • In examples of the present disclosure, as the transformation between two images is implemented through intermediate image frames, the number of transform frames transforming the second image into the first image is the same as the number of transform frames transforming the first image into the second image.
  • The similarity transformation matrix TB→A (corresponding to the second similarity transformation matrix) transforming the image B into the image A may be expressed as:
  • TB→A =
    [ s×r11  s×r12  tx
      s×r21  s×r22  ty
      0      0      1 ]
  • The s denotes a scaling factor. The r11, r12, r21, and r22 denote rotation factors. The tx and the ty denote translation factors.
  • A matrix parameter is decomposed, acquiring a parameter step interval for each frame transformation. The parameter (corresponding to the second transformation parameter) may include a scaling parameter, a rotation parameter, a translation parameter, etc. The computation formula is as follows:

  • Δs = e^(−log(s)/N)
  • The Δs denotes the step interval of the scaling parameter. The e denotes the base of the natural logarithm. With this step interval, s×Δs^N = 1, so that the scaling is removed completely over the N frames.
  • θ = arctan(r21/r11), Δr = −θ/N
  • The Δr denotes the parameter step interval of the rotation parameter. The θ denotes the rotation angle corresponding to the rotation factors.
  • Δtx = −tx/N, Δty = −ty/N
  • The Δtx and the Δty denote the parameter step intervals of the translation parameters in the x direction and the y direction, respectively.
  • The similarity transformation matrix, corresponding to each frame similarity transformation transforming the image B into the image A, is solved according to the parameter step intervals of each frame transformation described above, with a specific construction as follows:
  • Ti:B→A =
    [ (s×Δs^i)×cos(θ+Δr×i)   (s×Δs^i)×sin(θ+Δr×i)   tx+Δtx×i
      −(s×Δs^i)×sin(θ+Δr×i)  (s×Δs^i)×cos(θ+Δr×i)   ty+Δty×i
      0                       0                       1 ]
  • The Ti:B→A denotes the similarity transformation matrix (corresponding to the second similarity transformation sub-matrix) corresponding to similarity transformation transforming the image B into the ith frame of A. The similarity transformation matrix (corresponding to the first similarity transformation sub-matrix) corresponding to similarity transformation transforming the image A into the ith frame of B may further be acquired based on Ti:B→A, as follows:

  • Ti:A→B = Ti:B→A × TB→A −1
  • The Ti:A→B denotes the similarity transformation matrix (corresponding to the first similarity transformation sub-matrix) corresponding to transforming the image A into the ith frame of the image B. The TB→A −1 denotes the inverse matrix of the similarity transformation matrix TB→A transforming the image B into the image A. It should be noted that the similarity transformation matrix corresponding to transforming the image B into the ith frame of the image A may be determined based on the inverse matrix of the similarity transformation matrix corresponding to transforming the image A into the ith frame of the image B. Alternatively, the similarity transformation matrix corresponding to transforming the image B into the ith frame of the image A may be computed by computing the inverse of the similarity transformation matrix corresponding to transforming the image A into the ith frame of the image B, details of which are not repeated.
  • A similarity transformation frame image sequence FTA→B={ft1, ft2, . . . , ftN} is acquired by performing similarity transformation on the ith frame of the deformation frame image sequence FA→B according to Ti:A→B. The fti denotes the ith frame image acquired by transforming the image A. Similarly, a similarity transformation frame image sequence HTB→A={ht1, ht2, . . . , htN} is acquired by performing similarity transformation on the ith frame of the deformation frame image sequence HB→A according to Ti:B→A. The hti denotes the ith frame image acquired by transforming the image B.
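  • The per-frame matrix construction could be sketched as below; the decomposition assumes the matrix layout shown above, and the scaling step uses Δs = s^(−1/N), equivalently e^(−log(s)/N), so that frame N reduces to the identity, consistent with the rotation and translation steps.

```python
import numpy as np

def per_frame_similarity(t_ba, n_frames):
    """Build T_{i:B->A} for i = 1..N from a 3x3 similarity matrix T_{B->A}.

    Assumes the layout shown above: uniform scale s, rotation angle
    theta (sin in row 0, -sin in row 1), and translation (tx, ty).
    """
    s = np.hypot(t_ba[0, 0], t_ba[0, 1])
    theta = np.arctan2(t_ba[0, 1], t_ba[0, 0])
    tx, ty = t_ba[0, 2], t_ba[1, 2]
    # Step intervals: ds = s**(-1/N), dr = -theta/N, dtx = -tx/N,
    # dty = -ty/N, so that the frame-N matrix is the identity.
    ds = s ** (-1.0 / n_frames)
    dr = -theta / n_frames
    dtx, dty = -tx / n_frames, -ty / n_frames
    mats = []
    for i in range(1, n_frames + 1):
        si = s * ds ** i
        a = theta + dr * i
        mats.append(np.array([
            [si * np.cos(a),  si * np.sin(a), tx + dtx * i],
            [-si * np.sin(a), si * np.cos(a), ty + dty * i],
            [0.0, 0.0, 1.0]]))
    return mats

# The opposite direction follows from the inverse relation above:
# T_{i:A->B} = T_{i:B->A} @ np.linalg.inv(T_{B->A})
```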
  • In S15, the first transform frame image sequence and the second transform frame image sequence are fused, acquiring a video frame sequence corresponding to the first image and the second image.
  • The first transform frame image sequence and the second transform frame image sequence may be fused as follows.
  • A first weight and a second weight may be determined. The first weight may be a weight of an ith frame image of the first transform frame image sequence during fusion. The second weight may be a weight of an ith frame image of the second transform frame image sequence during fusion. The i may be greater than 0 and less than or equal to a number of frames of the first transform frame image sequence.
  • Each pixel of the ith frame image of the first transform frame image sequence may be multiplied by the first weight, acquiring a first to-be-fused image. The ith frame image of the second transform frame image sequence may be multiplied by the second weight, acquiring a second to-be-fused image.
  • Each pixel in the first to-be-fused image and each pixel in the second to-be-fused image may be superimposed, respectively, acquiring an ith frame fusion image corresponding to the ith frame image of the first transform frame image sequence and the ith frame image of the second transform frame image sequence.
  • All fusion images corresponding to the first transform frame image sequence and the second transform frame image sequence may form the video frame sequence.
  • Specifically, image fusion is performed on the transform frame image sequence FTA→B (corresponding to the first transform frame image sequence) from image A to image B and the transform frame image sequence HTB→A (corresponding to the second transform frame image sequence) from image B to image A, acquiring the video frame sequence, denoted by Q={q1, q2, . . . , qN} (corresponding to the video frame sequence corresponding to the first image and the second image). The qi denotes the ith frame image of the video frame sequence. Then, a fusion formula may be expressed as:
  • wi = 1/(1 + e^(8×(i/N) − 4))
  • qi = wi × fti + (1 − wi) × hti
  • The wi denotes the weight of the weighted fusion. The non-linear design of the wi may make the fusion process uneven, and the generated transform video more rhythmic.
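  • A sketch of the fusion step, assuming the sigmoid-shaped weight reconstructed above; the frame lists and uint8 frames are illustrative assumptions.

```python
import numpy as np

def fuse_sequences(ft_frames, ht_frames):
    """Weighted fusion of the two transform frame image sequences.

    Uses the non-linear (sigmoid-shaped) per-frame weight so that the
    fusion is uneven and the transform video more rhythmic.
    """
    n = len(ft_frames)
    fused = []
    for i in range(1, n + 1):
        w = 1.0 / (1.0 + np.exp(8.0 * i / n - 4.0))
        q = w * ft_frames[i - 1].astype(np.float32) \
            + (1.0 - w) * ht_frames[i - 1].astype(np.float32)
        fused.append(np.clip(q, 0, 255).astype(np.uint8))
    return fused
```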
  • In S16, of the at least two crop images, the video frame sequences generated by all neighbour images are coded, acquiring a destined video.
  • In examples of the present disclosure, the video frame sequences generated by all neighbour images, such as the image sequence Q generated by the images A and B, are coded according to a set frame rate to synthesize the final face transform special-effect video.
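  • Coding the frames at a set frame rate might look as follows with OpenCV's VideoWriter; the mp4v codec, the default frame rate, and the function name are assumptions.

```python
import cv2

def encode_video(frames, path, fps=25):
    """Code a video frame sequence at a set frame rate.

    frames: list of BGR uint8 frames of identical size.
    """
    h, w = frames[0].shape[:2]
    fourcc = cv2.VideoWriter_fourcc(*"mp4v")
    writer = cv2.VideoWriter(path, fourcc, fps, (w, h))
    for frame in frames:
        writer.write(frame)
    writer.release()
```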
  • The technical solution of examples of the present disclosure may be applied to various face transformation applications. A user may input a profile image of himself or herself and an image of a transform object such as a star. With the method for processing an image of examples of the present disclosure, the profile image of the user may be transformed gradually into the profile image of the star by video transformation. The user may also input any profile images. For example, the user may input profile images of himself or herself from infancy, childhood, the teenage years, adulthood, . . . , up to the current period, respectively. With the method for processing an image of examples of the present disclosure, the user may then view a video of himself or herself gradually transforming from infancy to the current image. The technical solution of examples of the present disclosure may be transplanted to electronic equipment such as a mobile phone, a notebook computer, a game machine, a tablet computer, a personal digital assistant, a television, etc.
  • FIG. 2 is a schematic diagram of a structure of a device for processing an image according to examples of the present disclosure. As shown in FIG. 2, the device for processing an image according to examples of the present disclosure includes a unit as follows.
  • An acquiring unit 20 is configured to acquire at least two images.
  • A cropping unit 21 is configured to acquire at least two crop images by cropping the at least two images for face-containing images.
  • A triangular patch deformation unit 22 is configured to, of the at least two crop images, perform triangular patch deformation on a first image and a second image that neighbour each other, generating a first triangular patch deformation frame image sequence from the first image to the second image, and a second triangular patch deformation frame image sequence from the second image to the first image.
  • A similarity transformation unit 23 is configured to perform similarity transformation on each image sequence of the first triangular patch deformation frame image sequence, acquiring a first transform frame image sequence; and perform similarity transformation on each image sequence of the second triangular patch deformation frame image sequence, acquiring a second transform frame image sequence.
  • A fusing unit 24 is configured to fuse the first transform frame image sequence and the second transform frame image sequence, acquiring a video frame sequence corresponding to the first image and the second image.
  • A video generating unit 25 is configured to, of the at least two crop images, code a video frame sequence generated by all neighbour images, acquiring a destined video.
  • As an implementation, the cropping unit 21 is further configured to:
  • determine feature points of a face contained in the at least two images;
  • determine a first region based on the feature points for the face-containing images in the at least two images; and
  • acquire the at least two crop images by cropping the face-containing images based on the first region.
  • As an implementation, the cropping unit 21 is further configured to:
  • determine a circumscription frame circumscribing the feature points according to location information of the feature points;
  • determine a width of the first region according to a center point of the circumscription frame and an image width boundary of the face to be processed; and
  • determine a height of the first region according to a preset aspect ratio and the width of the first region.
  • As an implementation, the triangular patch deformation unit 22 is further configured to:
  • compute, according to a first coordinate set of feature points of a face of the first image and a second coordinate set of feature points of a face of the second image, a second similarity transformation matrix and a first similarity transformation matrix between the first coordinate set and the second coordinate set;
  • determine a first location difference between a feature point of the first image transformed by the first similarity transformation matrix and a corresponding feature point of the second image; and determine a second location difference between a feature point of the second image transformed by the second similarity transformation matrix and a corresponding feature point of the first image;
  • compute a first feature point location track from the first image to the second image according to the first location difference; and compute a second feature point location track from the second image to the first image according to the second location difference; and
  • acquire the first triangular patch deformation frame image sequence according to the first feature point location track, and acquire the second triangular patch deformation frame image sequence according to the second feature point location track.
  • As an implementation, the triangular patch deformation unit 22 is further configured to:
  • compute the first feature point location track as:
  • si:A→B = (i/N) × DA→B + sA.
  • The si:A→B is the first feature point location track. The N is a number of transform frames transforming the first image into the second image. The i denotes an ith frame image among the transform image frames. The i is an integer greater than 0 and less than or equal to the N. The sA is the first coordinate set. The DA→B is the first location difference.
  • The triangular patch deformation unit is further configured to compute the second feature point location track as:
  • si:B→A = (i/N) × DB→A + sB.
  • The si:B→A is the second feature point location track. The sB is the second coordinate set. The DB→A is the second location difference.
  • As an implementation, the similarity transformation unit 23 is further configured to:
  • determine a first similarity transformation matrix transforming the first image into the second image;
  • determine a step interval of a first transform parameter of the first similarity transformation matrix according to a number of transform frames transforming the first image into the second image;
  • compute a first similarity transformation sub-matrix for each transform frame image sequence of the first transform frame image sequence based on the first similarity transformation matrix and the step interval of the first transform parameter; and
  • acquire the first transform frame image sequence by performing multiplication on each triangular patch deformation frame image in the first triangular patch deformation frame image sequence according to the first similarity transformation sub-matrix.
  • The similarity transformation unit 23 may be further configured to:
  • acquire a second similarity transformation sub-matrix by performing inverse transformation on the first similarity transformation sub-matrix; and acquire the second transformation frame image sequence by performing multiplication on each triangular patch deformation frame image in the second triangular patch deformation frame image sequence according to the second similarity transformation sub-matrix.
  • Alternatively, the similarity transformation unit may be further configured to:
  • determine a second similarity transformation matrix transforming the second image into the first image; determine a step interval of a second transform parameter of the second similarity transformation matrix according to a number of transform frames transforming the second image into the first image; compute a second similarity transformation sub-matrix for each transform frame image sequence of the second transform frame image sequence based on the second similarity transformation matrix and the step interval of the second transform parameter; and acquire the second transformation frame image sequence by performing multiplication on each triangular patch deformation frame image in the second triangular patch deformation frame image sequence according to the second similarity transformation sub-matrix.
  • As an implementation, the fusing unit 24 is further configured to:
  • determine a first weight and a second weight, the first weight being a weight of an ith frame image of the first transform frame image sequence during fusion, the second weight being a weight of an ith frame image of the second transform frame image sequence during fusion, the i being greater than 0 and less than or equal to a number of frames of the first transform frame image sequence;
  • multiply each pixel of the ith frame image of the first transform frame image sequence by the first weight, acquiring a first to-be-fused image; multiply the ith frame image of the second transform frame image sequence by the second weight, acquiring a second to-be-fused image; and
  • respectively superimpose each pixel in the first to-be-fused image and each pixel in the second to-be-fused image, acquiring an ith frame fusion image corresponding to the ith frame image of the first transform frame image sequence and the ith frame image of the second transform frame image sequence.
  • All fusion images corresponding to the first transform frame image sequence and the second transform frame image sequence may form the video frame sequence.
  • As an implementation, the feature points include at least one of:
  • an eye, a nose tip, a mouth corner point, an eyebrow, a cheek, and a contour point of an eye, a nose, lips, an eyebrow, and a cheek.
  • In an example, the acquiring unit 20, the cropping unit 21, the triangular patch deformation unit 22, the similarity transformation unit 23, the fusing unit 24, and the video generating unit 25, etc., may be implemented by one or more Central Processing Units (CPU), Graphics Processing Units (GPU), baseband processors (BP), Application Specific Integrated Circuits (ASIC), DSPs, Programmable Logic Devices (PLD), Complex Programmable Logic Devices (CPLD), Field-Programmable Gate Arrays (FPGA), general purpose processors, controllers, Micro Controller Units (MCU), microprocessors, or other electronic components, or may be implemented in conjunction with one or more radio frequency (RF) antennas, to implement the foregoing device for processing an image.
  • A module or unit of the device for processing an image according to an aforementioned example herein may perform an operation in the mode elaborated in an aforementioned example of the method herein, which will not be repeated here.
  • FIG. 3 is a block diagram of electronic equipment 800 according to an illustrative example. As shown in FIG. 3, the electronic equipment 800 supports multi-screen output. The electronic equipment 800 may include one or more components as follows: a processing component 802, a memory 804, a power component 806, a multimedia component 808, an audio component 810, an Input/Output (I/O) interface 812, a sensor component 814, and a communication component 816.
  • The processing component 802 generally controls the overall operation of the electronic equipment 800, such as operations associated with display, a telephone call, data communication, a camera operation, a recording operation, etc. The processing component 802 may include one or more processors 820 to execute instructions so as to complete all or some steps of the method. In addition, the processing component 802 may include one or more modules to facilitate interaction between the processing component 802 and other components. For example, the processing component 802 may include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
  • The memory 804 is configured to store various types of data to support operation on the electronic equipment 800. Examples of these data include instructions of any application or method configured to operate on the electronic equipment 800, contact data, phonebook data, messages, images, videos, and/or the like. The memory 804 may be realized by any type of volatile or non-volatile storage equipment or combination thereof, such as Static Random Access Memory (SRAM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Erasable Programmable Read-Only Memory (EPROM), Programmable Read-Only Memory (PROM), Read-Only Memory (ROM), magnetic memory, flash memory, magnetic disk, or compact disk.
  • The power component 806 supplies electric power to various components of the electronic equipment 800. The power component 806 may include a power management system, one or more power supplies, and other components related to generating, managing and distributing electric power for the electronic equipment 800.
  • The multimedia component 808 includes a screen providing an output interface between the electronic equipment 800 and a user. The screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a TP, the screen may be realized as a touch screen to receive an input signal from a user. The TP includes one or more touch sensors for sensing touch, slide and gestures on the TP. The touch sensors not only may sense the boundary of a touch or slide move, but also detect the duration and pressure related to the touch or slide move. In some examples, the multimedia component 808 includes a front camera and/or a rear camera. When the electronic equipment 800 is in an operation mode such as a shooting mode or a video mode, the front camera and/or the rear camera may receive external multimedia data. Each of the front camera and/or the rear camera may be a fixed optical lens system or may have a focal length and be capable of optical zooming.
  • The audio component 810 is configured to output and/or input an audio signal. For example, the audio component 810 includes a microphone (MIC). When the electronic equipment 800 is in an operation mode such as a call mode, a recording mode, and a voice recognition mode, the MIC is configured to receive an external audio signal. The received audio signal may be further stored in the memory 804 or may be sent via the communication component 816. In some examples, the audio component 810 further includes a loudspeaker configured to output the audio signal.
  • The I/O interface 812 provides an interface between the processing component 802 and a peripheral interface module. The peripheral interface module may be a keypad, a click wheel, a button, etc. These buttons may include but are not limited to: a homepage button, a volume button, a start button, and a lock button.
  • The sensor component 814 includes one or more sensors for assessing various states of the electronic equipment 800. For example, the sensor component 814 may detect an on/off state of the electronic equipment 800 and relative positioning of components such as the display and the keypad of the electronic equipment 800. The sensor component 814 may further detect a change in the location of the electronic equipment 800 or of a component of the electronic equipment 800, whether there is contact between the electronic equipment 800 and a user, the orientation or acceleration/deceleration of the electronic equipment 800, and a change in the temperature of the electronic equipment 800. The sensor component 814 may include a proximity sensor configured to detect existence of a nearby object without physical contact. The sensor component 814 may further include an optical sensor such as a Complementary Metal-Oxide-Semiconductor (CMOS) or Charge-Coupled-Device (CCD) image sensor used in an imaging application. In some examples, the sensor component 814 may further include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
  • The communication component 816 is configured to facilitate wired or wireless/radio communication between the electronic equipment 800 and other equipment. The electronic equipment 800 may access a radio network based on a communication standard such as WiFi, 2G, 3G, . . . , or a combination thereof. In an illustrative example, the communication component 816 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In an illustrative example, the communication component 816 further includes a Near Field Communication (NFC) module for short-range communication. For example, the NFC module may be realized based on Radio Frequency Identification (RFID), Infrared Data Association (IrDA), Ultra-WideBand (UWB) technology, BlueTooth (BT) technology, and other technologies.
  • In an illustrative example, the electronic equipment 800 may be realized by one or more of Application Specific Integrated Circuits (ASIC), Digital Signal Processors (DSP), Digital Signal Processing Device (DSPD), Programmable Logic Devices (PLD), Field Programmable Gate Arrays (FPGA), controllers, microcontrollers, microprocessors or other electronic components, to implement the method.
  • In an illustrative example, a non-transitory computer-readable storage medium including instructions, such as the memory 804 including instructions, is further provided. The instructions may be executed by the processor 820 of the electronic equipment 800 to implement a step of the method for processing an image of an example herein. For example, the non-transitory computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a Compact Disc Read-Only Memory (CD-ROM), a magnetic tape, a floppy disk, optical data storage equipment, etc.
  • Examples of the present disclosure further disclose a non-transitory computer-readable storage medium having stored therein instructions which, when executed by a processor of electronic equipment, cause the electronic equipment to implement a method for processing an image. The method includes:
  • acquiring at least two images;
  • acquiring at least two crop images by cropping the at least two images for face-containing images;
  • of the at least two crop images, performing triangular patch deformation on a first image and a second image that neighbour each other, generating a first triangular patch deformation frame image sequence from the first image to the second image, and a second triangular patch deformation frame image sequence from the second image to the first image;
  • performing similarity transformation on each image sequence of the first triangular patch deformation frame image sequence, acquiring a first transform frame image sequence;
  • performing similarity transformation on each image sequence of the second triangular patch deformation frame image sequence, acquiring a second transform frame image sequence;
  • fusing the first transform frame image sequence and the second transform frame image sequence, acquiring a video frame sequence corresponding to the first image and the second image; and
  • of the at least two crop images, coding a video frame sequence generated by all neighbour images, acquiring a destined video.
  • Optionally, acquiring the at least two crop images by cropping the at least two images for the face-containing images includes:
  • determining feature points of a face contained in the at least two images;
  • determining a first region based on the feature points for the face-containing images in the at least two images; and
  • acquiring the at least two crop images by cropping the face-containing images based on the first region.
  • Optionally, determining the first region based on the feature points includes:
  • determining a circumscription frame circumscribing the feature points according to location information of the feature points;
  • determining a width of the first region according to a center point of the circumscription frame and an image width boundary of the face to be processed; and
  • determining a height of the first region according to a preset aspect ratio and the width of the first region.
  • Optionally, generating the first triangular patch deformation frame image sequence from the first image to the second image, and the second triangular patch deformation frame image sequence from the second image to the first image includes:
  • computing, according to a first coordinate set of feature points of a face of the first image and a second coordinate set of feature points of a face of the second image, a second similarity transformation matrix and a first similarity transformation matrix between the first coordinate set and the second coordinate set;
  • determining a first location difference between a feature point of the first image transformed by the first similarity transformation matrix and a corresponding feature point of the second image; determining a second location difference between a feature point of the second image transformed by the second similarity transformation matrix and a corresponding feature point of the first image;
  • computing a first feature point location track from the first image to the second image according to the first location difference; computing a second feature point location track from the second image to the first image according to the second location difference;
  • acquiring the first triangular patch deformation frame image sequence according to the first feature point location track, and acquiring the second triangular patch deformation frame image sequence according to the second feature point location track.
  • Optionally, computing the first feature point location track from the first image to the second image according to the first location difference includes:
  • computing the first feature point location track as:
  • si:A→B = (i/N) × DA→B + sA.
  • The si:A→B is the first feature point location track, the N is a number of transform frames transforming the first image into the second image, the i is an ith frame image in transform image frames, the i is an integer greater than 0 and less than or equal to the N, the sA is the first coordinate set, and the DA→B is the first location difference.
  • Computing the second feature point location track from the second image to the first image according to the second location difference may include:
  • computing the second feature point location track as:
  • si:B→A = (i/N) × DB→A + sB.
  • The si:B→A is the second feature point location track, the sB is the second coordinate set, and the DB→A is the second location difference.
  • Optionally, performing similarity transformation on each image sequence of the first triangular patch deformation frame image sequence, acquiring the first transform frame image sequence includes:
  • determining a first similarity transformation matrix transforming the first image into the second image;
  • determining a step interval of a first transform parameter of the first similarity transformation matrix according to a number of transform frames transforming the first image into the second image;
  • computing a first similarity transformation sub-matrix for each transform frame image sequence of the first transform frame image sequence based on the first similarity transformation matrix and the step interval of the first transform parameter; and
  • acquiring the first transform frame image sequence by performing multiplication on each triangular patch deformation frame image in the first triangular patch deformation frame image sequence according to the first similarity transformation sub-matrix.
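A sketch of this stepping, assuming the similarity matrix is decomposed into uniform scale, rotation angle, and translation, each advanced by i step intervals from the identity (the disclosure does not fix the decomposition):

    import cv2
    import numpy as np

    def stepped_sub_matrices(M, N):
        # Decompose the 2x3 similarity matrix into its transform parameters.
        scale = np.hypot(M[0, 0], M[1, 0])
        angle = np.arctan2(M[1, 0], M[0, 0])
        tx, ty = M[0, 2], M[1, 2]
        subs = []
        for i in range(1, N + 1):
            f = i / N  # i accumulated step intervals of 1/N
            s_i = 1.0 + f * (scale - 1.0)
            a_i = f * angle
            c, s = s_i * np.cos(a_i), s_i * np.sin(a_i)
            subs.append(np.array([[c, -s, f * tx],
                                  [s,  c, f * ty]], dtype=np.float32))
        return subs

    # The "multiplication" on a deformation frame then amounts to warping it:
    # frame_i = cv2.warpAffine(deform_frames[i - 1], subs[i - 1], (w, h))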
  • Additionally or alternatively, performing similarity transformation on each image sequence of the second triangular patch deformation frame image sequence, acquiring the second transform frame image sequence includes:
  • determining a second similarity transformation matrix transforming the second image into the first image;
  • determining a step interval of a second transform parameter of the second similarity transformation matrix according to a number of transform frames transforming the second image into the first image;
  • computing a second similarity transformation sub-matrix for each transform frame image sequence of the second transform frame image sequence based on the second similarity transformation matrix and the step interval of the second transform parameter; and
  • acquiring the second transform frame image sequence by performing multiplication on each triangular patch deformation frame image in the second triangular patch deformation frame image sequence according to the second similarity transformation sub-matrix.
  • Optionally, fusing the first transform frame image sequence and the second transform frame image sequence, acquiring the video frame sequence corresponding to the first image and the second image includes:
  • determining a first weight and a second weight, the first weight being a weight of an ith frame image of the first transform frame image sequence during fusion, the second weight being a weight of an ith frame image of the second transform frame image sequence during fusion, the i being greater than 0 and less than or equal to a number of frames of the first transform frame image sequence;
  • multiplying each pixel of the ith frame image of the first transform frame image sequence by the first weight, acquiring a first to-be-fused image; multiplying the ith frame image of the second transform frame image sequence by the second weight, acquiring a second to-be-fused image; and
  • respectively superimposing each pixel in the first to-be-fused image and each pixel in the second to-be-fused image, acquiring an ith frame fusion image corresponding to the ith frame image of the first transform frame image sequence and the ith frame image of the second transform frame image sequence.
  • All fusion images corresponding to the first transform frame image sequence and the second transform frame image sequence may form the video frame sequence.
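A compact sketch of this fusion, assuming a linear cross-dissolve weight schedule and that the B-to-A sequence is reversed so that frame i of both sequences depicts the same morph instant; the disclosure fixes neither choice:

    import cv2

    def fuse_sequences(seq_ab, seq_ba):
        N = len(seq_ab)
        rev_ba = seq_ba[::-1]  # assumed: align the B->A frames with the A->B frames
        fused = []
        for i in range(1, N + 1):
            w1 = 1.0 - i / N  # weight of the ith A->B frame
            w2 = i / N        # weight of the ith (aligned) B->A frame
            fused.append(cv2.addWeighted(seq_ab[i - 1], w1, rev_ba[i - 1], w2, 0.0))
        return fused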
  • Optionally, the feature points include at least one of:
  • an eye, a nose tip, a mouth corner point, an eyebrow, a cheek, and a contour point of an eye, a nose, lips, an eyebrow, and a cheek.
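The disclosure does not prescribe a landmark detector. As one illustration, dlib's 68-point shape predictor yields eye, nose-tip, mouth-corner, eyebrow, and contour points of this kind (the model file path is an assumption):

    import dlib

    detector = dlib.get_frontal_face_detector()
    predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

    def face_feature_points(gray):
        # Returns 68 (x, y) landmarks for the first detected face, or None.
        faces = detector(gray)
        if not faces:
            return None
        shape = predictor(gray, faces[0])
        return [(shape.part(k).x, shape.part(k).y) for k in range(68)]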
  • Further note that although operations are described in a specific order in the drawings herein, this should not be construed as requiring that the operations be performed in that specific order or sequence, or that every operation shown be performed, in order to achieve an expected result. In specific circumstances, multitasking and parallel processing may be advantageous.
  • Other implementations of the present disclosure will be apparent to a person having ordinary skill in the art who has considered the specification and practiced the present disclosure. The present disclosure is intended to cover any variation, use, or adaptation of the present disclosure following the general principles of the present disclosure and including such departures from the present disclosure as come within common knowledge or customary practice in the art. The specification and the examples are intended to be illustrative only.
  • It should be understood that the present disclosure is not limited to the exact construction that has been described above and illustrated in the accompanying drawings, and that various modifications and changes may be made to the present disclosure without departing from the scope of the present disclosure.
  • According to a first aspect of examples of the present disclosure, there is provided a method for processing an image, including:
  • acquiring at least two images;
  • acquiring at least two crop images by cropping the at least two images for face-containing images;
  • of the at least two crop images, performing triangular patch deformation on a first image and a second image that neighbour each other, generating a first triangular patch deformation frame image sequence from the first image to the second image, and a second triangular patch deformation frame image sequence from the second image to the first image;
  • performing similarity transformation on each image sequence of the first triangular patch deformation frame image sequence, acquiring a first transform frame image sequence;
  • performing similarity transformation on each image sequence of the second triangular patch deformation frame image sequence, acquiring a second transform frame image sequence;
  • fusing the first transform frame image sequence and the second transform frame image sequence, acquiring a video frame sequence corresponding to the first image and the second image; and
  • of the at least two crop images, coding a video frame sequence generated by all neighbour images, acquiring a destined video.
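Taken together, the steps of the first aspect form a crop-morph-transform-fuse-encode pipeline. The final coding step might look as follows with OpenCV's VideoWriter; the codec and frame rate are assumptions:

    import cv2

    def encode_destined_video(frame_sequences, path, fps=25):
        # frame_sequences: one fused video frame sequence per pair of
        # neighbouring crop images, in display order.
        h, w = frame_sequences[0][0].shape[:2]
        writer = cv2.VideoWriter(path, cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
        for seq in frame_sequences:
            for frame in seq:
                writer.write(frame)
        writer.release()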
  • Optionally, acquiring the at least two crop images by cropping the at least two images for the face-containing images includes:
  • determining feature points of a face contained in the at least two images;
  • determining a first region based on the feature points for the face-containing images in the at least two images; and
  • acquiring the at least two crop images by cropping the face-containing images based on the first region.
  • Optionally, determining the first region based on the feature points includes:
  • determining a circumscription frame circumscribing the feature points according to location information of the feature points;
  • determining a width of the first region according to a center point of the circumscription frame and an image width boundary of the face image to be processed; and
  • determining a height of the first region according to a preset aspect ratio and the width of the first region.
  • Optionally, generating the first triangular patch deformation frame image sequence from the first image to the second image, and the second triangular patch deformation frame image sequence from the second image to the first image includes:
  • computing, according to a first coordinate set of feature points of a face of the first image and a second coordinate set of feature points of a face of the second image, a second similarity transformation matrix and a first similarity transformation matrix between the first coordinate set and the second coordinate set;
  • determining a first location difference between a feature point of the first image transformed by the first similarity transformation matrix and a corresponding feature point of the second image; determining a second location difference between a feature point of the second image transformed by the second similarity transformation matrix and a corresponding feature point of the first image;
  • computing a first feature point location track from the first image to the second image according to the first location difference; computing a second feature point location track from the second image to the first image according to the second location difference;
  • acquiring the first triangular patch deformation frame image sequence according to the first feature point location track, and acquiring the second triangular patch deformation frame image sequence according to the second feature point location track.
  • Optionally, computing the first feature point location track from the first image to the second image according to the first location difference includes:
  • computing the first feature point location track as:
  • si:A→B = (i/N) × DA→B + sA.
  • The si:A→B is the first feature point location track, the N is the number of transform frames transforming the first image into the second image, the i denotes the ith frame image among the transform frame images and is an integer greater than 0 and less than or equal to the N, the sA is the first coordinate set, and the DA→B is the first location difference.
  • Computing the second feature point location track from the second image to the first image according to the second location difference may include:
  • computing the second feature point location track as:
  • si:B→A = (i/N) × DB→A + sB.
  • The si:B→A is the second feature point location track, the sB is the second coordinate set, and the DB→A is the second location difference.
  • Optionally, performing similarity transformation on each image sequence of the first triangular patch deformation frame image sequence, acquiring the first transform frame image sequence includes:
  • determining a first similarity transformation matrix transforming the first image into the second image;
  • determining a step interval of a first transform parameter of the first similarity transformation matrix according to a number of transform frames transforming the first image into the second image;
  • computing a first similarity transformation sub-matrix for each transform frame image sequence of the first transform frame image sequence based on the first similarity transformation matrix and the step interval of the first transform parameter; and
  • acquiring the first transform frame image sequence by performing multiplication on each triangular patch deformation frame image in the first triangular patch deformation frame image sequence according to the first similarity transformation sub-matrix.
  • Optionally, performing similarity transformation on each image sequence of the second triangular patch deformation frame image sequence, acquiring the second transform frame image sequence includes:
  • acquiring a second similarity transformation sub-matrix by performing inverse transformation on the first similarity transformation sub-matrix; and acquiring the second transform frame image sequence by performing multiplication on each triangular patch deformation frame image in the second triangular patch deformation frame image sequence according to the second similarity transformation sub-matrix, or
  • determining a second similarity transformation matrix transforming the second image into the first image;
  • determining a step interval of a second transform parameter of the second similarity transformation matrix according to a number of transform frames transforming the second image into the first image;
  • computing a second similarity transformation sub-matrix for each transform frame image sequence of the second transform frame image sequence based on the second similarity transformation matrix and the step interval of the second transform parameter; and
  • acquiring the second transform frame image sequence by performing multiplication on each triangular patch deformation frame image in the second triangular patch deformation frame image sequence according to the second similarity transformation sub-matrix.
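The first alternative above reuses the forward sub-matrices: inverting each first similarity transformation sub-matrix yields the corresponding second sub-matrix without re-estimating a transformation from the second image to the first. A minimal sketch with OpenCV:

    import cv2

    def second_sub_matrices(first_subs):
        # Invert each 2x3 forward sub-matrix to obtain the backward one.
        return [cv2.invertAffineTransform(M) for M in first_subs]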
  • Optionally, fusing the first transform frame image sequence and the second transform frame image sequence, acquiring the video frame sequence corresponding to the first image and the second image includes:
  • determining a first weight and a second weight, the first weight being a weight of an ith frame image of the first transform frame image sequence during fusion, the second weight being a weight of an ith frame image of the second transform frame image sequence during fusion, the i being greater than 0 and less than or equal to a number of frames of the first transform frame image sequence;
  • multiplying each pixel of the ith frame image of the first transform frame image sequence by the first weight, acquiring a first to-be-fused image; multiplying the ith frame image of the second transform frame image sequence by the second weight, acquiring a second to-be-fused image; and
  • respectively superimposing each pixel in the first to-be-fused image and each pixel in the second to-be-fused image, acquiring an ith frame fusion image corresponding to the ith frame image of the first transform frame image sequence and the ith frame image of the second transform frame image sequence.
  • All fusion images corresponding to the first transform frame image sequence and the second transform frame image sequence may form the video frame sequence.
  • According to a second aspect of examples of the present disclosure, there is provided a device for processing an image, including:
  • an acquiring unit configured to acquire at least two images;
  • a cropping unit configured to acquire at least two crop images by cropping the at least two images for face-containing images;
  • a triangular patch deformation unit configured to, of the at least two crop images, perform triangular patch deformation on a first image and a second image that neighbour each other, generating a first triangular patch deformation frame image sequence from the first image to the second image, and a second triangular patch deformation frame image sequence from the second image to the first image;
  • a similarity transformation unit configured to perform similarity transformation on each image sequence of the first triangular patch deformation frame image sequence, acquiring a first transform frame image sequence; and perform similarity transformation on each image sequence of the second triangular patch deformation frame image sequence, acquiring a second transform frame image sequence;
  • a fusing unit configured to fuse the first transform frame image sequence and the second transform frame image sequence, acquiring a video frame sequence corresponding to the first image and the second image; and
  • a video generating unit configured to, of the at least two crop images, code a video frame sequence generated by all neighbour images, acquiring a destined video.
  • Optionally, the cropping unit is further configured to:
  • determine feature points of a face contained in the at least two images;
  • determine a first region based on the feature points for the face-containing images in the at least two images; and
  • acquire the at least two crop images by cropping the face-containing images based on the first region.
  • Optionally, the cropping unit is further configured to:
  • determine a circumscription frame circumscribing the feature points according to location information of the feature points;
  • determine a width of the first region according to a center point of the circumscription frame and an image width boundary of the face image to be processed; and
  • determine a height of the first region according to a preset aspect ratio and the width of the first region.
  • Optionally, the triangular patch deformation unit is further configured to:
  • compute, according to a first coordinate set of feature points of a face of the first image and a second coordinate set of feature points of a face of the second image, a second similarity transformation matrix and a first similarity transformation matrix between the first coordinate set and the second coordinate set;
  • determine a first location difference between a feature point of the first image transformed by the first similarity transformation matrix and a corresponding feature point of the second image; and determine a second location difference between a feature point of the second image transformed by the second similarity transformation matrix and a corresponding feature point of the first image;
  • compute a first feature point location track from the first image to the second image according to the first location difference; and compute a second feature point location track from the second image to the first image according to the second location difference; and
  • acquire the first triangular patch deformation frame image sequence according to the first feature point location track, and acquire the second triangular patch deformation frame image sequence according to the second feature point location track.
  • Optionally, the similarity transformation unit is further configured to:
  • determine a first similarity transformation matrix transforming the first image into the second image;
  • determine a step interval of a first transform parameter of the first similarity transformation matrix according to a number of transform frames transforming the first image into the second image;
  • compute a first similarity transformation sub-matrix for each transform frame image sequence of the first transform frame image sequence based on the first similarity transformation matrix and the step interval of the first transform parameter; and
  • acquire the first transform frame image sequence by performing multiplication on each triangular patch deformation frame image in the first triangular patch deformation frame image sequence according to the first similarity transformation sub-matrix.
  • Optionally, the similarity transformation unit is further configured to:
  • acquire a second similarity transformation sub-matrix by performing inverse transformation on the first similarity transformation sub-matrix; and acquire the second transform frame image sequence by performing multiplication on each triangular patch deformation frame image in the second triangular patch deformation frame image sequence according to the second similarity transformation sub-matrix, or
  • determine a second similarity transformation matrix transforming the second image into the first image; determine a step interval of a second transform parameter of the second similarity transformation matrix according to a number of transform frames transforming the second image into the first image; compute a second similarity transformation sub-matrix for each transform frame image sequence of the second transform frame image sequence based on the second similarity transformation matrix and the step interval of the second transform parameter; and acquire the second transform frame image sequence by performing multiplication on each triangular patch deformation frame image in the second triangular patch deformation frame image sequence according to the second similarity transformation sub-matrix.
  • Optionally, the fusing unit is further configured to:
  • determine a first weight and a second weight, the first weight being a weight of an ith frame image of the first transform frame image sequence during fusion, the second weight being a weight of an ith frame image of the second transform frame image sequence during fusion, the i being greater than 0 and less than or equal to a number of frames of the first transform frame image sequence;
  • multiply each pixel of the ith frame image of the first transform frame image sequence by the first weight, acquiring a first to-be-fused image; multiply the ith frame image of the second transform frame image sequence by the second weight, acquiring a second to-be-fused image; and
  • respectively superimpose each pixel in the first to-be-fused image and each pixel in the second to-be-fused image, acquiring an ith frame fusion image corresponding to the ith frame image of the first transform frame image sequence and the ith frame image of the second transform frame image sequence.
  • All fusion images corresponding to the first transform frame image sequence and the second transform frame image sequence may form the video frame sequence.
  • According to a third aspect of examples of the present disclosure, there is provided electronic equipment including a processor and a memory for storing processor executable instructions. The processor is configured to implement a step of the method for processing an image by calling the executable instructions in the memory.
  • According to a fourth aspect of examples of the present disclosure, there is provided a non-transitory computer-readable storage medium having stored therein instructions which, when executed by a processor of electronic equipment, allow the electronic equipment to implement a step of the method for processing an image.
  • A technical solution provided by examples of the present disclosure may include beneficial effects as follows.
  • In examples of the present disclosure, a face of a portrait to be transformed is cropped, then triangular patch deformation is performed, and similarity transformation is performed on each image sequence in the deformation frame image sequence. This avoids errors and jitter in the image transform frames, improves the quality and stability of the face transform video, and greatly improves user experience.
  • The present disclosure may include dedicated hardware implementations such as application specific integrated circuits, programmable logic arrays, and other hardware devices. The hardware implementations can be constructed to implement one or more of the methods described herein. Examples may include apparatus and systems of various implementations, which can broadly include a variety of electronic and computing systems. One or more examples described herein may implement functions using two or more specific interconnected hardware modules or devices with related control and data signals that can be communicated between and through the modules, or as portions of an application-specific integrated circuit. Accordingly, the system disclosed may encompass software, firmware, and hardware implementations. The terms "module," "sub-module," "circuit," "sub-circuit," "circuitry," "sub-circuitry," "unit," or "sub-unit" may include memory (shared, dedicated, or group) that stores code or instructions that can be executed by one or more processors. A module referred to herein may include one or more circuits, with or without stored code or instructions. The module or circuit may include one or more connected components.

Claims (20)

What is claimed is:
1. A method for processing an image, comprising:
acquiring at least two images;
acquiring at least two crop images by cropping the at least two images for face-containing images;
of the at least two crop images, performing triangular patch deformation on a first image and a second image that neighbour each other, generating a first triangular patch deformation frame image sequence from the first image to the second image, and a second triangular patch deformation frame image sequence from the second image to the first image;
performing similarity transformation on each image sequence of the first triangular patch deformation frame image sequence, acquiring a first transform frame image sequence;
performing similarity transformation on each image sequence of the second triangular patch deformation frame image sequence, acquiring a second transform frame image sequence;
fusing the first transform frame image sequence and the second transform frame image sequence, acquiring a video frame sequence corresponding to the first image and the second image; and
of the at least two crop images, coding a video frame sequence generated by all neighbour images, acquiring a destined video.
2. The method of claim 1, wherein acquiring the at least two crop images by cropping the at least two images for the face-containing images comprises:
determining feature points of a face contained in the at least two images;
determining a first region based on the feature points for the face-containing images in the at least two images; and
acquiring the at least two crop images by cropping the face-containing images based on the first region.
3. The method of claim 2, wherein determining the first region based on the feature points comprises:
determining a circumscription frame circumscribing the feature points according to location information of the feature points;
determining a width of the first region according to a center point of the circumscription frame and an image width boundary of the face image to be processed; and
determining a height of the first region according to a preset aspect ratio and the width of the first region.
4. The method of claim 1, wherein generating the first triangular patch deformation frame image sequence from the first image to the second image, and the second triangular patch deformation frame image sequence from the second image to the first image comprises:
computing, according to a first coordinate set of feature points of a face of the first image and a second coordinate set of feature points of a face of the second image, a second similarity transformation matrix and a first similarity transformation matrix between the first coordinate set and the second coordinate set;
determining a first location difference between a feature point of the first image transformed by the first similarity transformation matrix and a corresponding feature point of the second image; determining a second location difference between a feature point of the second image transformed by the second similarity transformation matrix and a corresponding feature point of the first image;
computing a first feature point location track from the first image to the second image according to the first location difference; computing a second feature point location track from the second image to the first image according to the second location difference; and
acquiring the first triangular patch deformation frame image sequence according to the first feature point location track, and acquiring the second triangular patch deformation frame image sequence according to the second feature point location track.
5. The method of claim 4, wherein:
computing the first feature point location track from the first image to the second image according to the first location difference comprises:
computing the first feature point location track as:
si:A→B = (i/N) × DA→B + sA,
wherein the si:A→B is the first feature point location track, the N is the number of transform frames transforming the first image into the second image, the i denotes the ith frame image among the transform frame images and is an integer greater than 0 and less than or equal to the N, the sA is the first coordinate set, and the DA→B is the first location difference, and
computing the second feature point location track from the second image to the first image according to the second location difference comprises:
computing the second feature point location track as:
si:B→A = (i/N) × DB→A + sB,
wherein the si:B→A is the second feature point location track, the sB is the second coordinate set, and the DB→A is the second location difference.
6. The method of claim 1, wherein performing similarity transformation on each image sequence of the first triangular patch deformation frame image sequence, acquiring the first transform frame image sequence comprises:
determining a first similarity transformation matrix transforming the first image into the second image;
determining a step interval of a first transform parameter of the first similarity transformation matrix according to a number of transform frames transforming the first image into the second image;
computing a first similarity transformation sub-matrix for each transform frame image sequence of the first transform frame image sequence based on the first similarity transformation matrix and the step interval of the first transform parameter; and
acquiring the first transform frame image sequence by performing multiplication on each triangular patch deformation frame image in the first triangular patch deformation frame image sequence according to the first similarity transformation sub-matrix.
7. The method of claim 6, wherein performing similarity transformation on each image sequence of the second triangular patch deformation frame image sequence, acquiring the second transform frame image sequence comprises:
acquiring a second similarity transformation sub-matrix by performing inverse transformation on the first similarity transformation sub-matrix; and acquiring the second transform frame image sequence by performing multiplication on each triangular patch deformation frame image in the second triangular patch deformation frame image sequence according to the second similarity transformation sub-matrix.
8. The method of claim 1, wherein performing similarity transformation on each image sequence of the second triangular patch deformation frame image sequence, acquiring the second transform frame image sequence comprises:
determining a second similarity transformation matrix transforming the second image into the first image;
determining a step interval of a second transform parameter of the second similarity transformation matrix according to a number of transform frames transforming the second image into the first image;
computing a second similarity transformation sub-matrix for each transform frame image sequence of the second transform frame image sequence based on the second similarity transformation matrix and the step interval of the second transform parameter; and
acquiring the second transform frame image sequence by performing multiplication on each triangular patch deformation frame image in the second triangular patch deformation frame image sequence according to the second similarity transformation sub-matrix.
9. The method of claim 1, wherein fusing the first transform frame image sequence and the second transform frame image sequence, acquiring the video frame sequence corresponding to the first image and the second image comprises:
determining a first weight and a second weight, wherein the first weight is a weight of an ith frame image of the first transform frame image sequence during fusion, and the second weight is a weight of an ith frame image of the second transform frame image sequence during fusion, wherein the i is greater than 0 and less than or equal to a number of frames of the first transform frame image sequence;
multiplying each pixel of the ith frame image of the first transform frame image sequence by the first weight, acquiring a first to-be-fused image; multiplying the ith frame image of the second transform frame image sequence by the second weight, acquiring a second to-be-fused image; and
respectively superimposing each pixel in the first to-be-fused image and each pixel in the second to-be-fused image, acquiring an ith frame fusion image corresponding to the ith frame image of the first transform frame image sequence and the ith frame image of the second transform frame image sequence, and
wherein all fusion images corresponding to the first transform frame image sequence and the second transform frame image sequence form the video frame sequence.
10. Electronic equipment comprising a processor and a memory for storing processor executable instructions, wherein the processor is configured, by calling the executable instructions in the memory, to implement:
acquiring at least two images;
acquiring at least two crop images by cropping the at least two images for face-containing images;
of the at least two crop images, performing triangular patch deformation on a first image and a second image that neighbour each other, generating a first triangular patch deformation frame image sequence from the first image to the second image, and a second triangular patch deformation frame image sequence from the second image to the first image;
performing similarity transformation on each image sequence of the first triangular patch deformation frame image sequence, acquiring a first transform frame image sequence;
performing similarity transformation on each image sequence of the second triangular patch deformation frame image sequence, acquiring a second transform frame image sequence;
fusing the first transform frame image sequence and the second transform frame image sequence, acquiring a video frame sequence corresponding to the first image and the second image; and
of the at least two crop images, coding a video frame sequence generated by all neighbour images, acquiring a destined video.
11. The electronic equipment of claim 10, wherein the processor configured to implement acquiring the at least two crop images by cropping the at least two images for the face-containing images is further configured to implement:
determining feature points of a face contained in the at least two images;
determining a first region based on the feature points for the face-containing images in the at least two images; and
acquiring the at least two crop images by cropping the face-containing images based on the first region.
12. The electronic equipment of claim 11, wherein the processor configured to implement determining the first region based on the feature points is further configured to implement:
determining a circumscription frame circumscribing the feature points according to location information of the feature points;
determining a width of the first region according to a center point of the circumscription frame and an image width boundary of the face image to be processed; and
determining a height of the first region according to a preset aspect ratio and the width of the first region.
13. The electronic equipment of claim 10, wherein the processor configured to implement generating the first triangular patch deformation frame image sequence from the first image to the second image, and the second triangular patch deformation frame image sequence from the second image to the first image, is further configured to implement:
computing, according to a first coordinate set of feature points of a face of the first image and a second coordinate set of feature points of a face of the second image, a second similarity transformation matrix and a first similarity transformation matrix between the first coordinate set and the second coordinate set;
determining a first location difference between a feature point of the first image transformed by the first similarity transformation matrix and a corresponding feature point of the second image; determining a second location difference between a feature point of the second image transformed by the second similarity transformation matrix and a corresponding feature point of the first image;
computing a first feature point location track from the first image to the second image according to the first location difference; computing a second feature point location track from the second image to the first image according to the second location difference; and
acquiring the first triangular patch deformation frame image sequence according to the first feature point location track, and acquiring the second triangular patch deformation frame image sequence according to the second feature point location track.
14. The electronic equipment of claim 13, wherein:
the processor configured to implement computing the first feature point location track from the first image to the second image according to the first location difference is further configured to implement:
computing the first feature point location track as:
si:A→B = (i/N) × DA→B + sA,
wherein the si:A→B is the first feature point location track, the N is the number of transform frames transforming the first image into the second image, the i denotes the ith frame image among the transform frame images and is an integer greater than 0 and less than or equal to the N, the sA is the first coordinate set, and the DA→B is the first location difference, and
the processor configured to implement computing the second feature point location track from the second image to the first image according to the second location difference is further configured to implement:
computing the second feature point location track as:
si:B→A = (i/N) × DB→A + sB,
wherein the si:B→A is the second feature point location track, the sB is the second coordinate set, and the DB→A is the second location difference.
15. The electronic equipment of claim 10, wherein the processor configured to implement performing similarity transformation on each image sequence of the first triangular patch deformation frame image sequence, acquiring the first transform frame image sequence, is further configured to implement:
determining a first similarity transformation matrix transforming the first image into the second image;
determining a step interval of a first transform parameter of the first similarity transformation matrix according to a number of transform frames transforming the first image into the second image;
computing a first similarity transformation sub-matrix for each transform frame image sequence of the first transform frame image sequence based on the first similarity transformation matrix and the step interval of the first transform parameter; and
acquiring the first transform frame image sequence by performing multiplication on each triangular patch deformation frame image in the first triangular patch deformation frame image sequence according to the first similarity transformation sub-matrix.
16. The electronic equipment of claim 15, wherein the processor configured to implement performing similarity transformation on each image sequence of the second triangular patch deformation frame image sequence, acquiring the second transform frame image sequence, is further configured to implement:
acquiring a second similarity transformation sub-matrix by performing inverse transformation on the first similarity transformation sub-matrix; and acquiring the second transform frame image sequence by performing multiplication on each triangular patch deformation frame image in the second triangular patch deformation frame image sequence according to the second similarity transformation sub-matrix.
17. The electronic equipment of claim 10, wherein the processor configured to implement performing similarity transformation on each image sequence of the second triangular patch deformation frame image sequence, acquiring the second transform frame image sequence, is further configured to implement:
determining a second similarity transformation matrix transforming the second image into the first image;
determining a step interval of a second transform parameter of the second similarity transformation matrix according to a number of transform frames transforming the second image into the first image;
computing a second similarity transformation sub-matrix for each transform frame image sequence of the second transform frame image sequence based on the second similarity transformation matrix and the step interval of the second transform parameter; and
acquiring the second transform frame image sequence by performing multiplication on each triangular patch deformation frame image in the second triangular patch deformation frame image sequence according to the second similarity transformation sub-matrix.
18. The electronic equipment of claim 10, wherein the processor configured to implement fusing the first transform frame image sequence and the second transform frame image sequence, acquiring the video frame sequence corresponding to the first image and the second image, is further configured to implement:
determining a first weight and a second weight, wherein the first weight is a weight of an ith frame image of the first transform frame image sequence during fusion, and the second weight is a weight of an ith frame image of the second transform frame image sequence during fusion, wherein the i is greater than 0 and less than or equal to a number of frames of the first transform frame image sequence;
multiplying each pixel of the ith frame image of the first transform frame image sequence by the first weight, acquiring a first to-be-fused image; multiplying the ith frame image of the second transform frame image sequence by the second weight, acquiring a second to-be-fused image; and
respectively superimposing each pixel in the first to-be-fused image and each pixel in the second to-be-fused image, acquiring an ith frame fusion image corresponding to the ith frame image of the first transform frame image sequence and the ith frame image of the second transform frame image sequence, and
wherein all fusion images corresponding to the first transform frame image sequence and the second transform frame image sequence form the video frame sequence.
19. A non-transitory computer-readable storage medium having stored therein instructions which, when executed by a processor of electronic equipment, cause the electronic equipment to implement:
acquiring at least two images;
acquiring at least two crop images by cropping the at least two images for face-containing images;
of the at least two crop images, performing triangular patch deformation on a first image and a second image that neighbour each other, generating a first triangular patch deformation frame image sequence from the first image to the second image, and a second triangular patch deformation frame image sequence from the second image to the first image;
performing similarity transformation on each image sequence of the first triangular patch deformation frame image sequence, acquiring a first transform frame image sequence;
performing similarity transformation on each image sequence of the second triangular patch deformation frame image sequence, acquiring a second transform frame image sequence;
fusing the first transform frame image sequence and the second transform frame image sequence, acquiring a video frame sequence corresponding to the first image and the second image; and
of the at least two crop images, coding a video frame sequence generated by all neighbour images, acquiring a destined video.
20. The storage medium of claim 19, wherein the instructions causing the electronic equipment to implement acquiring the at least two crop images by cropping the at least two images for the face-containing images further cause the electronic equipment to implement:
determining feature points of a face contained in the at least two images;
determining a first region based on the feature points for the face-containing images in the at least two images; and
acquiring the at least two crop images by cropping the face-containing images based on the first region.
US17/334,926 2020-11-20 2021-05-31 Method for processing image, electronic equipment, and storage medium Active 2041-07-09 US11532069B2 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011314245.3 2020-11-20
CN202011314245.3A CN112508773B (en) 2020-11-20 2020-11-20 Image processing method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
US20220164920A1 true US20220164920A1 (en) 2022-05-26
US11532069B2 US11532069B2 (en) 2022-12-20

Family

ID=74958154

Family Applications (1)

Application Number Title Priority Date Filing Date
US17/334,926 Active 2041-07-09 US11532069B2 (en) 2020-11-20 2021-05-31 Method for processing image, electronic equipment, and storage medium

Country Status (3)

Country Link
US (1) US11532069B2 (en)
EP (1) EP4002261A1 (en)
CN (1) CN112508773B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116386074A (en) * 2023-06-07 2023-07-04 青岛雅筑景观设计有限公司 Intelligent processing and management system for garden engineering design data
CN117714903A (en) * 2024-02-06 2024-03-15 成都唐米科技有限公司 Video synthesis method and device based on follow-up shooting and electronic equipment

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113961746B (en) * 2021-09-29 2023-11-21 北京百度网讯科技有限公司 Video generation method, device, electronic equipment and readable storage medium

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6016148A (en) * 1997-06-06 2000-01-18 Digital Equipment Corporation Automated mapping of facial images to animation wireframes topologies
US6181806B1 (en) * 1993-03-29 2001-01-30 Matsushita Electric Industrial Co., Ltd. Apparatus for identifying a person using facial features
US20100189361A1 (en) * 2009-01-28 2010-07-29 Seiko Epson Corporation Image processing apparatus for detecting coordinate positions of characteristic portions of face
US20100302643A1 (en) * 2007-05-09 2010-12-02 Felix Rodriguez Larreta Image-producing apparatus
US8290278B2 (en) * 2009-02-10 2012-10-16 Seiko Epson Corporation Specifying position of characteristic portion of face image
US20130079911A1 (en) * 2011-09-27 2013-03-28 University Of Science And Technology Of China Method and device for generating morphing animation
US20180084260A1 (en) * 2016-09-16 2018-03-22 Qualcomm Incorporated Offset vector identification of temporal motion vector predictor
US20210365707A1 (en) * 2020-05-20 2021-11-25 Qualcomm Incorporated Maintaining fixed sizes for target objects in frames
US11423556B2 (en) * 2016-12-06 2022-08-23 Activision Publishing, Inc. Methods and systems to modify two dimensional facial images in a video to generate, in real-time, facial images that appear three dimensional
US11421905B2 (en) * 2018-06-14 2022-08-23 Panasonic Intellectual Property Management Co., Ltd. Information processing method, recording medium, and information processing system

Family Cites Families (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105893920B (en) * 2015-01-26 2019-12-27 阿里巴巴集团控股有限公司 Face living body detection method and device
CN104766084B (en) * 2015-04-10 2017-12-05 南京大学 A kind of nearly copy image detection method of multiple target matching
CN106846317B (en) * 2017-02-27 2021-09-17 北京连心医疗科技有限公司 Medical image retrieval method based on feature extraction and similarity matching
CN107067370A (en) * 2017-04-12 2017-08-18 长沙全度影像科技有限公司 A kind of image split-joint method based on distortion of the mesh
CN109241810B (en) * 2017-07-10 2022-01-28 腾讯科技(深圳)有限公司 Virtual character image construction method and device and storage medium
JP7244488B2 (en) * 2018-03-15 2023-03-22 株式会社村上開明堂 Composite video creation device, composite video creation method, and composite video creation program
JP6931267B2 (en) * 2018-06-21 2021-09-01 Kddi株式会社 A program, device and method for generating a display image obtained by transforming the original image based on the target image.
CN109035145B (en) * 2018-08-02 2022-11-18 广州市鑫广飞信息科技有限公司 Video image self-adaptive splicing method and device based on video frame matching information
CN109636714A (en) * 2018-08-30 2019-04-16 沈阳聚声医疗系统有限公司 A kind of image split-joint method of ultrasonic wide-scene imaging
CN111010590B (en) * 2018-10-08 2022-05-17 阿里巴巴(中国)有限公司 Video clipping method and device
CN110049351B (en) 2019-05-23 2022-01-25 北京百度网讯科技有限公司 Method and device for deforming human face in video stream, electronic equipment and computer readable medium
CN110136229B (en) * 2019-05-27 2023-07-14 广州亮风台信息科技有限公司 Method and equipment for real-time virtual face changing
CN110572534A (en) * 2019-09-19 2019-12-13 浙江大搜车软件技术有限公司 Digital video image stabilization method, device, equipment and storage medium of panoramic image
CN110853140A (en) * 2019-10-11 2020-02-28 北京空间机电研究所 DEM (digital elevation model) -assisted optical video satellite image stabilization method
CN111327840A (en) * 2020-02-27 2020-06-23 努比亚技术有限公司 Multi-frame special-effect video acquisition method, terminal and computer readable storage medium
CN111462029B (en) * 2020-03-27 2023-03-03 阿波罗智能技术(北京)有限公司 Visual point cloud and high-precision map fusion method and device and electronic equipment
CN111311743B (en) * 2020-03-27 2023-04-07 北京百度网讯科技有限公司 Three-dimensional reconstruction precision testing method and device and electronic equipment
CN111583280B (en) * 2020-05-13 2022-03-15 北京字节跳动网络技术有限公司 Image processing method, device, equipment and computer readable storage medium
CN111626246B (en) * 2020-06-01 2022-07-15 浙江中正智能科技有限公司 Face alignment method under mask shielding
CN111666911A (en) * 2020-06-13 2020-09-15 天津大学 Micro-expression data expansion method and device

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6181806B1 (en) * 1993-03-29 2001-01-30 Matsushita Electric Industrial Co., Ltd. Apparatus for identifying a person using facial features
US6016148A (en) * 1997-06-06 2000-01-18 Digital Equipment Corporation Automated mapping of facial images to animation wireframes topologies
US20100302643A1 (en) * 2007-05-09 2010-12-02 Felix Rodriguez Larreta Image-producing apparatus
US20100189361A1 (en) * 2009-01-28 2010-07-29 Seiko Epson Corporation Image processing apparatus for detecting coordinate positions of characteristic portions of face
US8290278B2 (en) * 2009-02-10 2012-10-16 Seiko Epson Corporation Specifying position of characteristic portion of face image
US20130079911A1 (en) * 2011-09-27 2013-03-28 University Of Science And Technology Of China Method and device for generating morphing animation
US20180084260A1 (en) * 2016-09-16 2018-03-22 Qualcomm Incorporated Offset vector identification of temporal motion vector predictor
US10812791B2 (en) * 2016-09-16 2020-10-20 Qualcomm Incorporated Offset vector identification of temporal motion vector predictor
US11423556B2 (en) * 2016-12-06 2022-08-23 Activision Publishing, Inc. Methods and systems to modify two dimensional facial images in a video to generate, in real-time, facial images that appear three dimensional
US11421905B2 (en) * 2018-06-14 2022-08-23 Panasonic Intellectual Property Management Co., Ltd. Information processing method, recording medium, and information processing system
US20210365707A1 (en) * 2020-05-20 2021-11-25 Qualcomm Incorporated Maintaining fixed sizes for target objects in frames

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116386074A (en) * 2023-06-07 2023-07-04 青岛雅筑景观设计有限公司 Intelligent processing and management system for garden engineering design data
CN117714903A (en) * 2024-02-06 2024-03-15 成都唐米科技有限公司 Video synthesis method and device based on follow-up shooting and electronic equipment

Also Published As

Publication number Publication date
CN112508773A (en) 2021-03-16
US11532069B2 (en) 2022-12-20
EP4002261A1 (en) 2022-05-25
CN112508773B (en) 2024-02-09

Similar Documents

Publication Publication Date Title
US11532069B2 (en) Method for processing image, electronic equipment, and storage medium
US20200302615A1 (en) Repositioning method and apparatus in camera pose tracking process, device, and storage medium
US11978219B2 (en) Method and device for determining motion information of image feature point, and task performing method and device
WO2021008456A1 (en) Image processing method and apparatus, electronic device, and storage medium
US10032076B2 (en) Method and device for displaying image
US11030733B2 (en) Method, electronic device and storage medium for processing image
CN107977934B (en) Image processing method and device
US11962897B2 (en) Camera movement control method and apparatus, device, and storage medium
JP2016522437A (en) Image display method, image display apparatus, terminal, program, and recording medium
CN106503682B (en) Method and device for positioning key points in video data
US11816924B2 (en) Method for behaviour recognition based on line-of-sight estimation, electronic equipment, and storage medium
CN111083513B (en) Live broadcast picture processing method and device, terminal and computer readable storage medium
CN110807769B (en) Image display control method and device
EP3650990A1 (en) Method and apparatus for adjusting holographic content and computer readable storage medium
EP3438924B1 (en) Method and device for processing picture
CN111127541B (en) Method and device for determining vehicle size and storage medium
US9665925B2 (en) Method and terminal device for retargeting images
KR20150084158A (en) Mobile terminal and controlling method thereof
EP3982291A1 (en) Object pick and place detection system, method and apparatus
US9619016B2 (en) Method and device for displaying wallpaper image on screen
CN112399080A (en) Video processing method, device, terminal and computer readable storage medium
CN113012211A (en) Image acquisition method, device, system, computer equipment and storage medium
CN111918089A (en) Video stream processing method, video stream display method, device and equipment
CN112184802A (en) Calibration frame adjusting method and device and storage medium
US11790692B2 (en) Method for behaviour recognition, electronic equipment, and storage medium

Legal Events

Date Code Title Description
FEPP Fee payment procedure

Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

AS Assignment

Owner name: BEIJING XIAOMI PINECONE ELECTRONICS CO., LTD., CHINA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:HU, XIAN;DENG, WEI;YI, JUN;REEL/FRAME:056406/0662

Effective date: 20210527

Owner name: XIAOMI TECHNOLOGY (WUHAN) CO., LTD., CHINA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:HU, XIAN;DENG, WEI;YI, JUN;REEL/FRAME:056406/0662

Effective date: 20210527

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS

STPP Information on status: patent application and granting procedure in general

Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED

STCF Information on status: patent grant

Free format text: PATENTED CASE