CN112488085A - Face fusion method, device, equipment and storage medium


Publication number
CN112488085A
CN112488085A (application CN202011582832.0A)
Authority
CN
China
Prior art keywords
image
face
target
feature
target object
Prior art date
Legal status
Pending
Application number
CN202011582832.0A
Other languages
Chinese (zh)
Inventor
李亚洁
Current Assignee
Shenzhen TetrasAI Technology Co Ltd
Original Assignee
Shenzhen TetrasAI Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen TetrasAI Technology Co Ltd filed Critical Shenzhen TetrasAI Technology Co Ltd
Priority to CN202011582832.0A
Publication of CN112488085A
Legal status: Pending


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/25 Fusion techniques
    • G06F18/253 Fusion techniques of extracted features

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention relates to a face fusion method, apparatus, device, and storage medium. The face fusion method includes: acquiring a first face image of a target object, where the target object is an object within an image acquisition range; acquiring, according to the first face image, at least one person image whose similarity to the first face image satisfies a first preset condition, and displaying the at least one person image; determining at least one target image from the at least one person image according to a first instruction of the target object, and acquiring a second face image of the target object; and obtaining at least one fused image according to the at least one target image and the second face image, and displaying the fused image. Displaying the person images and the fused images enriches the functions of image processing scenes such as face check-in and improves the user experience.

Description

Face fusion method, device, equipment and storage medium
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to a method, an apparatus, a device, and a storage medium for face fusion.
Background
With the development of artificial intelligence technology, image processing is gradually applied to many scenes, for example, different face images are fused to generate a fused image. In the related art, the face images for fusion are selected randomly, so that the generated fusion image has poor quality and unnatural fusion effect.
Disclosure of Invention
The invention provides a face fusion method, a face fusion device, face fusion equipment and a storage medium, which aim to overcome the defects in the related art.
According to a first aspect of the embodiments of the present invention, there is provided a face fusion method, including:
acquiring a first face image of a target object, wherein the target object is an object in an image acquisition range;
acquiring, according to the first face image, at least one person image whose similarity to the first face image satisfies a first preset condition, and displaying the at least one person image;
determining at least one target image from the at least one person image according to a first instruction of the target object, and acquiring a second face image of the target object;
and obtaining at least one fused image according to the at least one target image and the second face image, and displaying the at least one fused image.
Screening the person images by similarity increases the correlation between the selected person images and the face of the target object, and displaying those person images enriches the functions available in a face fusion scene and improves the user experience. Further screening the displayed person images according to the first instruction of the target object increases the correlation between the determined target images and the target object's face, so the fused images are of higher quality and the fusion effect is natural; moreover, the selection of target images conforms to the target object's intention, further improving the user experience.
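The four claimed steps can be sketched in code. This is a minimal illustration, not the patented implementation: the vector feature representation, the `cosine_similarity` measure, and all function names are assumptions.

```python
import math

def cosine_similarity(a, b):
    """Toy similarity between two face-feature vectors (lists of floats)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def face_fusion_pipeline(first_face_feature, library, first_instruction,
                         second_face, fuse, threshold=0.8):
    # Step 2: keep person images whose similarity to the first face image
    # satisfies the first preset condition (modelled as similarity >= threshold).
    candidates = [img for feat, img in library
                  if cosine_similarity(first_face_feature, feat) >= threshold]
    # Step 3: the first instruction of the target object selects the
    # target images from the displayed candidates (modelled as indices).
    targets = [candidates[i] for i in first_instruction]
    # Step 4: fuse each target image with the second face image.
    return [fuse(t, second_face) for t in targets]
```

Any real system would replace the toy feature vectors with embeddings from a face recognition model and `fuse` with an actual image-fusion routine.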
In combination with any one of the embodiments provided by the present disclosure, obtaining at least one fused image according to the at least one target image and the second face image includes:
extracting a first face feature of the second face image and a second face feature of the at least one target image, and generating at least one fusion feature according to the first face feature and the second face feature;
and mapping the at least one fusion feature to a face region of the at least one target image, and/or mapping the at least one fusion feature to a face region of the second face image to obtain at least one fusion image.
By extracting, fusing, and mapping the face features, the image fusion can produce fused images in various forms, further improving their quality and making the fusion effect more natural.
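One plausible reading of the extract-fuse-map step, sketched with plain lists; the linear blend and the dict-based "face region" stand-in are assumptions, not the patent's actual model.

```python
def generate_fusion_feature(first_feature, second_feature, alpha=0.5):
    """Blend the first face feature (from the second face image) with the
    second face feature (from the target image); alpha weights the first."""
    return [alpha * a + (1 - alpha) * b
            for a, b in zip(first_feature, second_feature)]

def map_to_face_region(image, fusion_feature):
    """Write the fusion feature into the image's face region. Here an image
    is just a dict with a 'face_region' slot standing in for real pixels."""
    fused = dict(image)           # copy so the source image is untouched
    fused["face_region"] = fusion_feature
    return fused
```

Mapping the fusion feature onto the target image and/or onto the second face image is what yields the "various forms" of fused image described above.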
In combination with any one of the embodiments provided by the present disclosure, the extracting first facial features of the second facial image includes:
under the condition that a plurality of second face images are obtained, selecting second face images meeting second preset conditions from the plurality of second face images, extracting face features of the selected second face images, and taking each extracted face feature as a first face feature; or respectively extracting the face features of the second face images, and fusing the extracted face features to obtain a first face feature;
the mapping the at least one fused feature to a face region of the second face image comprises:
and under the condition of acquiring a plurality of second face images, selecting the second face images meeting a third preset condition from the plurality of second face images, and mapping each fusion feature to the face area of each selected second face image.
For one or more second face images, diverse fusion modes are provided, further improving the quality and appeal of the fused image and the user experience.
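The two claimed strategies for multiple second face images can be sketched as follows. Modelling the "second preset condition" as a quality threshold and the feature fusion as an element-wise mean are both assumptions.

```python
def first_face_features(second_faces, quality, mode="select", min_quality=0.5):
    """Derive first face feature(s) from several second face images.

    'select': keep the feature of each image meeting the second preset
    condition (modelled as a quality threshold) as its own first face feature.
    'merge':  fuse the features of all images into one first face feature
    via an element-wise mean.
    """
    if mode == "select":
        return [feat for feat, img in second_faces if quality(img) >= min_quality]
    feats = [feat for feat, _ in second_faces]
    n = len(feats)
    # element-wise mean across all extracted features
    return [[sum(col) / n for col in zip(*feats)]]
```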
In combination with any one of the embodiments provided by the present disclosure, the extracting second facial features of the at least one target image includes:
extracting the face features of each target image, and taking each extracted face feature as a second face feature; or extracting the face features of each target image, and fusing the extracted face features to obtain a second face feature;
generating at least one fused feature from the first and second facial features, comprising:
fusing each first face feature with each second face feature to obtain at least one fused feature;
the mapping the at least one fused feature to the face region of the at least one target image comprises:
and mapping each fusion feature to the face region of each target image in the at least one target image respectively.
For one or more target images, diverse fusion modes are provided, further improving the quality and appeal of the fused image and the user experience.
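Fusing "each first face feature with each second face feature" can be read as a Cartesian pairing, sketched below; the linear blend is an assumption standing in for the patent's unspecified fusion operation.

```python
def pairwise_fusion_features(first_features, second_features, alpha=0.5):
    """Fuse every first face feature with every second face feature,
    producing one fusion feature per (first, second) pair."""
    return [[alpha * a + (1 - alpha) * b for a, b in zip(f1, f2)]
            for f1 in first_features   # each first face feature
            for f2 in second_features] # paired with each second face feature
```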
In combination with any one of the embodiments provided by the present disclosure, the extracting the face features of each target image includes:
and extracting the feature of the specified area of the target image as the face feature of the target image according to a second instruction of the target object, wherein the second instruction is used for indicating the position of the specified area.
Selecting a specific part of the target image for feature extraction makes the extraction more targeted, further improving the quality and effect of the fused image and the user experience.
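A minimal sketch of extracting the specified area named by the second instruction. Representing the image as a 2-D list of pixel values and the instruction as a (top, left, height, width) tuple are assumptions.

```python
def extract_region_feature(target_image, second_instruction):
    """Use the area indicated by the second instruction as the face
    feature of the target image (e.g. just the eyes or the mouth)."""
    top, left, h, w = second_instruction
    # crop the rows, then the columns, of the specified rectangle
    return [row[left:left + w] for row in target_image[top:top + h]]
```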
In combination with any embodiment provided by the present disclosure, the acquiring at least one person image whose similarity to the first face image satisfies a first preset condition includes:
acquiring, from at least one type of image in a preset image library, at least one person image whose similarity to the first face image satisfies the first preset condition, where the image library includes at least one type of image.
Only one or more types of images in the library need to be searched, as required, reducing the candidate pool and improving selection efficiency and accuracy; in addition, classifying the images facilitates image management in the library and improves management efficiency.
In connection with any embodiment provided by the present disclosure, the image library includes at least one of the following types of images: an image of a historical visitor of a target location, a celebrity portrait, and an image of a work within the target location.
Storing at least one such type of image enriches the candidate types of person images, further increases the correlation between the selected person images and the target object's face, and improves the user experience.
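Searching only the requested image types can be sketched as below; the dict-of-types library layout, the type names, and the similarity threshold are all illustrative assumptions.

```python
def search_library(first_face_feature, library, wanted_types,
                   similarity, threshold=0.8):
    """Search only the requested image types in the preset library and
    return person images whose similarity to the first face image meets
    the first preset condition (modelled as similarity >= threshold)."""
    results = []
    for image_type, entries in library.items():
        if image_type not in wanted_types:
            continue  # skipping other types shrinks the candidate pool
        for feature, image in entries:
            if similarity(first_face_feature, feature) >= threshold:
                results.append((image_type, image))
    return results
```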
In combination with any embodiment provided by the present disclosure, the displaying the at least one person image includes:
displaying the person image together with its similarity to the first face image; and/or
displaying the person image together with its corresponding identification information.
Displaying the similarity data and the identification information alongside the person image lets the target user intuitively perceive how similar each person image is to themselves, making the feature more engaging and visual and further improving the user experience.
In combination with any one of the embodiments provided by the present disclosure, the acquiring a second face image of a target object includes:
determining the first face image as the second face image; and/or
performing at least one of the following processes on the first face image and determining the processed image as the second face image, the processing including cropping, rendering, scaling, rotating, and sharpness adjustment.
The second face image can thus be obtained without capturing a new image of the target object, and processing the first face image in various ways yields second face images in various styles.
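A toy sketch of deriving a second face image by processing the first face image. The image is a 2-D list of grayscale values; only crop, scale, and rotate are shown (rendering and sharpness adjustment are omitted), and the operation names are assumptions.

```python
def derive_second_face(first_face, ops):
    """Apply the listed processing steps to the first face image and
    return the result as a second face image."""
    image = [list(row) for row in first_face]  # work on a copy
    for op in ops:
        if op == "crop":          # drop a one-pixel border
            image = [row[1:-1] for row in image[1:-1]]
        elif op == "scale2x":     # nearest-neighbour 2x upscale
            image = [[p for p in row for _ in (0, 1)]
                     for row in image for _ in (0, 1)]
        elif op == "rotate90":    # rotate clockwise by 90 degrees
            image = [list(r) for r in zip(*image[::-1])]
    return image
```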
In combination with any one of the embodiments provided by the present disclosure, the acquiring a second face image of a target object includes:
under the condition that the target object enters the image acquisition range, acquiring a third face image of the target object, and generating first prompt information according to the third face image and the target image, wherein the first prompt information is used for prompting the target object to adjust a face angle;
and in response to the face angle of the target object satisfying a fourth preset condition, acquiring a second face image of the target object at at least one angle.
Through the first prompt information, the target object can be prompted to adjust the face angle, so that a second face image meeting the requirements is obtained, the angle accuracy of the second face image is further improved, and the quality of a subsequent fusion image is further improved.
In combination with any one of the embodiments provided by the present disclosure, the generating first prompt information according to the third face image and the target image includes:
extracting the face key points of the third face image and the target key points of the target image;
determining the actual angle between the face in the third face image and the face in the target image according to the face key points and the target key points;
and determining the first prompt information according to the actual angle and a target angle, where the target angle is an angle satisfying the fourth preset condition.
By acquiring and comparing the key points, the actual angle between the face in the third face image and the face in the target image can be determined accurately, so the first prompt information generated from that angle is accurate and timely, improving the acquisition efficiency and quality of the second face image.
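The angle estimation and prompt generation can be sketched with a crude three-point yaw model. This heuristic (nose offset from the eye midpoint) and the prompt strings are stand-ins for whatever angle model and prompt format the patent actually uses.

```python
import math

def face_yaw_from_keypoints(left_eye, right_eye, nose):
    """Rough yaw (degrees) from three 2-D key points: the nose's horizontal
    offset from the eye midpoint, normalized by inter-eye distance."""
    mid_x = (left_eye[0] + right_eye[0]) / 2
    eye_dist = abs(right_eye[0] - left_eye[0])
    offset = (nose[0] - mid_x) / eye_dist if eye_dist else 0.0
    return math.degrees(math.atan(offset))

def first_prompt(actual_angle, target_angle, tolerance=5.0):
    """Turn the actual/target angle gap into the first prompt information."""
    delta = target_angle - actual_angle
    if abs(delta) <= tolerance:
        return "hold still"          # fourth preset condition satisfied
    return "turn left" if delta > 0 else "turn right"
```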
In combination with any embodiment provided by the present disclosure, the acquiring a second face image of at least one angle of the target object includes:
acquiring a second face image of the target object at a first angle, where the first angle matches the orientation of the image acquisition device; and/or
acquiring a second face image of the target object at at least one second angle, where the second angle satisfies a fifth preset condition with respect to the angle of the face in the target image.
Acquiring second face images at various angles makes the captured images rich and diverse, increases the versatility of the function, makes subsequent fusion results more varied and interesting, avoids the unnatural fusion caused by an excessive deviation in face angle, and further improves the user experience.
In connection with any embodiment provided by the present disclosure, further comprising:
acquiring a permission verification result of the target object;
and acquiring the first face image of the target object when the permission verification result indicates that the target object is an authorized user.
The permission verification result prevents unauthorized users from performing face fusion with this method, improving its operational security.
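The permission gate can be sketched as a guard before image capture; the callables stand in for the real verification and acquisition steps, which the patent does not specify.

```python
def acquire_first_face_image(target_object, verify_permission, capture):
    """Capture the first face image only after the permission verification
    result shows the target object is an authorized user."""
    if not verify_permission(target_object):
        raise PermissionError("target object is not an authorized user")
    return capture(target_object)
```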
According to a second aspect of the embodiments of the present invention, there is provided a face fusion apparatus, including:
the system comprises an acquisition module, a processing module and a display module, wherein the acquisition module is used for acquiring a first face image of a target object, and the target object is an object in an image acquisition range;
the display module is used for acquiring, according to the first face image, at least one person image whose similarity to the first face image satisfies a first preset condition, and displaying the at least one person image;
the instruction module is used for determining at least one target image from the at least one person image according to a first instruction of the target object and acquiring a second face image of the target object;
and the fusion module is used for obtaining at least one fusion image according to the at least one target image and the second face image and displaying the at least one fusion image.
As with the method above, screening the person images by similarity and then by the first instruction of the target object increases the correlation between the determined target images and the target object's face, improving the quality and naturalness of the fused images and the user experience.
In combination with any one of the embodiments provided by the present disclosure, the fusion module is specifically configured to:
extracting a first face feature of the second face image and a second face feature of the at least one target image, and generating at least one fusion feature according to the first face feature and the second face feature;
and mapping the at least one fusion feature to a face region of the at least one target image, and/or mapping the at least one fusion feature to a face region of the second face image to obtain at least one fusion image.
By extracting, fusing, and mapping the face features, the image fusion can produce fused images in various forms, further improving their quality and making the fusion effect more natural.
In combination with any embodiment provided by the present disclosure, when the fusion module is configured to extract the first facial feature of the second facial image, the fusion module is specifically configured to:
under the condition that a plurality of second face images are obtained, selecting second face images meeting second preset conditions from the plurality of second face images, extracting face features of the selected second face images, and taking each extracted face feature as a first face feature; or respectively extracting the face features of the second face images, and fusing the extracted face features to obtain a first face feature;
the fusion module is configured to, when mapping the at least one fusion feature to a face region of the second face image, specifically:
and under the condition of acquiring a plurality of second face images, selecting the second face images meeting a third preset condition from the plurality of second face images, and mapping each fusion feature to the face area of each selected second face image.
For one or more second face images, diverse fusion modes are provided, further improving the quality and appeal of the fused image and the user experience.
In combination with any embodiment provided by the present disclosure, when the fusion module is configured to extract the second facial feature of the at least one target image, the fusion module is specifically configured to:
extracting the face features of each target image, and taking each extracted face feature as a second face feature; or extracting the face features of each target image, and fusing the extracted face features to obtain a second face feature;
the fusion module is configured to, when generating at least one fusion feature according to the first facial feature and the second facial feature, specifically:
fusing each first face feature with each second face feature to obtain at least one fused feature;
the fusion module is configured to, when mapping the at least one fusion feature to the face region of the at least one target image, specifically:
and mapping each fusion feature to the face region of each target image in the at least one target image respectively.
For one or more target images, diverse fusion modes are provided, further improving the quality and appeal of the fused image and the user experience.
In combination with any embodiment provided by the present disclosure, when the fusion module is used to extract the face features of each target image, the fusion module is specifically configured to:
and extracting the feature of the specified area of the target image as the face feature of the target image according to a second instruction of the target object, wherein the second instruction is used for indicating the position of the specified area.
Selecting a specific part of the target image for feature extraction makes the extraction more targeted, further improving the quality and effect of the fused image and the user experience.
In combination with any embodiment provided by the present disclosure, when the display module is configured to acquire at least one person image whose similarity to the first face image satisfies a first preset condition, the display module is specifically configured to:
acquire, from at least one type of image in a preset image library, at least one person image whose similarity to the first face image satisfies the first preset condition, where the image library includes at least one type of image.
Only one or more types of images in the library need to be searched, as required, reducing the candidate pool and improving selection efficiency and accuracy; in addition, classifying the images facilitates image management in the library and improves management efficiency.
In connection with any embodiment provided by the present disclosure, the image library includes at least one of the following types of images: an image of a historical visitor of a target location, a celebrity portrait, and an image of a work within the target location.
Storing at least one such type of image enriches the candidate types of person images, further increases the correlation between the selected person images and the target object's face, and improves the user experience.
In combination with any embodiment provided by the present disclosure, when the display module is configured to display the at least one person image, the display module is specifically configured to:
display the person image together with its similarity to the first face image; and/or
display the person image together with its corresponding identification information.
Displaying the similarity data and the identification information alongside the person image lets the target user intuitively perceive how similar each person image is to themselves, making the feature more engaging and visual and further improving the user experience.
In combination with any embodiment provided by the present disclosure, when the instruction module is used to acquire the second face image of the target object, the instruction module is specifically configured to:
determine the first face image as the second face image; and/or
perform at least one of the following processes on the first face image and determine the processed image as the second face image, the processing including cropping, rendering, scaling, rotating, and sharpness adjustment.
The second face image can thus be obtained without capturing a new image of the target object, and processing the first face image in various ways yields second face images in various styles.
In combination with any embodiment provided by the present disclosure, when the instruction module is used to acquire the second face image of the target object, the instruction module is specifically configured to:
under the condition that the target object enters the image acquisition range, acquiring a third face image of the target object, and generating first prompt information according to the third face image and the target image, wherein the first prompt information is used for prompting the target object to adjust a face angle;
and in response to the face angle of the target object satisfying a fourth preset condition, acquire a second face image of the target object at at least one angle.
Through the first prompt information, the target object can be prompted to adjust the face angle, so that a second face image meeting the requirements is obtained, the angle accuracy of the second face image is further improved, and the quality of a subsequent fusion image is further improved.
In combination with any embodiment provided by the present disclosure, the instruction module, when generating the first prompt information according to the third face image and the target image, is specifically configured to:
extracting the face key points of the third face image and the target key points of the target image;
determining the actual angle between the face in the third face image and the face in the target image according to the face key points and the target key points;
and determine the first prompt information according to the actual angle and a target angle, where the target angle is an angle satisfying the fourth preset condition.
By acquiring and comparing the key points, the actual angle between the face in the third face image and the face in the target image can be determined accurately, so the first prompt information generated from that angle is accurate and timely, improving the acquisition efficiency and quality of the second face image.
In combination with any embodiment provided by the present disclosure, when the instruction module is configured to acquire the second face image of the target object at least at one angle, the instruction module is specifically configured to:
acquire a second face image of the target object at a first angle, where the first angle matches the orientation of the image acquisition device; and/or
acquire a second face image of the target object at at least one second angle, where the second angle satisfies a fifth preset condition with respect to the angle of the face in the target image.
Acquiring second face images at various angles makes the captured images rich and diverse, increases the versatility of the function, makes subsequent fusion results more varied and interesting, avoids the unnatural fusion caused by an excessive deviation in face angle, and further improves the user experience.
In combination with any one of the embodiments provided by the present disclosure, the face fusion apparatus further includes an authority verification module, configured to:
acquiring a permission verification result of the target object;
and acquire the first face image of the target object when the permission verification result indicates that the target object is an authorized user.
The permission verification result prevents unauthorized users from performing face fusion with this apparatus, improving its operational security.
According to a third aspect of embodiments of the present invention, there is provided an electronic device, the device comprising a memory for storing computer instructions executable on a processor, the processor being configured to implement the method of the first aspect when executing the computer instructions.
The beneficial effects of the electronic device correspond to those of the method of the first aspect: similarity-based and instruction-based screening yields target images closely correlated with the target object's face, improving the quality and naturalness of the fused images and the user experience.
According to a fourth aspect of embodiments of the present invention, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the method of the first aspect.
The beneficial effects of the storage medium likewise correspond to those of the method of the first aspect: similarity-based and instruction-based screening yields target images closely correlated with the target object's face, improving the quality and naturalness of the fused images and the user experience.
According to the above embodiments, a first face image of a target object is acquired; at least one person image whose similarity to the first face image satisfies the first preset condition is then acquired and displayed; at least one target image is determined from the person images according to a first instruction of the target object, and a second face image of the target object is acquired; finally, at least one fused image is obtained from the determined target images and the second face image and displayed. Screening the person images by similarity increases the correlation between the selected person images and the target object's face, and displaying them enriches the functions of a face fusion scene and improves the user experience; the further screening by the first instruction increases the correlation between the determined target images and the target object's face, so the fused images are of high quality, the fusion effect is natural, the selection of target images conforms to the target object's intention, and the user experience is further improved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention, as claimed.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
Fig. 1 is a flowchart illustrating a face fusion method according to an embodiment of the present invention;
FIG. 2 is a diagram illustrating the effect of face fusion according to an embodiment of the present invention;
FIG. 3 is a schematic diagram illustrating the display of a person image according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of a face fusion process according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of a face fusion apparatus according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all embodiments consistent with the present invention; rather, they are merely examples of apparatus and methods consistent with certain aspects of the invention, as detailed in the appended claims.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in this specification and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It is to be understood that although the terms first, second, third, etc. may be used herein to describe various information, these information should not be limited to these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the present invention. The word "if" as used herein may be interpreted as "upon", "when", or "in response to determining", depending on the context.
With the development of artificial intelligence technology, image processing is gradually applied to many scenes, for example, different face images are fused to generate a fused image. In the related art, the face images for fusion are selected randomly, so that the generated fusion image has poor quality and unnatural fusion effect.
Based on this, in a first aspect, at least one embodiment of the present invention provides a face fusion method, please refer to fig. 1, which shows a flow of the method, including steps S101 to S104.
The face fusion method can be applied to a face image processing scene, that is, while a face image is being processed, or after the processing is finished, the face image is fused with another image. For example, in the face check-in scenes that often appear at activities such as travel exhibitions and painting exhibitions, a user may perform face fusion after completing check-in through face recognition. In a face image processing scene such as face check-in, an image acquisition device for acquiring a face image and a display device for displaying it are provided, such as a camera and a display screen. The image acquisition device has a preset acquisition range, such as the field of view of the camera; when an object enters this range, the image acquisition device can acquire an image of the object and display the acquired image on the display device for the object and/or a worker to check, so that the object and/or the worker can respond based on the displayed content. It is to be understood that the above examples do not limit the application scenarios of the face fusion method; scenarios such as film production, interactive promotion, and personalized image customization may also apply the face fusion method.
In addition, the method may be performed by an electronic device such as a terminal device or a server, where the terminal device may be a User Equipment (UE), a mobile device, a User terminal, a cellular phone, a cordless phone, a Personal Digital Assistant (PDA) handheld device, a computing device, a vehicle-mounted device, a wearable device, or the like, and the method may be implemented by a processor calling computer readable instructions stored in a memory. Alternatively, the method may be performed by a server, which may be a local server, a cloud server, or the like.
In step S101, a first face image of a target object is acquired, where the target object is an object within an image capture range.
The first face image may be the image already used for face image processing, or may be an image separately acquired by the image acquisition device in this step. That is, after the image acquisition device acquires an image to be processed for image processing, that image may be used directly in this step as the first face image, or a new image may be acquired as the first face image. For example, in the face check-in scenes that often appear at activities such as travel exhibitions and painting exhibitions, the user's check-in image is captured during face-recognition check-in; in this step, the check-in image may be used directly as the first face image, or an image of the target object may be acquired again.
When multiple objects are within the acquisition range, the image acquisition device may acquire depth information of each object and take the object with the smallest depth as the target object, and then acquire the first face image of that target object; the smallest depth corresponds to the object closest to the image acquisition device, for example, closest to the camera.
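As an illustrative sketch only (not the patent's claimed implementation; function and data names are hypothetical), the depth-based selection above amounts to taking the minimum over per-object depth readings:

```python
# Hypothetical sketch: choose the detected object nearest to the camera
# (smallest depth reading) as the target object for face capture.
def select_target_object(detections):
    """detections: list of (object_id, depth_mm) pairs reported by the
    image acquisition device; returns the id of the nearest object."""
    if not detections:
        return None
    return min(detections, key=lambda d: d[1])[0]

nearest = select_target_object([("a", 1200), ("b", 850), ("c", 2100)])
print(nearest)  # b
```

With no detections the sketch returns `None`, leaving it to the caller to decide whether to retry acquisition.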
In addition, the authority of the target object can be verified, and the step is started according to the authority, that is, the authority verification result of the target object can be obtained firstly; and when the authority verification result of the target object indicates that the target object is a legal user, acquiring the first face image of the target object. For example, in a scene of human face check-in frequently occurring in activities such as a travel exhibition, a painting exhibition and the like, the check-in result of the target object may be obtained first, and when the check-in is successful, the first human face image of the target object is obtained, that is, when the check-in of the target object is unsuccessful, the step is not executed.
In step S102, at least one person image whose similarity to the first face image satisfies a first preset condition is acquired according to the first face image of the target object, and the at least one person image is displayed.
The similarity between each image to be selected and the first face image can be determined first; the images to be selected whose similarity to the first face image satisfies the first preset condition are then determined as person images.
The similarity of face images represents how alike the faces in the two images are; that is, the higher the similarity, the more similar the faces in the two images. Therefore, the person images selected with similarity as the selection criterion can be displayed on the display device; because these person images closely resemble the face of the target object, they easily attract the attention and interest of the target object, improve the user experience, and make the target object more likely to take further action (that is, to input the first instruction) out of interest.
Feature information of the first face image may be extracted, feature information of at least one image in an image library may be extracted, and the two compared to determine the similarity between the first face image and each image. The first preset condition may be a threshold: when the similarity is higher than or equal to the threshold, the condition is satisfied; when the similarity is below the threshold, it is not. The first preset condition may also be a highest-similarity criterion, that is, the image with the highest similarity to the first face image is the image satisfying the condition. For example, when a person image needs to be selected from a certain type of images in the image library, the similarity between each image of that type and the first face image may be determined in the above manner, and the image with the highest similarity determined as the person image satisfying the first preset condition.
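The patent does not fix a similarity metric, so the following sketch assumes cosine similarity over feature vectors purely for illustration; both variants of the first preset condition (threshold and highest-similarity) are shown, and all names are hypothetical:

```python
import math

def cosine_similarity(f1, f2):
    """Similarity of two face-feature vectors (higher = more alike)."""
    dot = sum(a * b for a, b in zip(f1, f2))
    norm = math.sqrt(sum(a * a for a in f1)) * math.sqrt(sum(b * b for b in f2))
    return dot / norm

def select_person_images(query_feature, gallery, threshold=None):
    """gallery: dict image_id -> feature vector.  With a threshold, the
    first preset condition keeps every image at or above it; without one,
    it keeps only the single most similar image."""
    sims = {i: cosine_similarity(query_feature, f) for i, f in gallery.items()}
    if threshold is not None:
        return [i for i, s in sims.items() if s >= threshold]
    return [max(sims, key=sims.get)]
```

In practice the feature vectors would come from a face recognition model; here they stand in as plain lists of floats.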
In the face check-in scenes that often appear at activities such as travel exhibitions and painting exhibitions, the display device synchronously displays the activity scene when the target object checks in; after this step of the method is executed, the display device can display the screened person images, making an otherwise monotonous check-in scene more interesting and improving the check-in experience of the user.
In step S103, at least one target image is determined from the at least one person image according to the first instruction of the target object, and a second face image of the target object is acquired.
The first instruction may be used to indicate that the user agrees to perform image fusion and to indicate the person image the user selects for fusion, that is, the target image selected by the user. The interface displaying the person images may have a key for starting image fusion, and each person image may also have a key for selecting it as a target image; these keys may be physical keys of the display device or virtual keys in the interface. The user's triggering of the selection key of at least one person image together with the key for starting image fusion constitutes the first instruction. In addition, the interface for the person images may carry guidance information for guiding the target object to perform image fusion, such as text and/or voice prompts like "Want to see what you and he/she look like after fusion?", or a schematic animation of image fusion, in which two images gradually approach each other and finally become a completely new image.
The second face image and the first face image are both images for the target object, and the second face image and the first face image may be the same or different. In addition, one or more second face images can be acquired.
In step S104, at least one fused image is obtained according to the at least one target image and the second face image, and the at least one fused image is displayed.
When a target image is determined, directly fusing the target image and a second face image; when a plurality of target images are determined, each target image can be respectively fused with the second face image to obtain a plurality of fused images, and the plurality of target images and the second face image can be fused to obtain one fused image.
Moreover, when a second face image is obtained, the second face image and a target image to be fused can be directly fused; when a plurality of second face images are obtained, each second face image can be respectively fused with a target image to be fused, and the plurality of second face images and the target image to be fused can also be fused.
In addition, the interface displaying the fused image may carry explanatory information for explaining the fusion result, such as text and/or voice prompts like "See what you and he/she look like after fusion!".
In activities such as travel exhibitions and painting exhibitions, fusing the user's face with images of works in the target place or with celebrity portraits turns the user into a portrait, increases the fun of interacting with the exhibits, conveys a strong sense of technology to the user, and improves the user experience.
In the embodiment of the disclosure, by acquiring a first face image of a target object, at least one person image whose similarity to the first face image satisfies a first preset condition may be acquired and displayed; then, according to a first instruction of the target object, at least one target image is determined from the at least one person image, and a second face image of the target object is acquired; finally, at least one fused image is obtained and displayed according to the determined at least one target image and the acquired second face image. Screening the person images by similarity increases the correlation between the selected person images and the face of the target object, and displaying them enriches the functions of the face fusion scene and improves the user experience. Further screening among the displayed person images according to the first instruction of the target object further increases the correlation between the determined target images and the face of the target object, which improves the quality of the fused images, makes the fusion effect natural, ensures that the target images are selected according to the intention of the target object, and further improves the user experience.
For example, when the face fusion method provided by this embodiment is used in the face check-in scenes that often appear at activities such as travel exhibitions and painting exhibitions, the correlation between the face image of the target object and the image to be fused (for example, the target image) can be increased, improving the quality of the fused image and making the fusion effect natural; moreover, the functions of face image processing scenes such as face check-in can be enriched, improving the user experience.
In some embodiments of the present disclosure, at least one fused image may be obtained according to the at least one target image and the second face image in the following manner: firstly, extracting a first face feature of the second face image and a second face feature of the at least one target image, and generating at least one fusion feature according to the first face feature and the second face feature; then, the at least one fusion feature is mapped to the face region of the at least one target image, and/or the at least one fusion feature is mapped to the face region of the second face image, so as to obtain at least one fusion image.
The face region of the target image comprises the region where the original face is located in the target image, and the face region of the second face image comprises the region where the original face is located in the second face image. The first face feature and the second face feature may be weighted and summed to obtain a fusion feature, that is, the fusion feature contains a certain component of the first face feature and a certain component of the second face feature. Of course, the fusion feature may also be the first face feature alone (that is, the first face feature has a weight of 1 and the second face feature a weight of 0) or the second face feature alone (the second face feature has a weight of 1 and the first face feature a weight of 0). When the fusion feature is the first face feature, the subsequent mapping step may map it to the face region of the at least one target image, because mapping it back to the face region of the second face image would simply reproduce the second face image; when the fusion feature is the second face feature, the mapping may be to the face region of the second face image, because mapping it back to the target image would simply reproduce the target image.
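The weighted summation described above can be sketched as follows; this is only an illustration of the arithmetic on feature vectors, with hypothetical names, not the patent's implementation:

```python
def fuse_features(first_feature, second_feature, w=0.5):
    """Weighted sum of the first and second face features; w is the
    weight of the first feature.  w=1.0 reproduces the first face
    feature exactly and w=0.0 the second, matching the special cases
    described above."""
    return [w * a + (1.0 - w) * b for a, b in zip(first_feature, second_feature)]
```

For example, `fuse_features([2.0, 4.0], [0.0, 0.0], w=0.5)` yields `[1.0, 2.0]`, an even blend of the two features.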
When the second face image is acquired, one or multiple second face images may be acquired. When one second face image is fused with the at least one target image, extracting the first face feature of the second face image means directly extracting the first face feature of that image, and mapping the at least one fusion feature to the face region of the second face image means directly mapping each fusion feature to the face region of that image. When multiple second face images are fused with the at least one target image, extracting the first face features may mean selecting, from the multiple second face images, those satisfying a second preset condition, extracting the face features of the selected images, and taking each extracted face feature as a first face feature. The second preset condition may be a condition on the angle of the face in the second face image, that is, the second face images satisfying the second preset condition may be determined according to the angle of the face; one or more second face images may be selected, and thus one or more first face features extracted. Alternatively, when multiple second face images are fused with the at least one target image, the face features of the second face images may each be extracted and then fused with one another to obtain a single first face feature.
When multiple second face images are fused with the at least one target image, mapping the at least one fusion feature to the face region of the second face image may mean selecting, from the multiple second face images, those satisfying a third preset condition, and then mapping each fusion feature to the face region of each selected second face image, that is, performing the mapping for every pair of a fusion feature and a selected second face image. The third preset condition may be a condition on the angle of the face in the second face image, that is, the second face images satisfying the third preset condition may be determined according to the angle of the face; one or more second face images may be selected.
The at least one target image may likewise be one or multiple images. When one target image is fused with one second face image, extracting the second face feature of the at least one target image means directly extracting the second face feature of that target image, and mapping the at least one fusion feature to the at least one target image means directly mapping each fusion feature to that target image. When multiple target images are fused with one second face image, extracting the second face features may mean extracting the face feature of each target image and taking each extracted feature as a second face feature; alternatively, the face features of the target images may each be extracted and then fused with one another to obtain a single second face feature. When multiple target images are fused with one second face image, mapping the at least one fusion feature to the face regions of the at least one target image may mean mapping each fusion feature to the face region of each target image, that is, taking out the fusion features in turn and, after each is taken out, mapping it to the face region of every target image.
When at least one fusion feature is generated according to the first face feature and the second face feature, each first face feature may be fused with each second face feature to obtain the at least one fusion feature. When there are multiple first face features and one second face feature, every first face feature is fused with that second face feature, each fusion yielding one fusion feature. When there is one first face feature and multiple second face features, every second face feature is fused with that first face feature, each fusion yielding one fusion feature. When both the first and second face features are multiple, the first face features are taken out in turn, and after each is taken out, it is fused with every second face feature, each fusion yielding one fusion feature.
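The pairwise enumeration above is a Cartesian product over the two feature lists. A minimal self-contained sketch (hypothetical names; the inner weighted sum is one possible fusion, as earlier in this section):

```python
from itertools import product

def fuse_pairwise(first_features, second_features, w=0.5):
    """One fusion feature per (first, second) pair: each first face
    feature is taken in turn and fused with every second face feature,
    as the enumeration above describes."""
    def fuse(f1, f2):
        return [w * a + (1.0 - w) * b for a, b in zip(f1, f2)]
    return [fuse(f1, f2) for f1, f2 in product(first_features, second_features)]
```

With m first features and n second features this yields m × n fusion features, covering all three cases (m=1, n=1, or both greater than 1) uniformly.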
When the face feature of each target image is extracted, the feature of a designated region of the target image may be extracted according to a second instruction of the target object and taken as the face feature of the target image, the second instruction indicating the position of the designated region.
The specified region may be a specific part of a human face, for example, one of five sense organs, that is, extracting the feature of the specified region is to extract the feature of the specific part of the human face, for example, extracting the feature of a nose, extracting the feature of an eye, and the like. By selecting the specific part of the target image for feature extraction, the pertinence of feature extraction can be improved, the quality and effect of the fused image are further improved, and the use experience of a user is further improved.
Therefore, when the target image and the second face image are fused, a specific part of the face in the target image can be fused. Further, when multiple target images are fused with the second face image and the second face feature is fused from the face features of the multiple target images, different face parts can be selected from different target images for fusion. For example, when a first target image, a second target image, and the second face image are fused, the nose in the first target image and the eyes in the second target image can be selected: the first face feature is extracted from the second face image, the feature of the nose is extracted from the first target image, the feature of the eyes is extracted from the second target image, the nose feature and the eye feature are fused to obtain the second face feature, the first face feature and the second face feature are fused to obtain the fusion feature, and finally the fusion feature is mapped to the face region of at least one of the first target image, the second target image, and the second face image to obtain at least one fused image.
Referring to fig. 2, an exemplary face fusion effect diagram is shown, in which a target image 201 and a second face image 202 are fused to obtain a fused image 203. The fused image 203 is formed by extracting the first face feature of the second face image 202 and the second face feature of the target image 201, performing a weighted summation with the weight of the first face feature being 1 and the weight of the second face feature being 0 to obtain the fusion feature, and mapping the fusion feature into the face region of the target image 201.
In the embodiment, the image fusion is completed through the operations of extracting, fusing and mapping the human face features, and fusion images in various forms can be formed; and aiming at one or more second face images and one or more target images, different diversified fusion modes are respectively provided, so that the interestingness of fusing the images is further improved, and the use experience of a user is improved.
In some embodiments of the present disclosure, at least one person image whose similarity to the first face image satisfies the first preset condition may be acquired as follows: the at least one person image whose similarity to the first face image satisfies the first preset condition is acquired from at least one type of image in a preset image library, wherein the image library comprises at least one type of image.
The images in the image library are classified in advance, so that when person images are selected, they can be selected from different types of images as needed, improving the efficiency and accuracy of selection. For example, when only one or several types (fewer than all types) of images in the image library are needed, selection can be performed only within those types, reducing the selection base and improving efficiency and accuracy. In addition, classifying the images facilitates the management of the images in the image library and improves management efficiency.
The image acquisition device acquires images of visiting objects in a target place, and the target object may be a visiting object whose image is acquired by the device, for example when the device is used for face check-in at scenes such as travel exhibitions and painting exhibitions. Thus, the image library may include at least one of the following types of images: images of historical visiting objects of the target place, celebrity portraits, and images of works in the target place. An image of a historical visiting object may be an image of a visiting object acquired by the image acquisition device within a preset time, for example, images of other visiting objects acquired before the first face image of the target object; in a face check-in scene at a travel exhibition or painting exhibition, the historical visiting objects are all visitors who checked in before the target object. The image of a historical visiting object may be bound with identification information of that object, for example, the image of a historical check-in person may be bound with the identity information of that person. An image of a work in the target place may be an electronic image corresponding to a figure painting in the exhibition, or an electronic image of a figure related to the works, theme, or content of the exhibition. A celebrity portrait may be an image of a person with a certain degree of fame, for example, an electronic image corresponding to the likeness of a historical celebrity (such as the imperial concubine Yang or Emperor Xuanzong of Tang), or an electronic image corresponding to the likeness of a star of any country (for which the portrait use permission of the person concerned must be obtained).
In one example, a person image whose similarity to the first face image satisfies the first preset condition may be selected from each type of image respectively (for example, the image with the highest similarity to the first face image may be selected from each type as the person image). In the face check-in scenes that often appear at activities such as travel exhibitions and painting exhibitions, one person image whose similarity to the first face image satisfies the first preset condition may be selected from the images of historical visiting objects, one from the images of works in the target place, and one from the celebrity portraits.
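Picking the most similar image from each type separately can be sketched as below; the library layout and names are hypothetical, and the similarity function is passed in rather than fixed, since the patent does not prescribe a metric:

```python
def pick_one_per_type(query_feature, image_library, similarity):
    """image_library: dict type_name -> list of (image_id, feature).
    From each type (e.g. historical visitors, works, celebrity
    portraits), keep the image most similar to the query feature --
    one realisation of the first preset condition."""
    return {
        type_name: max(images, key=lambda item: similarity(query_feature, item[1]))[0]
        for type_name, images in image_library.items()
    }
```

Restricting the `max` to one type at a time is what keeps the selection base small when only some types are needed, as the preceding paragraphs note.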
In addition, various types of images can be entered into the image library in advance.
Based on the above manner of selecting person images with similarity as the selection criterion, in some embodiments of the present disclosure the at least one person image is displayed as follows: the person image is displayed on a preset display device together with its similarity to the first face image.
That is to say, while the person images with high similarity to the target object are displayed, the similarity data between each person image and the target object is displayed as well, so that the target user not only perceives the resemblance but also knows exactly how similar the person images are, which is more interesting and intuitive and further improves the user experience. For example, in a face check-in scene, when checking in the user can see the stranger in the same space who most resembles them, giving the novel feeling of having a look-alike in the exhibition hall, and can also see the image of a work in the target place or the celebrity portrait that resembles them most.
In addition to the similarity between the person image and the first face image, identification information corresponding to the person image may be further displayed, that is, identification information may be generated from the person image. For example, in the face check-in scenes that often appear at activities such as travel exhibitions and painting exhibitions, when the person image is an image of a historical visiting object, the identification information of that object may be further displayed on the preset display device, with descriptive information in text or voice form such as "In this activity, the similarity between Zhang San and you reaches 70%. Interested in getting to know him/her?", where Zhang San is the identification information of the historical visiting object. When the person image is an image of a work in the target place, the identification information of the work may be further displayed, with descriptive information such as "In this activity, the similarity between the old man in 'Old Man in the Field' and you reaches 65%. Interested in visiting the work?", where "Old Man in the Field" is the identification information of the work. When the person image is a celebrity portrait, the identification information of the celebrity may be further displayed, for example, "The star Zhao X has a similarity of 75% with you; you may try appearing as the star.", where Zhao X is the identification information of the celebrity portrait.
Referring to fig. 3, which schematically shows a display page for person images. As can be seen from fig. 3, the display page includes the first face image of the target object and its identification information, namely "Zhang San" and the corresponding image (shown above "Zhang San" in the figure). The display page further includes an image of a historical visiting object together with its similarity to the first face image, identification information and description information: the identification information "Wang X Yi" and the corresponding image (shown above "Wang X Yi" in the figure), the similarity (namely "similarity: 70%"), and the description information "In a vast sea of people, someone in the same place looks so much like you. Why not get to know him?". The display page also includes a celebrity portrait with its similarity to the first face image, identification information and description information: the identification information "The one most similar to you in the exhibition hall is: imperial concubine Yang" and the corresponding image (shown above the identification information in the figure), the similarity (namely "similarity: 60%"), and the description information "So alike, yet with no blood relationship". Meanwhile, the display page further includes a button for starting face fusion, namely "click me to change face", which indicates face fusion with the celebrity portrait (namely the portrait of imperial concubine Yang).
In some embodiments of the present disclosure, the second face image of the target object may be acquired in at least one of the following two ways.
In the first mode, the first face image is determined as the second face image; and/or at least one of the following processes is performed on the first face image and the processed image is determined as the second face image, wherein the at least one process comprises cropping, rendering, zooming, rotating and sharpness adjustment.
That is to say, in this step, the image acquisition device is not used to capture a new image of the target object; instead, the first face image acquired in step S101 is processed to obtain the second face image. The specific processing mode may be preset or selected in real time by the target object.
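As an illustration only (the patent does not prescribe an implementation), the first-mode processing can be sketched in Python. A plain 2-D list of pixel values stands in for the image, and `crop_box`, `factor`, and `quarter_turns` are hypothetical parameter names; a production system would use a real imaging library and also cover rendering and sharpness adjustment:

```python
def crop(img, left, top, right, bottom):
    """Keep only the given box (pixel grid as a list of rows)."""
    return [row[left:right] for row in img[top:bottom]]

def scale_nn(img, factor):
    """Nearest-neighbour zoom by `factor`."""
    h, w = len(img), len(img[0])
    nh, nw = max(1, int(h * factor)), max(1, int(w * factor))
    return [[img[min(h - 1, int(r / factor))][min(w - 1, int(c / factor))]
             for c in range(nw)] for r in range(nh)]

def rotate90(img):
    """Rotate the image 90 degrees clockwise."""
    return [list(row) for row in zip(*img[::-1])]

def derive_second_face_image(img, *, crop_box=None, factor=None, quarter_turns=0):
    """Apply any subset of crop / zoom / rotate to the first face image
    and return the result as the second face image."""
    if crop_box is not None:
        img = crop(img, *crop_box)
    if factor is not None:
        img = scale_nn(img, factor)
    for _ in range(quarter_turns % 4):
        img = rotate90(img)
    return img
```

Because every step is optional, the same helper covers both branches of the first mode: with no arguments it returns the first face image unchanged, and with arguments it returns a processed copy.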
In the second mode, firstly, when the target object enters the image acquisition range, a third face image of the target object is acquired, and first prompt information is generated according to the third face image and the target image, the first prompt information being used for prompting the target object to adjust the face angle; then, in response to the face angle of the target object meeting a fourth preset condition, a second face image of at least one angle of the target object is acquired.
Second prompt information may be generated before the target object enters the image acquisition range, and is used for prompting the target object to enter the image acquisition range. Both the first prompt information and the second prompt information include at least one of: a text prompt message, a voice prompt message, and an animation prompt message; that is, each may take any one of these forms or any combination of them.
The second prompt information lets the target object know that image acquisition needs to be carried out again, so that the target object enters the image acquisition range under guidance. The third face image may be a real-time image of the target object: after the target object enters the image acquisition range, the image acquisition device captures the target object's face in real time to generate the third face image. The third face image is therefore not a single image but a general term for the images captured over a period of time.
The first prompt information is used for guiding the target object to the angle required by image acquisition, that is, the angle required for photographing. The first prompt information may be generated according to the third face image and the target image as follows: first, the face key points of the third face image and the target key points of the target image are extracted; then, the actual angle between the face in the third face image and the face in the target image is determined according to the face key points and the target key points; finally, the first prompt information is determined according to the actual angle and a target angle, where the target angle is an angle meeting the fourth preset condition. By acquiring and comparing the key points, the angle between the face in the third face image and the face in the target image can be determined accurately, so that the first prompt information can be generated according to the angle requirement in the fourth preset condition. In addition, the angle of the face in the third face image relative to the image acquisition device can be determined from the face key points alone, and the first prompt information can likewise be generated according to the angle requirement in the fourth preset condition.
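A minimal sketch of the key-point-based angle estimation and prompt generation just described, under the simplifying assumption that the horizontal face angle (yaw) can be approximated from three 2-D landmarks (the two eyes and the nose tip); the landmark choice, the tolerance `tol`, and the left/right wording are all illustrative, not taken from the patent:

```python
import math

def estimate_yaw(left_eye, right_eye, nose_tip):
    """Crude yaw estimate from three 2-D landmarks: for a frontal face
    the nose tip projects near the midpoint of the eyes, so the
    normalised horizontal offset approximates the turn angle."""
    mid_x = (left_eye[0] + right_eye[0]) / 2.0
    eye_dist = abs(right_eye[0] - left_eye[0]) or 1e-6  # avoid division by zero
    offset = (nose_tip[0] - mid_x) / eye_dist
    return math.degrees(math.asin(max(-1.0, min(1.0, 2.0 * offset))))

def first_prompt(actual_deg, target_deg, tol=10.0):
    """Prompt text guiding the face toward the target angle; the
    'fourth preset condition' is modelled as |actual - target| <= tol.
    Returns None once the condition is met, so capture can proceed."""
    diff = actual_deg - target_deg
    if abs(diff) <= tol:
        return None
    return "turn your face to the " + ("left" if diff > 0 else "right")
```

Because the third face image is a stream, `first_prompt` would be re-evaluated per frame until it returns None, matching the loop described in the surrounding text.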
Since the third face image is a real-time image acquired by the image acquisition device, there are multiple third face images, and at least one piece of first prompt information is generated for each of them. After the target object adjusts its face angle according to the first prompt information, new first prompt information is generated, until the face angle of the target object, that is, the face angle in the third face image, meets the fourth preset condition.
The fourth preset condition may include one angle requirement or multiple angle requirements, each corresponding to one second face image. Optionally, face images at the following angles are acquired: a second face image of the target object at a first angle, where the first angle is the angle matched with the orientation of the image acquisition device; and/or a second face image of at least one second angle of the target object, where the second angle is an angle that meets a fifth preset condition with the angle of the face in the target image. When acquiring the second face image at the first angle, the angle between the face of the target object and the image acquisition device can be determined from the face key points of the third face image at acquisition time, and the first prompt information is generated accordingly, so that the target object adjusts its face to the angle matched with the orientation of the device. For example, suppose the first angle is the angle directly facing the image acquisition device, and the face key points of the third face image show that the face of the target object is turned 45 degrees to the left relative to that facing direction; the first prompt information then prompts the target object to turn to the right.
When acquiring the second face image at the second angle, the angle between the face of the target object and the face in the target image is determined, at the time the third face image is acquired, according to the face key points and the target key points, and the first prompt information is generated accordingly, so that the target object adjusts its face to an angle that meets the fifth preset condition with the face in the target image. For example, the fifth preset condition may be that the difference from the angle of the face in the target image is within a certain range (e.g., 10°).
The two modes can be used alternatively or in combination. That is, the second face image may be determined from the first face image using only the first mode; or only a second face image at the first angle may be acquired; or only a second face image at the second angle may be acquired; or the second face image may be determined from the first face image while a second face image at the first angle is acquired; or the second face image may be determined from the first face image while a second face image at the second angle is acquired; or second face images at both the first angle and the second angle may be acquired; or all three may be combined. These multiple acquisition modes make the acquired second face images rich and varied, increase the diversity of the function, and make the subsequent fusion results more varied and interesting; at the same time they solve the problem of unnatural fusion effects caused by excessive face angle deviation, further improving the user experience.
Because multiple second face images can be obtained in the above manner, when fusion with at least one target image is performed, a second face image meeting the second preset condition or the third preset condition is selected from the multiple second face images. The selection may cover the second face image generated from the first face image, the second face image at the first angle, and the second face image at the second angle; that is, the second preset condition and the third preset condition may relate to the above acquisition modes.
In some embodiments of the present disclosure, at least one of the first facial image and the second facial image may be further stored and bound with identification information of the target object.
After the first face image of the target object is obtained, it can be stored in the image library and bound with the identification information of the target object. For example, in a face check-in scene that often appears in activities such as travel exhibitions and painting exhibitions, the target object is a check-in person and the identification information is identity information; the first face image of the check-in person is stored and bound with his or her identity information. After image processing and face fusion are finished, the target object becomes a historical visiting object, so the stored first face image is placed under the type of images of historical visiting objects.
After the second face image of the target object is obtained, it can likewise be stored in the image library and bound with the identification information of the target object. In the same check-in scene, the second face image of the check-in person is stored and bound with his or her identity information; after image processing and face fusion are finished, the target object becomes a historical visiting object, so the stored second face image is also placed under the type of images of historical visiting objects.
In this embodiment, storing the first face image and/or the second face image further enriches the images of historical visiting objects in the image library and diversifies the images of the same object, thereby increasing the size and variety of the image library. More satisfactory results can then be obtained when person images are retrieved for subsequent target objects, further improving the user experience.
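The storing-and-binding step can be sketched with an in-memory SQLite table; the schema, the category labels, and the method names are assumptions for illustration, not part of the patent:

```python
import sqlite3

class ImageLibrary:
    """Minimal store binding face images to an object's identification
    info and an image-type category, mirroring the image-library types
    named in the text ('historical_visitor', 'celebrity', 'work')."""

    def __init__(self):
        self._db = sqlite3.connect(":memory:")
        self._db.execute(
            "CREATE TABLE faces (object_id TEXT, category TEXT, image BLOB)")

    def bind(self, object_id, image_bytes, category="historical_visitor"):
        """Store an image (first or second face image) bound to the
        target object's identification info."""
        self._db.execute("INSERT INTO faces VALUES (?, ?, ?)",
                         (object_id, category, image_bytes))

    def images_of(self, category):
        """Return (object_id, image) pairs of one image type."""
        cur = self._db.execute(
            "SELECT object_id, image FROM faces WHERE category = ?",
            (category,))
        return cur.fetchall()
```

A real deployment would persist to disk and index features for retrieval, but the binding of image to identity and type is the essential contract here.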
Referring to FIG. 4, a face fusion process running in a face check-in scene at a cartoon exhibition is illustratively shown. As shown in fig. 4: first, step 411: exhibition hall portraits and celebrity portraits are entered into the image library; then, step 421: the check-in user performs face recognition; then, step 431: the first face image is recorded into the image library; step 432: the similarity with the images in the image library is compared; then, step 433: similar historical check-in persons and exhibition hall images are displayed; then, step 422: the check-in user selects a portrait for face fusion; then, step 434: the check-in person is photographed and reminded by voice to adjust the angle; then, step 435: face fusion is performed; then, step 436: the fused image is displayed. Step 411 is an operation performed by background personnel; steps 421 and 422 are operations performed by the check-in person; steps 431, 432, 433, 434, 435 and 436 are steps performed automatically inside the software.
According to a second aspect of the embodiments of the present invention, and referring to fig. 5, which shows a schematic structural diagram, a face fusion apparatus is provided, including:
an obtaining module 501, configured to obtain a first face image of a target object, where the target object is an object within an image acquisition range;
a display module 502, configured to obtain, according to the first face image, at least one person image whose similarity to the first face image meets a first preset condition, and display the at least one person image;
an instruction module 503, configured to determine at least one target image from the at least one person image according to a first instruction of the target object, and obtain a second face image of the target object;
a fusion module 504, configured to obtain at least one fusion image according to the at least one target image and the second face image, and display the at least one fusion image.
In combination with any one of the embodiments provided by the present disclosure, the fusion module is specifically configured to:
extracting a first face feature of the second face image and a second face feature of the at least one target image, and generating at least one fusion feature according to the first face feature and the second face feature;
and mapping the at least one fusion feature to a face region of the at least one target image, and/or mapping the at least one fusion feature to a face region of the second face image to obtain at least one fusion image.
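Assuming the face features are embedding vectors (the patent does not specify the feature extractor), the feature-fusion step the module performs could look like the following sketch; the linear blend and the `weight` parameter are illustrative choices, and the mapping back onto a face region would require a decoder that is out of scope here:

```python
import numpy as np

def fuse_features(first_feat, second_feats, weight=0.5):
    """Blend the first face feature (from the second face image) with
    each second face feature (from a target image), producing one fused
    feature per target feature. `weight` controls how much of the
    target's identity survives in the result."""
    first = np.asarray(first_feat, dtype=float)
    fused = [weight * np.asarray(t, dtype=float) + (1.0 - weight) * first
             for t in second_feats]
    # re-normalise so fused features live on the same unit sphere
    return [f / (np.linalg.norm(f) or 1.0) for f in fused]
```

With `weight=0.0` the fused feature is purely the user's; with `weight=1.0` it is purely the target image's, matching the text's point that one first face feature combined with several second face features yields several fusion features.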
In combination with any embodiment provided by the present disclosure, when the fusion module is configured to extract the first facial feature of the second facial image, the fusion module is specifically configured to:
under the condition that a plurality of second face images are obtained, selecting second face images meeting second preset conditions from the plurality of second face images, extracting face features of the selected second face images, and taking each extracted face feature as a first face feature; or respectively extracting the face features of the second face images, and fusing the extracted face features to obtain a first face feature;
the fusion module is configured to, when mapping the at least one fusion feature to a face region of the second face image, specifically:
and under the condition of acquiring a plurality of second face images, selecting the second face images meeting a third preset condition from the plurality of second face images, and mapping each fusion feature to the face area of each selected second face image.
In combination with any embodiment provided by the present disclosure, when the fusion module is configured to extract the second facial feature of the at least one target image, the fusion module is specifically configured to:
extracting the face features of each target image, and taking each extracted face feature as a second face feature; or extracting the face features of each target image, and fusing the extracted face features to obtain a second face feature;
the fusion module is configured to, when generating at least one fusion feature according to the first facial feature and the second facial feature, specifically:
fusing each first face feature with each second face feature to obtain at least one fused feature;
the fusion module is configured to, when mapping the at least one fusion feature to the face region of the at least one target image, specifically:
and mapping each fusion feature to the face region of each target image in the at least one target image respectively.
In combination with any embodiment provided by the present disclosure, when the fusion module is used to extract the face features of each target image, the fusion module is specifically configured to:
and extracting the feature of the specified area of the target image as the face feature of the target image according to a second instruction of the target object, wherein the second instruction is used for indicating the position of the specified area.
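The specified-area extraction can be sketched as a simple guard around the feature extractor: the second instruction supplies the region position, and only that patch is featurised. The box format and the `extractor` callback are assumptions for illustration:

```python
def region_feature(image, region_box, extractor):
    """Extract the face feature of only the specified area of a target
    image; `region_box` (left, top, right, bottom) comes from the target
    object's second instruction, which indicates the area's position."""
    left, top, right, bottom = region_box
    patch = [row[left:right] for row in image[top:bottom]]
    return extractor(patch)
```

This lets a user fuse, say, only the eye region of a chosen portrait rather than the whole face.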
In combination with any embodiment provided by the present disclosure, when the display module is configured to acquire at least one person image whose similarity to the first person image satisfies a first preset condition, the display module is specifically configured to:
at least one person image with the similarity meeting a first preset condition with the first face image is obtained from at least one type of image in a preset image library, wherein the image library comprises at least one type of image.
In connection with any embodiment provided by the present disclosure, the image library includes at least one of the following types of images: an image of a historical visiting subject of a target location, a celebrity representation, and an image of a work within the target location.
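Assuming the first preset condition is a cosine-similarity threshold over feature vectors (one common choice; the patent leaves the measure open), retrieval from the image library could be sketched as follows, with `threshold` and `top_k` as hypothetical parameters:

```python
import numpy as np

def retrieve_similar(query_feat, library, threshold=0.6, top_k=3):
    """Return up to top_k (label, similarity) pairs from `library`
    (a list of (label, feature) entries) whose cosine similarity with
    the query meets the first preset condition (>= threshold)."""
    q = np.asarray(query_feat, dtype=float)
    q = q / (np.linalg.norm(q) or 1.0)
    scored = []
    for label, feat in library:
        f = np.asarray(feat, dtype=float)
        f = f / (np.linalg.norm(f) or 1.0)
        scored.append((label, float(q @ f)))
    scored.sort(key=lambda x: -x[1])          # most similar first
    return [(l, s) for l, s in scored if s >= threshold][:top_k]
```

The returned similarity values are exactly what the display module would show next to each person image (e.g. "similarity: 70%").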
In combination with any embodiment provided by the present disclosure, when the display module is configured to display the at least one person image, the display module is specifically configured to:
displaying the person image and displaying the similarity of the person image and the first face image; and/or
and displaying the figure image and the identification information corresponding to the figure image.
In combination with any embodiment provided by the present disclosure, when the instruction module is used to acquire the second face image of the target object, the instruction module is specifically configured to:
determining the first face image as a second face image; and/or
and performing at least one of the following processes on the first face image, and determining the processed image as the second face image, wherein the at least one process comprises cropping, rendering, zooming, rotating and sharpness adjustment.
In combination with any embodiment provided by the present disclosure, when the instruction module is used to acquire the second face image of the target object, the instruction module is specifically configured to:
under the condition that the target object enters the image acquisition range, acquiring a third face image of the target object, and generating first prompt information according to the third face image and the target image, wherein the first prompt information is used for prompting the target object to adjust a face angle;
and responding to that the face angle of the target object meets a fourth preset condition, and acquiring a second face image of at least one angle of the target object.
In combination with any embodiment provided by the present disclosure, the instruction module, when generating the first prompt information according to the third face image and the target image, is specifically configured to:
extracting the face key points of the third face image and the target key points of the target image;
determining the actual angle between the face in the third face image and the face in the target image according to the face key points and the target key points;
and determining the first prompt information according to the actual angle and a target angle, wherein the target angle is an angle meeting the fourth preset condition.
In combination with any embodiment provided by the present disclosure, when the instruction module is configured to acquire the second face image of the target object at least at one angle, the instruction module is specifically configured to:
acquiring a second face image of the target object at a first angle, wherein the first angle is an angle matched with the orientation of the image acquisition equipment; and/or
and acquiring a second face image of at least one second angle of the target object, wherein the second angle is an angle meeting a fifth preset condition with the angle of the face in the target image.
In combination with any one of the embodiments provided by the present disclosure, the face fusion apparatus further includes an authority verification module, configured to:
acquiring a permission verification result of the target object;
and when the authority verification result of the target object indicates that the target object is a legal user, acquiring a first face image of the target object.
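The permission-verification module can be sketched as a simple guard: the first face image is captured only when verification reports a legal user. The callback names `verify` and `capture` are hypothetical:

```python
def acquire_first_face_image(target_id, verify, capture):
    """Gate image capture on the permission check: only a target object
    whose verification result indicates a legal user has its first face
    image captured; otherwise nothing is acquired."""
    if not verify(target_id):
        return None          # verification failed: no capture
    return capture(target_id)
```

Keeping the gate in front of `capture` (rather than after it) also ensures no face data of unverified users is ever collected.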
With regard to the apparatus in the above-mentioned embodiments, the specific manner in which each module performs the operation has been described in detail in the first aspect with respect to the embodiment of the method, and will not be elaborated here.
In a third aspect, at least one embodiment of the present invention provides an electronic device; referring to fig. 6, which shows the structure of the device, the device includes a memory for storing computer instructions executable on a processor, and the processor is configured to perform face fusion based on the method of any one of the first aspect when executing the computer instructions.
In a fourth aspect, at least one embodiment of the invention provides a computer readable storage medium having a computer program stored thereon, which when executed by a processor, performs the method of any of the first aspects.
In the present invention, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. The term "plurality" means two or more unless expressly limited otherwise.
Other embodiments of the invention will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This invention is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the invention and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the invention being indicated by the following claims.
It will be understood that the invention is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the invention is limited only by the appended claims.

Claims (16)

1. A face fusion method is characterized by comprising the following steps:
acquiring a first face image of a target object, wherein the target object is an object in an image acquisition range;
according to the first face image, acquiring at least one person image whose similarity with the first face image meets a first preset condition, and displaying the at least one person image;
determining at least one target image from the at least one character image according to a first instruction of the target object, and acquiring a second face image of the target object;
and obtaining at least one fused image according to the at least one target image and the second face image, and displaying the at least one fused image.
2. The method according to claim 1, wherein obtaining at least one fused image according to the at least one target image and the second face image comprises:
extracting a first face feature of the second face image and a second face feature of the at least one target image, and generating at least one fusion feature according to the first face feature and the second face feature;
and mapping the at least one fusion feature to a face region of the at least one target image, and/or mapping the at least one fusion feature to a face region of the second face image to obtain at least one fusion image.
3. The method of claim 2, wherein the extracting the first facial features of the second facial image comprises:
under the condition that a plurality of second face images are obtained, selecting second face images meeting second preset conditions from the plurality of second face images, extracting face features of the selected second face images, and taking each extracted face feature as a first face feature; or respectively extracting the face features of the second face images, and fusing the extracted face features to obtain a first face feature;
the mapping the at least one fused feature to a face region of the second face image comprises:
and under the condition of acquiring a plurality of second face images, selecting the second face images meeting a third preset condition from the plurality of second face images, and mapping each fusion feature to the face area of each selected second face image.
4. The method according to claim 2 or 3, wherein the extracting the second face feature of the at least one target image comprises:
extracting the face features of each target image, and taking each extracted face feature as a second face feature; or extracting the face features of each target image, and fusing the extracted face features to obtain a second face feature;
generating at least one fused feature from the first and second facial features, comprising:
fusing each first face feature with each second face feature to obtain at least one fused feature;
the mapping the at least one fused feature to the face region of the at least one target image comprises:
and mapping each fusion feature to the face region of each target image in the at least one target image respectively.
5. The method of claim 4, wherein the extracting the face features of each target image comprises:
and extracting the feature of the specified area of the target image as the face feature of the target image according to a second instruction of the target object, wherein the second instruction is used for indicating the position of the specified area.
6. The face fusion method according to any one of claims 1 to 5, wherein the acquiring at least one person image whose similarity with the first face image satisfies a first preset condition comprises:
at least one person image with the similarity meeting a first preset condition with the first face image is obtained from at least one type of image in a preset image library, wherein the image library comprises at least one type of image.
7. The face fusion method of claim 6, wherein the image library comprises at least one of the following types of images: an image of a historical visiting subject of a target location, a celebrity representation, and an image of a work within the target location.
8. The face fusion method according to any one of claims 1 to 7, wherein the displaying the at least one human image comprises:
displaying the person image and displaying the similarity of the person image and the first face image; and/or
and displaying the figure image and the identification information corresponding to the figure image.
9. The face fusion method according to any one of claims 1 to 5, wherein the obtaining of the second face image of the target object comprises:
determining the first face image as a second face image; and/or
and performing at least one of the following processes on the first face image, and determining the processed image as the second face image, wherein the at least one process comprises cropping, rendering, zooming, rotating and sharpness adjustment.
10. The face fusion method according to any one of claims 1 to 5 and 9, wherein the acquiring a second face image of the target object comprises:
under the condition that the target object enters the image acquisition range, acquiring a third face image of the target object, and generating first prompt information according to the third face image and the target image, wherein the first prompt information is used for prompting the target object to adjust a face angle;
and responding to that the face angle of the target object meets a fourth preset condition, and acquiring a second face image of at least one angle of the target object.
11. The method of claim 10, wherein the generating first prompt information according to the third face image and the target image comprises:
extracting the face key points of the third face image and the target key points of the target image;
determining the actual angle between the face in the third face image and the face in the target image according to the face key points and the target key points;
and determining the first prompt information according to the actual angle and a target angle, wherein the target angle is an angle meeting the fourth preset condition.
12. The method according to claim 10 or 11, wherein the obtaining of the second face image of at least one angle of the target object comprises:
acquiring a second face image of the target object at a first angle, wherein the first angle is an angle matched with the orientation of the image acquisition equipment; and/or
and acquiring a second face image of at least one second angle of the target object, wherein the second angle is an angle meeting a fifth preset condition with the angle of the face in the target image.
13. The face fusion method according to any one of claims 1 to 12, further comprising:
acquiring a permission verification result of the target object;
and when the authority verification result of the target object indicates that the target object is a legal user, acquiring a first face image of the target object.
14. A face fusion device, comprising:
the system comprises an acquisition module, a processing module and a display module, wherein the acquisition module is used for acquiring a first face image of a target object, and the target object is an object in an image acquisition range;
the display module is used for acquiring at least one person image with the similarity meeting a first preset condition with the first person image according to the first person image and displaying the at least one person image;
the instruction module is used for determining at least one target image from the at least one character image according to a first instruction of the target object and acquiring a second face image of the target object;
and the fusion module is used for obtaining at least one fusion image according to the at least one target image and the second face image and displaying the at least one fusion image.
15. An electronic device, comprising a memory for storing computer instructions executable on a processor, the processor being configured to implement the method of any one of claims 1 to 13 when executing the computer instructions.
16. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the method of any one of claims 1 to 13.
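The modular structure recited in claim 14 can be sketched in code. This is a minimal illustration only: every class, method, and variable name below is hypothetical, a cosine-similarity threshold stands in for the claimed "first preset condition", and a per-pixel weighted blend stands in for the fusion algorithm, which the claims do not specify.

```python
class FaceFusionDevice:
    """Illustrative sketch of the device of claim 14 (all names hypothetical)."""

    def __init__(self, gallery, similarity_threshold=0.8):
        # gallery: list of (person_image, feature_vector) pairs; a feature
        # vector is any list of floats describing a face.
        self.gallery = gallery
        # Threshold standing in for the claimed "first preset condition".
        self.similarity_threshold = similarity_threshold

    @staticmethod
    def similarity(a, b):
        # Cosine similarity between two feature vectors.
        dot = sum(x * y for x, y in zip(a, b))
        na = sum(x * x for x in a) ** 0.5
        nb = sum(y * y for y in b) ** 0.5
        return dot / (na * nb) if na and nb else 0.0

    def acquire(self, first_face_features):
        # Acquisition/display step: select person images whose similarity to
        # the first face image meets the preset condition.
        return [img for img, feat in self.gallery
                if self.similarity(first_face_features, feat)
                >= self.similarity_threshold]

    @staticmethod
    def fuse(target_image, second_face_image, alpha=0.5):
        # Fusion step: naive per-pixel weighted blend of two equally sized
        # grayscale images, represented as lists of rows of ints.
        return [[round(alpha * t + (1 - alpha) * s)
                 for t, s in zip(trow, srow)]
                for trow, srow in zip(target_image, second_face_image)]
```

In use, the device would retrieve gallery matches for the captured first face image, let the user pick a target image, and then blend it with the second face image; a production system would replace the blend with a landmark-aware face-fusion model.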
CN202011582832.0A 2020-12-28 2020-12-28 Face fusion method, device, equipment and storage medium Pending CN112488085A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011582832.0A CN112488085A (en) 2020-12-28 2020-12-28 Face fusion method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011582832.0A CN112488085A (en) 2020-12-28 2020-12-28 Face fusion method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN112488085A true CN112488085A (en) 2021-03-12

Family

ID=74915839

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011582832.0A Pending CN112488085A (en) 2020-12-28 2020-12-28 Face fusion method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112488085A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022213798A1 (en) * 2021-04-08 2022-10-13 北京字跳网络技术有限公司 Image processing method and apparatus, and electronic device and storage medium
CN115348709A (en) * 2022-10-18 2022-11-15 良业科技集团股份有限公司 Smart cloud service lighting display method and system suitable for text travel

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109993102A (en) * 2019-03-28 2019-07-09 北京达佳互联信息技术有限公司 Similar face retrieval method, apparatus and storage medium
CN111339420A (en) * 2020-02-28 2020-06-26 北京市商汤科技开发有限公司 Image processing method, image processing device, electronic equipment and storage medium
CN111667588A (en) * 2020-06-12 2020-09-15 上海商汤智能科技有限公司 Person image processing method, person image processing device, AR device and storage medium
CN111768479A (en) * 2020-07-29 2020-10-13 腾讯科技(深圳)有限公司 Image processing method, image processing apparatus, computer device, and storage medium
CN111967397A (en) * 2020-08-18 2020-11-20 北京字节跳动网络技术有限公司 Face image processing method and device, storage medium and electronic equipment
CN111986076A (en) * 2020-08-21 2020-11-24 深圳市慧鲤科技有限公司 Image processing method and device, interactive display device and electronic equipment

Similar Documents

Publication Title
CN108616563B (en) Virtual information establishing method, searching method and application system of mobile object
CN109688451B (en) Method and system for providing camera effect
CN108600632B (en) Photographing prompting method, intelligent glasses and computer readable storage medium
CN110662083A (en) Data processing method and device, electronic equipment and storage medium
CN108108012B (en) Information interaction method and device
CN115735229A (en) Updating avatar garments in messaging systems
CN116601675A (en) Virtual garment fitting
US20170302662A1 (en) Account information obtaining method, terminal, server and system
KR20230107655A (en) Body animation sharing and remixing
CN114930399A (en) Image generation using surface-based neurosynthesis
CN107168619B (en) User generated content processing method and device
TWI617930B (en) Method and system for sorting a search result with space objects, and a computer-readable storage device
CN107084740B (en) Navigation method and device
CN112488085A (en) Face fusion method, device, equipment and storage medium
US20200257121A1 (en) Information processing method, information processing terminal, and computer-readable non-transitory storage medium storing program
US11410396B2 (en) Passing augmented reality content between devices
CN115867882A (en) Travel-based augmented reality content for images
WO2022066914A1 (en) Augmented reality content items including user avatar to share location
CN107146275B (en) Method and device for setting virtual image
CN115956255A (en) 3D reconstruction using wide-angle imaging device
CN111667588A (en) Person image processing method, person image processing device, AR device and storage medium
CN111598824A (en) Scene image processing method and device, AR device and storage medium
CN115697508A (en) Game result overlay system
CN115812217A (en) Travel-based augmented reality content for reviews
CN116235194A (en) Media content delivery and management

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination