CN111340690B - Image processing method, device, electronic equipment and storage medium - Google Patents

Image processing method, device, electronic equipment and storage medium

Info

Publication number
CN111340690B
CN111340690B (Application CN202010208723.6A)
Authority
CN
China
Prior art keywords
target object
modification
target
image
face
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010208723.6A
Other languages
Chinese (zh)
Other versions
CN111340690A (en)
Inventor
刘莹 (Liu Ying)
Current Assignee
Beijing Dajia Internet Information Technology Co Ltd
Original Assignee
Beijing Dajia Internet Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Dajia Internet Information Technology Co Ltd filed Critical Beijing Dajia Internet Information Technology Co Ltd
Priority to CN202010208723.6A (publication CN111340690B)
Publication of CN111340690A
Priority to JP2022549506A (publication JP2023514340A)
Priority to PCT/CN2020/132994 (publication WO2021189927A1)
Priority to US17/820,026 (publication US20220392253A1)
Application granted
Publication of CN111340690B

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161: Detection; Localisation; Normalisation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00: Animation
    • G06T13/80: 2D [Two Dimensional] animation, e.g. using sprites
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00: Geometric image transformations in the plane of the image
    • G06T3/18: Image warping, e.g. rearranging pixels individually
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/20: Analysis of motion
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/70: Determining position or orientation of objects or cameras
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74: Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/761: Proximity, similarity or dissimilarity measures
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168: Feature extraction; Face representation
    • G06V40/171: Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2210/00: Indexing scheme for image generation or computer graphics
    • G06T2210/44: Morphing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)
  • Studio Devices (AREA)

Abstract

The present disclosure relates to an image processing method, apparatus, electronic device, and storage medium. The method includes: when the object recognition result for a target object in an image changes, acquiring the last modification form of the target object before the change as the initial modification form; acquiring a target modification form corresponding to the change; and displaying the target object in a gradual manner, with the initial modification form and the target modification form serving as the starting form and the ending form of the gradual change, respectively. With this scheme, the modification processing of the target object dynamically adapts to changes in the object recognition result: the target object shown on the display interface transitions gradually from the initial modification form to the target modification form. This avoids the picture flicker caused by an abrupt change in the modification effect when the target object suddenly appears in, or is lost from, the image, and thereby improves the modification processing effect on the target object.

Description

Image processing method, device, electronic equipment and storage medium
Technical Field
The present disclosure relates to image processing technologies, and in particular, to an image processing method, an image processing device, an electronic device, and a storage medium.
Background
With the development of image processing technology, techniques have emerged for performing modification processing on an image: a target object contained in the image to be processed, such as a face or an arm, can be modified so that it is displayed in an optimized way.
In the related art, modification processing of a target object such as a face is performed mainly by identifying key points of the target object and deforming the image around them, achieving effects such as face thinning. This technique, however, depends entirely on the key points: when they are inaccurate or lost, the modification effect is lost with them, and the unprocessed target object is displayed directly on the display interface. When the target object reappears, key points are identified again and the final modification effect immediately takes effect on the display interface once more. The result is picture flicker: as key points are lost and reappear, the face on the display interface is seen to snap back to its original shape and then snap to the thinned shape, which spoils the face-thinning effect. This technique therefore produces a poor modification processing effect on the target object in the image.
Disclosure of Invention
The disclosure provides an image processing method, an image processing device, an electronic device and a storage medium, so as to at least solve the problem of poor modification processing effect on a target object in an image in the related art. The technical scheme of the present disclosure is as follows:
according to a first aspect of an embodiment of the present disclosure, there is provided an image processing method including:
when the object recognition result for a target object in an image changes, acquiring the last modification form of the target object before the change occurs as the initial modification form of the target object, the target object being the object of the modification processing;
acquiring a target modification form of the target object corresponding to the change;
and displaying the target object in a gradual manner, with the initial modification form and the target modification form serving as the starting form and the ending form of the gradual change, respectively.
In an exemplary embodiment, the displaying of the target object in a gradual manner, with the initial modification form and the target modification form serving as the starting form and the ending form of the gradual change, includes:
obtaining the modification amplitude of the target object according to the initial modification form and the target modification form, the modification amplitude being the magnitude of the change in the modification form of the target object; and displaying the target object according to a gradual change speed adapted to the modification amplitude.
In an exemplary embodiment, the obtaining of the modification amplitude of the target object according to the initial modification form and the target modification form includes:
according to the initial modification form and the target modification form, respectively obtaining a starting position and a target position on the image for each key point of the target object, the key points being the pixel points on the image used for performing modification processing on the target object; and obtaining the modification amplitude from the distance between the starting position and the target position.
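As an illustrative sketch (not part of the patent text), the modification amplitude can be computed from index-aligned key-point lists. Taking the largest per-point displacement is an assumption here; the claim only says the amplitude is obtained from the distance between the starting and target positions.

```python
import math

def modification_amplitude(start_points, target_points):
    """Largest distance any key point must travel between the initial
    modification form and the target modification form.

    start_points / target_points: index-aligned lists of (x, y) pixel
    coordinates (a hypothetical layout; the patent fixes no data format).
    """
    return max(math.dist(s, t) for s, t in zip(start_points, target_points))
```

For example, a single key point moving from (0, 0) to (3, 4) gives an amplitude of 5 pixels.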
In an exemplary embodiment, before the displaying of the target object according to the gradual change speed adapted to the modification amplitude, the method further includes:
acquiring a preset modification duration threshold of the target object; obtaining a modification speed threshold according to the modification amplitude and the modification duration threshold; and taking a gradual change speed that is not greater than the modification speed threshold as the gradual change speed adapted to the modification amplitude.
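A minimal sketch of the speed-threshold step, under the assumption that the threshold is the amplitude divided by the duration threshold; the `fraction` knob for choosing a speed below the threshold is also my assumption, since the claim only requires the chosen speed not to exceed the threshold:

```python
def modification_speed_threshold(amplitude, duration_threshold):
    """Speed (e.g. pixels per second) at which the gradual change would
    complete in exactly the preset modification duration threshold."""
    return amplitude / duration_threshold

def pick_gradual_change_speed(amplitude, duration_threshold, fraction=1.0):
    """Any speed not greater than the threshold is admissible; `fraction`
    (0 < fraction <= 1) selects one of them."""
    return min(fraction, 1.0) * modification_speed_threshold(amplitude, duration_threshold)
```

Capping the speed at `amplitude / duration_threshold` means the transition lasts at least the preset duration, which keeps the gradual change visibly gradual rather than an abrupt snap.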
In an exemplary embodiment, the gradual change speed is a constant value that is not greater than the modification speed threshold.
In an exemplary embodiment, the displaying of the target object according to the gradual change speed adapted to the modification amplitude includes:
gradually moving each key point of the target object from its starting position on the image to its target position on the image at the gradual change speed, so as to show the target object changing gradually from the starting form to the ending form of the gradual change; the starting position and the target position of the key point on the image are obtained according to the initial modification form and the target modification form, respectively.
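The per-frame key-point movement can be sketched as interpolation driven by elapsed time at the constant gradual change speed. Linear interpolation and the helper names are assumptions; the claim only requires gradual movement from the starting positions to the target positions.

```python
def gradual_change_progress(speed, elapsed, amplitude):
    """Fraction of the gradual change completed after `elapsed` seconds at
    constant `speed`, clamped so the key points stop exactly at the target."""
    if amplitude == 0:
        return 1.0
    return min(1.0, speed * elapsed / amplitude)

def blend_keypoints(start_points, target_points, t):
    """Key-point positions when the gradual change is a fraction t of the
    way from the initial form (t = 0) to the target form (t = 1)."""
    return [
        (sx + t * (tx - sx), sy + t * (ty - sy))
        for (sx, sy), (tx, ty) in zip(start_points, target_points)
    ]
```

Each rendered frame would warp the image using `blend_keypoints(...)` at the current progress value instead of jumping straight to the target positions.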
In an exemplary embodiment, before the acquiring, when the object recognition result for the target object in the image changes, of the last modification form of the target object before the change occurs as the initial modification form of the target object, the method further includes:
acquiring an object recognition result for the target object in the image, the object recognition result being either that the target object is recognized in the image or that the target object is not recognized in the image; and determining that the object recognition result has changed when it changes from the target object being recognized in the image to the target object not being recognized in the image, or from the target object not being recognized in the image to the target object being recognized in the image.
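Since the recognition result is binary (target object recognized or not), the change check reduces to a small per-frame transition detector. This is an illustrative sketch with invented names, not the patent's implementation:

```python
class RecognitionChangeDetector:
    """Tracks the per-frame recognition result for the target object and
    reports a change in either direction."""

    def __init__(self):
        self._recognized = None  # unknown before the first frame

    def update(self, recognized):
        """Feed the latest frame's result (True = target object recognized).
        Returns 'found', 'lost', or None when nothing changed."""
        previous, self._recognized = self._recognized, recognized
        if previous is None or previous == recognized:
            return None
        return "found" if recognized else "lost"
```

Either transition ('lost' or 'found') would trigger the gradual change described above.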
In an exemplary embodiment, the target modification form is the end form of the modification processing of the target object that is triggered by the change of the object recognition result.
In an exemplary embodiment, when the change of the object recognition result is from the recognition of the target object in the image to the non-recognition of the target object in the image, the target modification form is a form in which the target object is not modified; when the object recognition result changes from not recognizing the target object in the image to recognizing the target object in the image, the target modification form is a form in which the target object is subjected to complete modification processing.
According to a second aspect of the embodiments of the present disclosure, there is provided an image processing apparatus including:
an initial form acquisition module, configured to acquire, when the object recognition result for a target object in an image changes, the last modification form of the target object before the change occurs as the initial modification form of the target object, the target object being the object of the modification processing;
a target form acquisition module, configured to acquire a target modification form of the target object corresponding to the change;
and an object display module, configured to display the target object in a gradual manner, with the initial modification form and the target modification form serving as the starting form and the ending form of the gradual change, respectively.
In an exemplary embodiment, the object presentation module includes:
an amplitude obtaining unit, configured to obtain the modification amplitude of the target object according to the initial modification form and the target modification form, the modification amplitude being the magnitude of the change in the modification form of the target object; and an object display unit, configured to display the target object according to a gradual change speed adapted to the modification amplitude.
In an exemplary embodiment, the amplitude obtaining unit is further configured to obtain, according to the initial modification form and the target modification form respectively, a starting position and a target position on the image for each key point of the target object, the key points being the pixel points on the image used for performing modification processing on the target object, and to obtain the modification amplitude from the distance between the starting position and the target position.
In an exemplary embodiment, the object display unit is further configured to acquire a preset modification duration threshold of the target object before displaying the target object according to the gradual change speed adapted to the modification amplitude; to obtain a modification speed threshold according to the modification amplitude and the modification duration threshold; and to take a gradual change speed that is not greater than the modification speed threshold as the gradual change speed adapted to the modification amplitude.
In an exemplary embodiment, the gradual change speed is a constant value that is not greater than the modification speed threshold.
In an exemplary embodiment, the object display unit is further configured to gradually move each key point of the target object from its starting position on the image to its target position on the image at the gradual change speed, so as to show the target object changing gradually from the starting form to the ending form of the gradual change; the starting position and the target position of the key point on the image are obtained according to the initial modification form and the target modification form, respectively.
In an exemplary embodiment, the image processing apparatus further includes:
a result acquisition unit, configured to acquire an object recognition result for the target object in the image, the object recognition result being either that the target object is recognized in the image or that the target object is not recognized in the image; and a change determination unit, configured to determine that the object recognition result has changed when it changes from the target object being recognized in the image to the target object not being recognized in the image, or from the target object not being recognized in the image to the target object being recognized in the image.
In an exemplary embodiment, the target modification form is the end form of the modification processing of the target object that is triggered by the change of the object recognition result.
In an exemplary embodiment, when the change of the object recognition result is from the recognition of the target object in the image to the non-recognition of the target object in the image, the target modification form is a form in which the target object is not modified; when the object recognition result changes from not recognizing the target object in the image to recognizing the target object in the image, the target modification form is a form in which the target object is subjected to complete modification processing.
According to a third aspect of embodiments of the present disclosure, there is provided an electronic device, comprising: a processor; a memory for storing the processor-executable instructions; wherein the processor is configured to execute the instructions to implement the image processing method as described above.
According to a fourth aspect of the embodiments of the present disclosure, there is provided a storage medium storing instructions that, when executed by a processor of an electronic device, enable the electronic device to perform the image processing method described above.
According to a fifth aspect of embodiments of the present disclosure, there is provided a computer program product comprising a computer program stored in a readable storage medium, from which at least one processor of a device reads and executes the computer program, causing the device to perform the image processing method as described in any one of the embodiments of the first aspect.
The technical scheme provided by the embodiment of the disclosure at least brings the following beneficial effects:
When it is detected that the object recognition result for the target object in the image has changed, the last modification form of the target object before the change is acquired as the initial modification form, and the target modification form corresponding to the change is acquired, so that the target object is displayed in a gradual manner with the initial modification form and the target modification form serving as the starting form and the ending form of the gradual change, respectively. With this scheme, the modification processing of the target object dynamically adapts to changes in the object recognition result: the target object shown on the display interface transitions gradually from the initial modification form to the target modification form. This avoids the picture flicker caused by an abrupt change in the modification effect when the target object suddenly appears in, or is lost from, the image, and thereby improves the modification processing effect on the target object.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure and do not constitute an undue limitation on the disclosure.
Fig. 1 is an application environment diagram illustrating an image processing method according to an exemplary embodiment.
Fig. 2 is a flowchart illustrating an image processing method according to an exemplary embodiment.
Fig. 3 is an effect diagram illustrating an image processing method according to an exemplary embodiment.
FIG. 4 is a flowchart illustrating a method of exposing a target object, according to an example embodiment.
Fig. 5 is a flow chart illustrating a method of acquiring a magnitude of a variation according to an exemplary embodiment.
Fig. 6 is a flowchart illustrating a method of acquiring a fade rate according to an exemplary embodiment.
Fig. 7 is a flowchart illustrating an image processing method according to an exemplary embodiment.
Fig. 8 is a block diagram of an image processing apparatus according to an exemplary embodiment.
Fig. 9 is an internal structural diagram of an electronic device, which is shown according to an exemplary embodiment.
Detailed Description
In order to enable those skilled in the art to better understand the technical solutions of the present disclosure, the technical solutions of the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the foregoing figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the disclosure described herein may be capable of operation in sequences other than those illustrated or described herein. The implementations described in the following exemplary examples are not representative of all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with some aspects of the present disclosure as detailed in the accompanying claims.
The image processing method provided by the present disclosure may be applied to an application environment as shown in fig. 1, fig. 1 is an application environment diagram of an image processing method according to an exemplary embodiment, where a terminal 100 may be included in the application environment, and the terminal 100 may be, but is not limited to, various personal computers, notebook computers, smart phones, tablet computers, and portable wearable devices.
The terminal 100 may perform modification processing on the face 11 of the user 10 in images captured by its camera. For example, it may apply liquify-style warping based on the facial key points in the captured image to thin the face 11 of the user 10, so that the terminal 100 displays the thinned face on its display interface.
Specifically, the terminal 100 may capture images of the user 10 in real time and take the face as the target object of the modification processing, here face thinning. The terminal 100 may display a first image 111 that has been subjected to face thinning, the first image 111 containing the thinned face 12. When a shielding object 20 appears, as shown in the second image 112, it occludes the target object, i.e. the face 12, to a degree that prevents the face recognition model of the terminal 100 from recognizing the face. In the related art, the face-thinning effect then simply fails: a third image 113 is displayed directly on the display interface, in which the face 13 is a face without face thinning. Conversely, when the shielding object 20 is removed from the third image 113, as shown in the fourth image 114, the terminal 100 recognizes the face 13 again, applies face thinning once more, and directly displays a fifth image 115 containing the thinned face 12 on its display interface. Thus, in the related art, when a target object such as a human face is lost from the image or reappears, the picture flickers, and the effect of the modification processing on the target object is poor.
With the image processing method provided by the present disclosure, the terminal 100 first determines whether the object recognition result for the target object in the image has changed. When it has, the terminal 100 acquires the last modification form of the target object before the change as the initial modification form, acquires the target modification form corresponding to the change, and then displays the target object on its display interface in a gradual manner, with the initial modification form and the target modification form serving as the starting form and the ending form of the gradual change.
Described with reference to fig. 1: when the face 12 in the second image 112 is occluded by the shielding object 20, the terminal 100 determines that the object recognition result has changed, namely from being able to recognize the face in the image to no longer being able to recognize it, as shown in the second image 112. The terminal 100 therefore takes the modification form of the face 12 in the first image 111 before the change, which may be the final thinned form, as the initial modification form, and takes the original form of the face before thinning, corresponding to the form of the face 13 in the third image 113, as the target modification form. The terminal 100 then displays the face on its display interface in a gradual manner with these two forms as the starting form and the ending form of the gradual change, that is, it displays the gradual transition from the form of the face 12 in the second image 112 to the form of the face 13 in the third image 113.
Correspondingly, when the shielding object 20 in the third image 113 is removed, the object recognition result changes from not being able to recognize the face in the image to recognizing it, as shown in the fourth image 114. The terminal 100 then takes the modification form of the face 13 in the third image 113 before the change, which may be the original form before thinning, as the initial modification form, and takes the final thinned form, corresponding to the form of the face 12 in the fifth image 115, as the target modification form. The terminal 100 displays the face on its display interface in a gradual manner, that is, it displays the gradual transition from the form of the face 13 in the fourth image 114 to the form of the face 12 in the fifth image 115. Flicker caused by an abrupt change of the modification effect when a target object such as a face suddenly appears in, or is lost from, the image is thereby avoided, and the modification processing effect on the target object is improved.
The image processing method of the present disclosure is described below by way of exemplary embodiments.
Fig. 2 is a flowchart illustrating an image processing method according to an exemplary embodiment, which may be applied to the terminal 100 shown in fig. 1 as shown in fig. 2, and may include the following steps.
In step S201, when the object recognition result for the target object in the image changes, the last modification form of the target object before the change occurs is acquired as the initial modification form of the target object.
In this step, while shooting a video or live streaming in real time, the terminal 100 may acquire each frame and identify the target object in it. The target object in the image is the object of the modification processing; it may be, for example, a person's face, arm, eye, or chin.
When the terminal 100 detects that the object recognition result for the target object in the image has changed, it acquires the last modification form of the target object before the change as the initial modification form. The initial modification form may be the original form of the target object without any modification processing, the final form after complete modification processing, or an intermediate form between the two. Specifically, if the last modification form of the target object before the change was the original form, the terminal 100 takes the original form as the initial modification form; if it was the final form, the terminal 100 takes the final form as the initial modification form.
In an exemplary embodiment, before step S201, whether the object recognition result has changed may be determined by the following steps:
First, the terminal 100 may acquire an object recognition result for a target object in an image in real time; the object recognition result may include recognition of the target object in the image or non-recognition of the target object in the image. For example, the terminal 100 may recognize a corresponding target object such as a face in an image through an object recognition model such as a face recognition model and obtain an object recognition result, for example, the target object is recognized in the image or the target object is not recognized in the image.
Then, when the terminal 100 detects that the object recognition result changes from recognizing the target object in the image to not recognizing the target object in the image, or when the object recognition result changes from not recognizing the target object in the image to recognizing the target object in the image, the terminal 100 may determine that the object recognition result changes.
If the terminal 100 detects that the object recognition result changes from the target object being recognized in the image to the target object not being recognized, this means the terminal 100 could previously locate the target object in the image but then failed to recognize it, for example because of occlusion; at this point the terminal 100 determines that the object recognition result has changed. Likewise, if the terminal 100 detects that the object recognition result changes from the target object not being recognized in the image to the target object being recognized, this means the target object was previously unrecognizable, for example because of occlusion, and that once the occlusion is removed the terminal 100 can recognize the target object in the image again; at this point the terminal 100 also determines that the object recognition result has changed.
Take, as an example, a face serving as the target object in an image captured by the terminal 100 in real time. The terminal 100 may display the captured face on its screen in real time, track the face position in each frame, mark the face's key points, and apply beautification processing. When the person moves quickly, turns around suddenly, moves out of the frame, or is occluded by an object, the corresponding frame actually loses the face key points; that is, the terminal 100 cannot identify the face key points in that frame, so the object recognition result becomes that the face is not recognized in the image. When, for example, the face moves back into the frame, the terminal 100 can detect the face again, the object recognition result becomes that the face is recognized in the image, and the terminal 100 can determine that the object recognition result has changed.
Step S202, a target modification form of the target object corresponding to the change is acquired.
In this step, the terminal 100 may acquire a target modification form of the target object corresponding to the change.
For example: if the modification form of the target object before the change was the original form, i.e. the target object had not undergone modification processing, the target modification form corresponding to the change is the final form after complete modification processing. If the modification form before the change was the final form after complete modification processing, the target modification form is the original, unmodified form. If the modification form before the change was an intermediate form between the original form and the final form, the form from which that intermediate form was generated may be taken as the target modification form. Specifically, if the target object had been changing from the original form toward the intermediate form, the form from which the intermediate form was generated is the original form, so the original form is taken as the target modification form; similarly, if the target object had been changing from the final form toward the intermediate form, the final form is taken as the target modification form.
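The selection rule above can be sketched as a small lookup (the enum and function names are assumptions for illustration; `came_from` records which endpoint an interrupted gradient started from):

```python
from enum import Enum, auto

class Form(Enum):
    ORIGINAL = auto()      # no modification applied
    INTERMEDIATE = auto()  # partway through an interrupted gradient
    FINAL = auto()         # complete modification applied

def target_form(before_change, came_from=None):
    """Select the target modification form for a new gradient.

    `came_from` is only consulted for an intermediate form: it is the
    endpoint the intermediate form was generated from, so an interrupted
    gradient reverses back toward its own starting endpoint.
    """
    if before_change is Form.ORIGINAL:
        return Form.FINAL
    if before_change is Form.FINAL:
        return Form.ORIGINAL
    assert came_from in (Form.ORIGINAL, Form.FINAL)
    return came_from
```

This mirrors the text's rule: original goes to final, final goes to original, and an intermediate form returns to the form it was generated from.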
Step S203, the initial modification form and the target modification form are respectively used as a gradual change initial form and a gradual change end form, and the target object is displayed in a gradual change mode.
In this step, the terminal 100 may display the target object in a gradual manner on its display interface. The gradual change has a start form and an end form: after acquiring the initial modification form and the target modification form, the terminal 100 may take the initial modification form as the start form of the gradual change and the target modification form as the end form, and display the target object gradually changing from the initial modification form to the target modification form.
As shown in fig. 3, which is an effect diagram of an image processing method according to an exemplary embodiment, the face 30 in the sixth image 311 is the target object, the face 30 is blocked by an occluder 40, and the terminal 100 displays the face 30 in a gradual manner. Specifically, assume the initial modification form of the face 30 in the sixth image 311 is the first form 31, which may be the final form of the face 30 after complete modification processing. When the face 30 in the first form 31 is blocked by the occluder 40, the terminal 100 takes the fourth form 34, the original form of the face 30 without modification processing, as the target modification form; the second form 32 and the third form 33 are two exemplary intermediate forms between the original form and the final form. When the terminal 100 displays the target object, the gradual change transitions the face 30 from the first form 31, through the intermediate forms, namely the second form 32 and the third form 33, to the fourth form 34, and vice versa.
If the occluder 40 is removed during the gradual transition, the terminal 100 may restore the modification form of the face 30. Illustratively, suppose the terminal 100 is gradually transitioning the face 30 from the first form 31, through the intermediate forms, namely the second form 32 and the third form 33, toward the fourth form 34, and the occluder 40 is removed when the face 30 has reached the third form 33. The terminal 100 may then take the form from which the third form 33 was generated, namely the first form 31, as the target modification form, take the third form 33 itself as the initial modification form, and display the face 30 gradually changing from the third form 33 back to the first form 31. In this way, gradual face slimming can be carried out when the face is lost from the image, reappears in the image, and so on. The process is a continuous, slow, and slight change, so the user of the terminal 100 does not observe an obvious jump: by dynamically adapting the modification effect to dynamic changes of target objects such as faces in the image, the picture is prevented from flickering due to an abrupt change in the modification effect when a target object suddenly appears in or disappears from the image. In a reshaping scenario, this also prevents the reshaping effect from being visibly exposed as the face goes from absent to present or from present to absent, thereby optimizing the modification processing effect.
In the above image processing method, when the terminal 100 detects that the object recognition result for a target object in an image has changed, the terminal 100 acquires the last modification form of the target object before the change as its initial modification form, and acquires the target modification form of the target object corresponding to the change. The terminal 100 then displays the target object in a gradual manner, with the initial modification form and the target modification form as the start form and end form of the gradual change, respectively. With this scheme, the modification processing can adapt dynamically to changes in the object recognition result: the target object displayed on the terminal 100's display interface transitions gradually from the initial modification form to the target modification form, which avoids picture flicker caused by an abrupt change in the modification effect when the target object suddenly appears in or disappears from the image, and optimizes the modification processing effect for the target object.
In an exemplary embodiment, the target modification form in step S202 may be a final modification form of modification processing on the target object triggered by a change in the object recognition result. Further, when the change of the object recognition result is from the recognition of the target object in the image to the non-recognition of the target object in the image, the target modification form is a form in which the target object is not subjected to modification processing; when the change of the object recognition result is changed from the fact that the target object is not recognized in the image to the fact that the target object is recognized in the image, the target modification form is a form in which the target object is subjected to complete modification processing.
Taking a human face as the target object as an example: when the face recognition result changes from recognizing the face in the image to not recognizing it, the target modification form may be the face without any slimming effect; when the face recognition result changes from not recognizing the face to recognizing it, the target modification form may be the face with the final slimming effect.
In an exemplary embodiment, as shown in fig. 4, fig. 4 is a flowchart of a method for displaying a target object according to an exemplary embodiment, in step S203, a starting modification form and a target modification form are respectively used as a starting form and an ending form of gradual change, and the target object is displayed in a gradual manner, which may be implemented by the following steps:
step S401, obtaining the modification amplitude of the target object according to the initial modification form and the target modification form.
The terminal 100 may obtain the modification amplitude of the target object according to the initial modification form and the target modification form, where the modification amplitude is the magnitude of the change between the two forms.
Step S402, displaying the target object at a fade speed adapted to the modification amplitude.
In this step, the terminal 100 may determine an appropriate fade speed according to the modification amplitude: in general, the larger the amplitude, the greater the fade speed, and the smaller the amplitude, the lower the fade speed. The magnitude of the amplitude depends on the initial modification form and the target modification form.
Consider two cases, described with reference to fig. 3. In the first case, the initial modification form is the first form 31 and the target modification form is the fourth form 34; in the second case, the initial modification form is the third form 33 and the target modification form is the first form 31. The first case corresponds to a larger modification amplitude than the second, because it must transition all the way from the final form to the original form, whereas the second case only needs to go from an intermediate form to the final form. Accordingly, the terminal 100 can set a fade speed for each of the two amplitudes, with the speed in the first case somewhat faster than in the second. In this way, whatever initial modification form the target object is in, the fade can be displayed at a speed flexibly adapted to the corresponding modification amplitude.
In an exemplary embodiment, as shown in fig. 5, fig. 5 is a flowchart illustrating a method for obtaining a modification amplitude according to an exemplary embodiment, and the obtaining the modification amplitude of the target object according to the initial modification form and the target modification form in step S401 may specifically include:
Step S501, according to the initial modification form and the target modification form, respectively obtaining the initial position and the target position of the key point of the target object on the image.
The key points of the target object are the pixel points on the image used to apply modification processing to the target object. In a typical application scenario, the terminal 100 may identify and mark the key points of a target object such as a face on the image, and apply modification processing, such as liquify-style deformation, through those key points to achieve effects such as face beautification.
In this step, the terminal 100 may obtain the starting position of the key point of the target object on the image according to the starting modification form of the target object, and obtain the target position of the key point of the target object on the image according to the target modification form of the target object.
Step S502, obtaining the variation amplitude according to the distance between the starting position and the target position.
In this step, the terminal 100 may take the distance between the starting position and the target position of the target object's key points on the image as the modification amplitude. The larger this distance, the larger the modification amplitude and the larger the gradual deformation the terminal 100 needs to perform; the smaller the distance, the smaller the amplitude and the smaller the required deformation.
With this embodiment, the modification amplitude can be quantified from the distance between the key points' starting and target positions. Because the set of key points is fixed and small relative to the total number of pixels in the image, the modification amplitude of the target object can be determined accurately and efficiently, optimizing the modification processing effect.
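The amplitude computation can be sketched as follows. The disclosure speaks of the distance between the key points' starting and target positions; aggregating per-key-point distances by their mean is an assumption for illustration, and the function name is hypothetical:

```python
import math

def modification_amplitude(start_pts, target_pts):
    """Modification amplitude, taken here as the mean Euclidean distance
    between each key point's starting position and its target position
    on the image (positions are (x, y) pixel coordinates)."""
    assert start_pts and len(start_pts) == len(target_pts)
    return sum(math.dist(p, q) for p, q in zip(start_pts, target_pts)) / len(start_pts)
```

Because only the (small, fixed) key-point set is touched, this is cheap relative to any per-pixel computation over the frame.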
In an exemplary embodiment, as shown in fig. 6, which is a flowchart illustrating a method for obtaining a fade speed according to an exemplary embodiment, before displaying the target object at the fade speed adapted to the modification amplitude in step S402, the method may further include the following steps:
step S601, acquiring a preset modification duration threshold of a target object;
step S602, obtaining a modification speed threshold according to the modification amplitude and the modification duration threshold;
Step S603, acquiring a fade speed not greater than the modification speed threshold as the fade speed adapted to the modification amplitude.
In this embodiment, the terminal 100 may obtain a preset modification duration threshold for the target object. The modification duration threshold is the maximum allowed time for the target object's gradual change, and can be used to limit the fade speed so that the gradual change remains continuous, slow, and slight; in some embodiments, the modification duration threshold may be set to 2 seconds. The terminal 100 may then derive a modification speed threshold from the modification amplitude and the modification duration threshold. Specifically, the terminal 100 may take the ratio of the modification amplitude to the modification duration threshold as the modification speed threshold. For example, when the modification amplitude is the distance between the starting position and the target position of the target object's key points on the image, the ratio of that distance to the modification duration threshold is used as the modification speed threshold.
After obtaining the modification speed threshold, the terminal 100 may derive an appropriate fade speed from it. In this embodiment, so that the target object can fade from the initial modification form to the target modification form slowly, the terminal 100 may use any fade speed not greater than the modification speed threshold as the fade speed adapted to the modification amplitude. Further, in some embodiments, the fade speed may be a constant value not greater than the modification speed threshold, so that the target object fades at uniform speed, making the process still more continuous and slight; for ease of configuration, the terminal 100 may also use the modification speed threshold itself as the fade speed.
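As an illustration of the ratio rule above (function name and defaults are assumptions; the 2-second default mirrors the example given in this disclosure):

```python
def fade_speed(amplitude_px, duration_threshold_s=2.0):
    """Modification speed threshold = amplitude / maximum gradient time.
    The disclosure permits using this cap itself as a constant fade speed,
    which is what this helper returns."""
    return amplitude_px / duration_threshold_s
```

A larger amplitude thus yields a proportionally faster fade, while every fade still completes within the duration threshold.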
In this embodiment, a reasonable modification duration threshold is combined with the modification amplitude to determine a fade speed adapted to that amplitude. Displaying the target object gradually at this speed yields a continuous, relatively slow, and slight change that is hard to notice, avoids flicker on the terminal's display interface, and optimizes the modification effect for the target object.
In an exemplary embodiment, presenting the target object at the fade rate that is adapted to the magnitude of the variation in step S402 includes:
The key points of the target object are gradually moved, at the fade speed, from their starting positions on the image to their target positions, so as to show the target object changing gradually from the start form to the end form.
In this embodiment, the terminal 100 realizes the gradual change from the start form to the end form by moving the positions of the target object's key points on the image. Each key point has a starting position and a target position, obtained from the initial modification form and the target modification form, respectively.
With this embodiment, an appropriate fade speed can be set for moving a key point gradually from its starting position to its target position, based on the distance between those two positions on the image: the terminal 100 obtains the preset modification duration threshold of the target object, takes the ratio of the distance to the duration threshold as the fade speed, and moves the key point at that speed, thereby realizing the gradual change of the target object from the start form to the end form.
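The per-frame key-point movement can be sketched as a clamped step toward the target (a hypothetical helper; `dt_s` is the frame interval and is not specified in the disclosure):

```python
import math

def step_keypoints(current, target, speed_px_s, dt_s):
    """Advance each (x, y) key point toward its target by at most
    speed * dt pixels, clamping at the target so the gradient
    terminates exactly at the end form."""
    step = speed_px_s * dt_s
    out = []
    for (cx, cy), (tx, ty) in zip(current, target):
        dx, dy = tx - cx, ty - cy
        dist = math.hypot(dx, dy)
        if dist <= step:
            out.append((tx, ty))  # reached the target position
        else:
            out.append((cx + dx / dist * step, cy + dy / dist * step))
    return out
```

Calling this once per frame until every key point reaches its target reproduces the uniform fade from the start form to the end form.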
In an exemplary embodiment, there is also provided an image processing method, as shown in fig. 7, fig. 7 is a flowchart illustrating an image processing method according to an exemplary embodiment, in which the method is applied to the terminal 100 shown in fig. 1 for explanation, the method including the steps of:
In step S701, the terminal 100 acquires an object recognition result for a target object in an image.
In this step, the target object is the object to be subjected to modification processing, and the object recognition result includes: the target object is recognized in the image, or the target object is not recognized in the image.
In step S702, when the object recognition result changes from recognizing the target object in the image to not recognizing it, or from not recognizing the target object to recognizing it, the terminal 100 determines that the object recognition result has changed.
In step S703, when the object recognition result for the target object in the image changes, the terminal 100 acquires the last modification form of the target object before the change occurs as the initial modification form of the target object.
In step S704, the terminal 100 acquires a target modification form of the target object corresponding to the change.
In step S705, the terminal 100 obtains the starting position and the target position of the key point of the target object on the image according to the starting modification form and the target modification form, respectively.
In this step, the key points are pixel points on the image for performing modification processing on the target object.
In step S706, the terminal 100 obtains a variation amplitude according to the distance between the start position and the target position.
Wherein the terminal 100 may take the distance between the start position and the target position as the magnitude of the variation.
In step S707, the terminal 100 acquires a preset modification duration threshold of the target object.
In step S708, the terminal 100 obtains a modification speed threshold according to the modification amplitude and the modification duration threshold.
In this step, the terminal 100 may use the ratio of the modification amplitude to the modification duration threshold as the modification speed threshold.
Step S709, a fade speed not greater than the modification speed threshold is acquired as the fade speed adapted to the modification amplitude.
Here the terminal 100 may take the modification speed threshold itself as the fade speed.
In step S710, the terminal 100 gradually moves the key points of the target object from their starting positions on the image to their target positions at the fade speed, so as to show the target object changing gradually from the start form to the end form.
According to the image processing method, the modification processing of the dynamic adaptation can be carried out on the target object according to the dynamic change of the object identification result based on the key points of the target object, and the modification processing effect of the target object is optimized.
To illustrate the image processing method provided by the disclosure more clearly, the following describes its application to face image processing.
In general, when recording video or live-streaming in real time through the terminal 100, the terminal 100 may acquire each frame, identify and mark the key points of the face in the image, and apply liquify-style reshaping through those key points to achieve effects such as face slimming. When the person moves quickly, turns around suddenly, steps out of the frame and back in, or is occluded by an object, the corresponding frames actually lose the face key points. Handled conventionally, the slimming effect simply lapses; when the face reappears in the image it is slimmed abruptly, the recorded video captures a moment of face-warping jitter, and in a live-streaming room viewers see the slimming jump directly, degrading the beautification effect and the user experience.
The image processing method provided by the disclosure can perform face beautifying processing on the face in the image in the following manner, and the complete flow comprises the following steps:
The face key points on the image go from present to absent, and then from absent back to present:
(1) First, while the face is present in the image: face recognition and tracking. Track the face position in each frame and mark the face key points for beautification processing;
(2) Then, when the face is lost, the following steps are performed:
A. Record the key point positions at the moment of loss: record the face key point positions of the last frame before the face was lost, and keep that frame's slimming effect;
B. A gradation process comprising:
First, define the start point of the gradual change as the face's slimmed form in the last frame, and the end point as the face's original form, i.e. the form with the slimming/reshaping effect completely removed. Then perform a uniform gradual restoration from the start point to the end point over 2 seconds; the fade is realized as pixel movement from the start-point face key points to the end-point face key points. The 2 seconds is the preset maximum time for going from the final slimming effect back to the original face shape, and can be set according to the needs of the actual scenario. The average fade rate of the face is thus v = the key points' change amplitude, i.e. the pixel distance between the face key points' start and end positions, divided by the preset maximum time of 2 seconds. Because the key points move slowly from the start point to the end point, the user does not perceive the process; typically the face reappears within this time and slimming resumes, so no jitter occurs when the face flickers in and out of the shot.
(3) When key points of the human face are detected again:
First, define the start point as the current state of the face, which may be the face without any slimming effect, or an intermediate state in which the slimming effect is slowly disappearing per step B, and acquire the current positions of the face key points. Define the end point as the face's final state, which can be computed by applying the slimming effect based on the face's current state and key point positions. Then perform a uniform gradual change at speed v from the start point to the end point.
Thus, gradual slimming runs from the moment the face is lost until it reappears on screen, as a continuous, slow, and slight change that the user can hardly notice. This beautification gradient logic effectively prevents the slimming effect from being visibly exposed as the face key points go from absent to present or from present to absent. The scheme dynamically adapts the beautification effect to dynamic changes of the face key points, achieves the most pleasing result by tuning the fade time range, supports both video and live-streaming scenarios, and eliminates abrupt face-warping jitter when the face is lost and re-detected during shooting, greatly improving the user experience of beautification products.
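The lost-face flow above (record the last frame's key points on loss, then fade uniformly back to the original shape within 2 seconds) can be sketched as follows; the class and attribute names are assumptions, and each key point gets its own rate so all points finish together at the deadline:

```python
import math

class FaceFadeState:
    """Minimal sketch of the lost-face fade: on loss, record the last
    slimmed key points and fade uniformly toward the original
    (un-slimmed) key points within MAX_FADE_S seconds."""
    MAX_FADE_S = 2.0  # preset maximum gradient time from the disclosure's example

    def __init__(self, slimmed_pts, original_pts):
        self.current = list(slimmed_pts)
        self.target = list(original_pts)
        # Uniform per-point rate: its own travel distance / max time.
        self.rates = [math.dist(p, q) / self.MAX_FADE_S
                      for p, q in zip(slimmed_pts, original_pts)]

    def tick(self, dt_s):
        """Advance one frame; each key point moves at its uniform rate,
        clamping at its target position."""
        nxt = []
        for (cx, cy), (tx, ty), v in zip(self.current, self.target, self.rates):
            dx, dy = tx - cx, ty - cy
            dist = math.hypot(dx, dy)
            step = v * dt_s
            if dist <= step:
                nxt.append((tx, ty))
            else:
                nxt.append((cx + dx / dist * step, cy + dy / dist * step))
        self.current = nxt
        return self.current
```

If the face is re-detected mid-fade, step (3) would construct the reverse fade from `self.current` toward the recomputed slimmed positions at the same speed v.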
It should be understood that, although the steps in the flowcharts of figs. 2 and 4 to 7 are shown sequentially as indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated herein, the order of execution is not strictly limited, and the steps may be performed in other orders. Moreover, at least some of the steps in figs. 2 and 4 to 7 may comprise multiple sub-steps or stages, which need not be performed at the same time or completed in sequence, and may instead be executed in turn or alternately with other steps or with sub-steps or stages of other steps.
Fig. 8 is a block diagram of an image processing apparatus according to an exemplary embodiment. Referring to fig. 8, the image processing apparatus 800 includes a start form acquisition module 801, a target form acquisition module 802, and an object presentation module 803.
a start form acquisition module 801, configured to acquire, when the object recognition result for a target object in an image changes, the last modification form of the target object before the change as the initial modification form of the target object, the target object being the object of modification processing;
a target form acquisition module 802, configured to acquire the target modification form of the target object corresponding to the change;
an object presentation module 803, configured to display the target object in a gradual manner, with the initial modification form and the target modification form as the start form and end form of the gradual change, respectively.
In an exemplary embodiment, the object presentation module 803 includes:
an amplitude acquisition unit, configured to obtain the modification amplitude of the target object according to the initial modification form and the target modification form, where the modification amplitude is the magnitude of the change between the two forms;
an object display unit, configured to display the target object at a fade speed adapted to the modification amplitude.
In an exemplary embodiment, the amplitude acquisition unit is further configured to obtain, according to the initial modification form and the target modification form, the starting position and the target position of the target object's key points on the image, the key points being the pixel points on the image used to apply modification processing to the target object; and to derive the modification amplitude from the distance between the starting position and the target position.
In an exemplary embodiment, the object display unit is further configured to: before displaying the target object at the fade speed adapted to the modification amplitude, acquire a preset modification duration threshold of the target object; obtain a modification speed threshold according to the modification amplitude and the modification duration threshold; and acquire a fade speed not greater than the modification speed threshold as the fade speed adapted to the modification amplitude.
In an exemplary embodiment, the fade speed is a constant value no greater than the modification speed threshold.
In an exemplary embodiment, the object displaying unit is further configured to gradually move, according to the gradient speed, the key point of the target object from the starting position of the key point on the image to the target position of the key point on the image, so as to display that the target object is gradually changed from the starting shape to the ending shape; the starting position and the target position of the key point on the image are obtained according to the starting deformation form and the target deformation form respectively.
In an exemplary embodiment, the image processing apparatus 800 further includes:
a result acquisition unit configured to acquire an object recognition result for a target object in an image; the object recognition result includes: identifying a target object in the image or not identifying a target object in the image;
and a change judging unit configured to judge that the object recognition result changes when the object recognition result changes from recognizing the target object in the image to not recognizing the target object in the image, or when the object recognition result changes from not recognizing the target object in the image to recognizing the target object in the image.
In an exemplary embodiment, the target modification form is a final modification form of modification processing of the target object triggered by a change in the object recognition result.
In an exemplary embodiment, when the object recognition result changes from recognizing the target object in the image to not recognizing the target object in the image, the target modification form is a form in which the target object is not subjected to modification processing; when the object recognition result changes from not recognizing the target object in the image to recognizing the target object in the image, the target modification form is a form in which the target object is subjected to complete modification processing.
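The two cases above amount to a simple selection rule; sketched below with the assumed labels "original" (no modification processing) and "final" (complete modification processing):

```python
def target_modification_form(was_recognized, is_recognized):
    """Select the target modification form when the recognition result flips:
    losing the target object morphs back toward the unmodified form, and
    re-detecting it morphs toward the fully modified form."""
    if was_recognized and not is_recognized:
        return "original"  # fade the modification effect out
    if not was_recognized and is_recognized:
        return "final"     # fade the modification effect in
    return None            # recognition result did not change
```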
The specific manner in which the various modules perform their operations in the apparatus of the above embodiments has been described in detail in the embodiments of the method, and will not be repeated here.
Fig. 9 is an internal structure diagram of an electronic device according to an exemplary embodiment. For example, the electronic device 900 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, or the like.
Referring to fig. 9, an electronic device 900 may include one or more of the following components: a processing component 902, a memory 904, a power component 906, a multimedia component 908, an audio component 910, an input/output (I/O) interface 912, a sensor component 914, and a communication component 916.
The processing component 902 generally controls overall operation of the electronic device 900, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 902 may include one or more processors 920 to execute instructions to perform all or part of the steps of the methods described above. Further, the processing component 902 can include one or more modules that facilitate interaction between the processing component 902 and other components. For example, the processing component 902 can include a multimedia module to facilitate interaction between the multimedia component 908 and the processing component 902.
The memory 904 is configured to store various types of data to support operations at the electronic device 900. Examples of such data include instructions for any application or method operating on the electronic device 900, contact data, phonebook data, messages, pictures, video, and so forth. The memory 904 may be implemented by any type of volatile or nonvolatile memory device or combination thereof, such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk, or optical disk.
The power supply component 906 provides power to the various components of the electronic device 900. Power supply components 906 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for electronic device 900.
The multimedia component 908 comprises a screen providing an output interface between the electronic device 900 and the user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensors may sense not only the boundary of a touch or swipe action, but also the duration and pressure associated with the touch or swipe operation. In some embodiments, the multimedia component 908 includes a front-facing camera and/or a rear-facing camera. When the electronic device 900 is in an operational mode, such as a shooting mode or a video mode, the front camera and/or the rear camera may receive external multimedia data. Each front camera and rear camera may be a fixed optical lens system or have focal length and optical zoom capability.
The audio component 910 is configured to output and/or input audio signals. For example, the audio component 910 includes a Microphone (MIC) configured to receive external audio signals when the electronic device 900 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may be further stored in the memory 904 or transmitted via the communication component 916. In some embodiments, the audio component 910 further includes a speaker for outputting audio signals.
The I/O interface 912 provides an interface between the processing component 902 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: homepage button, volume button, start button, and lock button.
The sensor assembly 914 includes one or more sensors for providing status assessments of various aspects of the electronic device 900. For example, the sensor assembly 914 may detect an on/off state of the electronic device 900 and the relative positioning of components, such as the display and keypad of the electronic device 900; the sensor assembly 914 may also detect a change in position of the electronic device 900 or a component thereof, the presence or absence of user contact with the electronic device 900, the orientation or acceleration/deceleration of the electronic device 900, and a change in temperature of the electronic device 900. The sensor assembly 914 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor assembly 914 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 914 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 916 is configured to facilitate communication between the electronic device 900 and other devices, either wired or wireless. The electronic device 900 may access a wireless network based on a communication standard, such as WiFi, an operator network (e.g., 2G, 3G, 4G, or 5G), or a combination thereof. In one exemplary embodiment, the communication component 916 receives broadcast signals or broadcast-related information from an external broadcast management system via a broadcast channel. In one exemplary embodiment, the communication component 916 further includes a Near Field Communication (NFC) module to facilitate short range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, ultra Wideband (UWB) technology, bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the electronic device 900 may be implemented by one or more Application Specific Integrated Circuits (ASICs), digital Signal Processors (DSPs), digital Signal Processing Devices (DSPDs), programmable Logic Devices (PLDs), field Programmable Gate Arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic elements for performing the image processing methods described above.
In an exemplary embodiment, a non-transitory computer readable storage medium is also provided, such as a memory 904 including instructions executable by the processor 920 of the electronic device 900 to perform the above-described image processing method. For example, the non-transitory computer readable storage medium may be ROM, random Access Memory (RAM), CD-ROM, magnetic tape, floppy disk, optical data storage device, etc.
In an exemplary embodiment, there is also provided a computer program product comprising a computer program stored in a readable storage medium, from which at least one processor of a device reads and executes the computer program, causing the device to perform the image processing method as described in the above embodiments.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any adaptations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It is to be understood that the present disclosure is not limited to the precise arrangements and instrumentalities shown in the drawings, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (20)

1. An image processing method, comprising:
When an object recognition result for a target object in an image changes during live broadcasting, acquiring a final modification form of the target object before the change occurs as an initial modification form of the target object; the target object is an object of modification processing; the object recognition result includes: recognizing the target object in the image or not recognizing the target object in the image; the target object comprises a human face; the modification processing includes face-thinning processing; the initial modification form is an original form of the face not subjected to the face-thinning processing, a final form of the face subjected to complete face-thinning processing, or an intermediate form between the original form and the final form;
acquiring a target modification form of the target object corresponding to the change; the target modification form is the final form of the face subjected to the complete face-thinning processing or the original form of the face not subjected to the face-thinning processing; and
displaying the target object in a gradual-change manner by taking the initial modification form and the target modification form as a starting form and an ending form of the gradual change, respectively.
2. The image processing method according to claim 1, wherein displaying the target object in a gradual-change manner by taking the initial modification form and the target modification form as the starting form and the ending form of the gradual change, respectively, comprises:
obtaining a modification amplitude of the target object according to the initial modification form and the target modification form; the modification amplitude is the amplitude by which the modification form of the target object changes; and
displaying the target object at a gradual-change speed adapted to the modification amplitude.
3. The image processing method according to claim 2, wherein obtaining the modification amplitude of the target object according to the initial modification form and the target modification form comprises:
obtaining, according to the initial modification form and the target modification form, a starting position and a target position, respectively, of a key point of the target object on the image; the key points are pixel points on the image that are used to perform the modification processing on the target object; and
obtaining the modification amplitude according to the distance between the starting position and the target position.
4. The image processing method according to claim 2, further comprising, before displaying the target object at the gradual-change speed adapted to the modification amplitude:
acquiring a preset modification duration threshold of the target object;
obtaining a modification speed threshold according to the modification amplitude and the modification duration threshold; and
acquiring a gradual-change speed not greater than the modification speed threshold as the gradual-change speed adapted to the modification amplitude.
5. The image processing method according to claim 4, wherein the gradual-change speed is a constant value not greater than the modification speed threshold.
6. The image processing method according to claim 2, wherein displaying the target object at the gradual-change speed adapted to the modification amplitude comprises:
gradually moving a key point of the target object from its starting position on the image to its target position on the image at the gradual-change speed, so as to show the target object gradually changing from the starting form to the ending form; the starting position and the target position of the key point on the image are obtained according to the initial modification form and the target modification form, respectively.
7. The image processing method according to claim 1, further comprising, before acquiring the final modification form of the target object before the change occurs as the initial modification form of the target object:
acquiring an object recognition result for the target object in the image; and
judging that the object recognition result has changed when the object recognition result changes from recognizing the target object in the image to not recognizing the target object in the image, or changes from not recognizing the target object in the image to recognizing the target object in the image.
8. The image processing method according to claim 1, wherein the target modification form is a final modification form of modification processing of the target object triggered by a change in the object recognition result.
9. The image processing method according to claim 8, wherein,
when the object recognition result changes from recognizing the target object in the image to not recognizing the target object in the image, the target modification form is a form in which the target object is not subjected to modification processing; and
when the object recognition result changes from not recognizing the target object in the image to recognizing the target object in the image, the target modification form is a form in which the target object is subjected to complete modification processing.
10. An image processing apparatus, comprising:
an initial form acquisition module configured to, when an object recognition result for a target object in an image changes during live broadcasting, acquire a final modification form of the target object before the change occurs as an initial modification form of the target object; the target object is an object of modification processing; the object recognition result includes: recognizing the target object in the image or not recognizing the target object in the image; the target object comprises a human face; the modification processing includes face-thinning processing; the initial modification form is an original form of the face not subjected to the face-thinning processing, a final form of the face subjected to complete face-thinning processing, or an intermediate form between the original form and the final form;
a target form acquisition module configured to acquire a target modification form of the target object corresponding to the change; the target modification form is the final form of the face subjected to the complete face-thinning processing or the original form of the face not subjected to the face-thinning processing; and
an object display module configured to display the target object in a gradual-change manner by taking the initial modification form and the target modification form as a starting form and an ending form of the gradual change, respectively.
11. The image processing apparatus of claim 10, wherein the object presentation module comprises:
an amplitude obtaining unit configured to obtain a modification amplitude of the target object according to the initial modification form and the target modification form; the modification amplitude is the amplitude by which the modification form of the target object changes; and
an object displaying unit configured to display the target object at a gradual-change speed adapted to the modification amplitude.
12. The image processing apparatus according to claim 11, wherein the amplitude obtaining unit is further configured to obtain, according to the initial modification form and the target modification form, a starting position and a target position, respectively, of a key point of the target object on the image; the key points are pixel points on the image that are used to perform the modification processing on the target object; and the modification amplitude is obtained according to the distance between the starting position and the target position.
13. The image processing apparatus according to claim 11, wherein the object displaying unit is further configured to acquire a preset modification duration threshold of the target object before displaying the target object at the gradual-change speed adapted to the modification amplitude; obtain a modification speed threshold according to the modification amplitude and the modification duration threshold; and acquire a gradual-change speed not greater than the modification speed threshold as the gradual-change speed adapted to the modification amplitude.
14. The image processing apparatus according to claim 13, wherein the gradual-change speed is a constant value not greater than the modification speed threshold.
15. The image processing apparatus according to claim 11, wherein the object displaying unit is further configured to gradually move a key point of the target object from its starting position on the image to its target position on the image at the gradual-change speed, so as to show the target object gradually changing from the starting form to the ending form; the starting position and the target position of the key point on the image are obtained according to the initial modification form and the target modification form, respectively.
16. The image processing apparatus according to claim 10, characterized in that the image processing apparatus further comprises:
a result acquisition unit configured to acquire an object recognition result for a target object in the image;
and a change judging unit configured to judge that the object recognition result has changed when the object recognition result changes from recognizing the target object in the image to not recognizing the target object in the image, or changes from not recognizing the target object in the image to recognizing the target object in the image.
17. The image processing apparatus according to claim 10, wherein the target modification form is a final modification form of modification processing of the target object triggered by a change in the object recognition result.
18. The image processing apparatus according to claim 17, wherein,
when the object recognition result changes from recognizing the target object in the image to not recognizing the target object in the image, the target modification form is a form in which the target object is not subjected to modification processing; and
when the object recognition result changes from not recognizing the target object in the image to recognizing the target object in the image, the target modification form is a form in which the target object is subjected to complete modification processing.
19. An electronic device, comprising:
A processor;
A memory for storing the processor-executable instructions;
Wherein the processor is configured to execute the instructions to implement the image processing method of any one of claims 1 to 9.
20. A storage medium storing instructions which, when executed by a processor of an electronic device, enable the electronic device to perform the image processing method of any one of claims 1 to 9.
CN202010208723.6A 2020-03-23 2020-03-23 Image processing method, device, electronic equipment and storage medium Active CN111340690B (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
CN202010208723.6A CN111340690B (en) 2020-03-23 2020-03-23 Image processing method, device, electronic equipment and storage medium
JP2022549506A JP2023514340A (en) 2020-03-23 2020-11-30 Image processing method, electronic device, and storage medium
PCT/CN2020/132994 WO2021189927A1 (en) 2020-03-23 2020-11-30 Image processing method and apparatus, electronic device, and storage medium
US17/820,026 US20220392253A1 (en) 2020-03-23 2022-08-16 Method for processing images, electronic device, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010208723.6A CN111340690B (en) 2020-03-23 2020-03-23 Image processing method, device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN111340690A CN111340690A (en) 2020-06-26
CN111340690B true CN111340690B (en) 2024-05-14

Family

ID=71182531

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010208723.6A Active CN111340690B (en) 2020-03-23 2020-03-23 Image processing method, device, electronic equipment and storage medium

Country Status (4)

Country Link
US (1) US20220392253A1 (en)
JP (1) JP2023514340A (en)
CN (1) CN111340690B (en)
WO (1) WO2021189927A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111340690B (en) * 2020-03-23 2024-05-14 北京达佳互联信息技术有限公司 Image processing method, device, electronic equipment and storage medium
CN116503289B (en) * 2023-06-20 2024-01-09 北京天工异彩影视科技有限公司 Visual special effect application processing method and system

Citations (3)

Publication number Priority date Publication date Assignee Title
CN102567740A (en) * 2010-12-14 2012-07-11 苏州大学 Image recognition method and system
CN110333924A (en) * 2019-06-12 2019-10-15 腾讯科技(深圳)有限公司 A kind of image morphing method of adjustment, device, equipment and storage medium
CN110580691A (en) * 2019-09-09 2019-12-17 京东方科技集团股份有限公司 dynamic processing method, device and equipment of image and computer readable storage medium

Family Cites Families (4)

Publication number Priority date Publication date Assignee Title
WO2017004597A1 (en) * 2015-07-02 2017-01-05 Privowny, Inc. Systems and methods for media privacy
CN109040780A (en) * 2018-08-07 2018-12-18 北京优酷科技有限公司 A kind of method for processing video frequency and server
CN109523461A (en) * 2018-11-09 2019-03-26 北京达佳互联信息技术有限公司 Method, apparatus, terminal and the storage medium of displaying target image
CN111340690B (en) * 2020-03-23 2024-05-14 北京达佳互联信息技术有限公司 Image processing method, device, electronic equipment and storage medium


Also Published As

Publication number Publication date
US20220392253A1 (en) 2022-12-08
JP2023514340A (en) 2023-04-05
WO2021189927A1 (en) 2021-09-30
CN111340690A (en) 2020-06-26

Similar Documents

Publication Publication Date Title
US10565763B2 (en) Method and camera device for processing image
US9674395B2 (en) Methods and apparatuses for generating photograph
US20170103733A1 (en) Method and device for adjusting and displaying image
CN108470322B (en) Method and device for processing face image and readable storage medium
CN107341777B (en) Picture processing method and device
CN110677734B (en) Video synthesis method and device, electronic equipment and storage medium
CN107515669B (en) Display method and device
CN107463903B (en) Face key point positioning method and device
EP3113071A1 (en) Method and device for acquiring iris image
CN112188091B (en) Face information identification method and device, electronic equipment and storage medium
CN112330570A (en) Image processing method, image processing device, electronic equipment and storage medium
CN111340691B (en) Image processing method, device, electronic equipment and storage medium
CN111340690B (en) Image processing method, device, electronic equipment and storage medium
CN109145878B (en) Image extraction method and device
CN112004020B (en) Image processing method, image processing device, electronic equipment and storage medium
CN107105311B (en) Live broadcasting method and device
CN105635573B (en) Camera visual angle regulating method and device
CN110502993B (en) Image processing method, image processing device, electronic equipment and storage medium
CN106469446B (en) Depth image segmentation method and segmentation device
CN111832455A (en) Method, device, storage medium and electronic equipment for acquiring content image
CN112565625A (en) Video processing method, apparatus and medium
CN115914721A (en) Live broadcast picture processing method and device, electronic equipment and storage medium
CN116092147A (en) Video processing method, device, electronic equipment and storage medium
CN113315903B (en) Image acquisition method and device, electronic equipment and storage medium
US11252341B2 (en) Method and device for shooting image, and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant