CN113034351A - Image processing method and device - Google Patents

Image processing method and device

Info

Publication number
CN113034351A
Authority
CN
China
Prior art keywords
image
edge
standard
compensation
contour
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110321265.1A
Other languages
Chinese (zh)
Inventor
周梦涵
陈龙
袁瑞峰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lenovo Beijing Ltd filed Critical Lenovo Beijing Ltd
Priority to CN202110321265.1A
Publication of CN113034351A
Legal status: Pending (current)

Classifications

    • G06T3/04
    • G06T3/18
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformation in the plane of the image
    • G06T3/40Scaling the whole image or part thereof
    • G06T3/403Edge-driven scaling

Abstract

The embodiment of the application provides an image processing method and device. If an object in a first image is located at the edge of the first image, a compensation image is obtained, where the compensation image comprises at least the first image and an edge prediction image. Contraction deformation is then performed on the object contained in the compensation image. Because the edge of the first image is not the edge of the compensation image, after the pixels at the edge of the first image are stretched towards the middle of the compensation image, there is additional edge information (namely the edge prediction image) with which the edge of the first image can be filled, so the edge position of the image does not deform abnormally, thereby improving the display effect of the image.

Description

Image processing method and device
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image processing method and apparatus.
Background
With the continuous development of image processing applications, various techniques exist for performing contraction deformation processing on an object contained in an image, for example, performing a face-slimming operation on a face contained in the image, or a body-slimming operation on a body contained in the image.
After the contraction deformation processing is performed on an object contained in an image, an abnormally deformed image may be displayed at the edge position of the image, so that the image processing effect is poor.
Disclosure of Invention
In view of the above, the present application provides an image processing method and an image processing apparatus, so as to at least solve the problem that an abnormally deformed image is displayed at an edge position of an image after an object included in the image is subjected to a contraction deformation process.
In order to achieve the above purpose, the present application provides the following technical solutions:
according to a first aspect of embodiments of the present disclosure, there is provided an image processing method, including:
acquiring the position of an object in a first image when detecting that a contraction deformation operation is performed on the object contained in the first image;
obtaining a compensation image under the condition that the position represents that the object and the edge of the first image satisfy a first distance condition; the compensation image at least comprises the first image and an edge prediction image, and the object and the edge of the compensation image satisfy a second distance condition;
performing contraction deformation on the object contained in the compensation image to obtain a second image;
and controlling an image display interface to at least display an image of a target area in the second image, wherein the target area is an area including the object.
According to a second aspect of the embodiments of the present disclosure, there is provided an image processing apparatus including:
a first acquisition module, configured to acquire a position of an object in a first image when detecting that a contraction deformation operation is performed on the object included in the first image;
the second acquisition module is used for acquiring a compensation image under the condition that the position represents that the object and the edge of the first image satisfy a first distance condition; the compensation image at least comprises the first image and an edge prediction image, and the object and the edge of the compensation image satisfy a second distance condition;
a shrinkage deformation module, configured to perform shrinkage deformation on the object included in the compensation image to obtain a second image;
and the control display module is used for controlling the image display interface to display at least the image of the target area in the second image, wherein the target area is an area including the object.
According to a third aspect of the embodiments of the present disclosure, there is provided an electronic apparatus including:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the method of image processing as described in the first aspect.
According to a fourth aspect of embodiments of the present disclosure, there is provided a computer-readable storage medium, wherein instructions, when executed by a processor of an electronic device, enable the electronic device to perform the image processing method according to the first aspect.
According to a fifth aspect of the embodiments of the present disclosure, there is provided a computer program product directly loadable into an internal memory of a computer, the computer program product containing software code which, when loaded into and executed by the computer, implements the image processing method according to the first aspect.
As can be seen from the foregoing technical solutions, an embodiment of the present application provides an image processing method in which, if an object in a first image is located at the edge of the first image, a compensation image composed of at least the first image and an edge prediction image is obtained. Contraction deformation is then performed on the object contained in the compensation image. Because the edge of the first image in the compensation image is not the edge of the compensation image, after the pixels at the edge of the first image are stretched towards the middle of the compensation image, there is additional edge information (namely the edge prediction image) with which the edge of the first image can be filled, so the edge position of the image does not deform abnormally, thereby improving the display effect of the image.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed to be used in the description of the embodiments or the prior art will be briefly introduced below, it is obvious that the drawings in the following description are only embodiments of the present application, and for those skilled in the art, other drawings can be obtained according to the provided drawings without creative efforts.
FIG. 1 is a diagram illustrating a process of performing a shrinkage deformation operation on an object included in an image according to the related art;
fig. 2 is a flowchart of an image processing method according to an embodiment of the present application;
fig. 3 is a schematic diagram of possible situations of an object included in a first image according to an embodiment of the present disclosure;
fig. 4 is a schematic diagram illustrating a relationship between a first image and a predicted edge image according to an embodiment of the present disclosure;
FIG. 5 is a schematic diagram of a process of performing shrinkage deformation on an object included in a compensated image according to an embodiment of the present disclosure;
FIG. 6 is a schematic diagram of correspondence between standard key points of a face image before shrinkage deformation and a face image after shrinkage deformation according to an embodiment of the present application;
fig. 7 is a schematic diagram illustrating a relationship between a plurality of standard key points constituting the contour and the first image according to an embodiment of the present application;
fig. 8 is a process diagram for obtaining an edge prediction image from a third image according to an embodiment of the present application;
fig. 9 is a block diagram of an image processing apparatus according to an embodiment of the present application;
FIG. 10 is a block diagram illustrating an electronic device in accordance with an exemplary embodiment.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The embodiment of the application provides an image processing method and device, and before introducing the technical scheme provided by the embodiment of the application, the related technology related to the application is explained.
In the related art, during a contraction deformation operation on an object contained in an image, if the object is located at the edge of the image, there is a problem when the contraction deformation processing is performed: the pixel information at the image edge is limited, so after the image edge pixels are stretched towards the middle of the image, there is no further edge information with which to fill the vacated region. In this case, the vacated region may be filled with a local area of the image close to the object, or with a preset image.
Fig. 1 is a schematic diagram illustrating a process of performing a shrinkage deformation operation on an object included in an image in the related art.
For example, the electronic device performing the contraction deformation operation on the object contained in the image may be any electronic product that can interact with a user through one or more ways such as a keyboard, a touch pad, a touch screen, a remote controller, a voice interaction device, or a handwriting device, for example, a mobile phone, a laptop, a tablet computer, a palmtop computer, a personal computer, a wearable device, or a smart television.
Fig. 1 illustrates an example of an electronic device as a mobile phone.
For example, the electronic device 11 may capture an image of the user 12 through a camera configured on it to obtain an original image 13, and perform contraction deformation processing on the face 10 of the user 12 based on the original image 13. For example, the face of the user 12 may be contracted and deformed based on standard key points of the human face in the captured image, so that the electronic device 11 displays the slimmed image on the display interface 16 of the screen.
Specifically, the electronic device 11 captures an original image 13 of the user 12 and performs contraction deformation processing on it; the dashed box indicated by the arrow illustrates the image processing performed on the original image 13, which can be carried out by an application program with a beautifying function installed on the electronic device 11. The original image 13 acquired by the electronic device 11 may contain a face image 10 corresponding to the face of the user 12, and the face image 10 may serve as the object; it can be understood that if contraction deformation processing needs to be performed on other parts of the user 12, such as an arm or an eye, the arm or the eye may serve as the object accordingly. The electronic device 11 may then perform contraction deformation processing, here specifically face-thinning processing, on the face image 10 in the original image 13 by using the application program with the beautifying function, so as to obtain a face-thinning image 14 or a face-thinning image 15, where the face-thinning image 14 or 15 contains a thinned face image 141 or 151 corresponding to the face of the user 12. The electronic device 11 may then present the face-thinning image 14 or 15 on the display interface 16, so that the user 12 can see the thinned face on the display interface 16.
However, because the face image contained in the original image 13 is located at the edge of the original image 13, the face image located at the edge needs to be stretched upwards during the contraction deformation. Since the pixel information at the edge of the original image 13 is limited, after the face image is stretched upwards there is no further edge information with which to fill the edge. The face-thinning image 14 is formed by filling a preset image (taking a black image as an example) at its edge position; the face-thinning image 15 fills the chin located at the edge of the original image 13 into its edge position, so the chin in the face-thinning image 15 is elongated.
In summary, in the related art, when performing the contraction deformation processing on the object in the image, if the object is located at the edge of the image, the edge of the image after the contraction deformation is abnormally deformed, so that the display effect of the image is poor.
Based on this, embodiments of the present application provide an image processing method in which, if an object in an original image (referred to in the embodiments as a first image) is located at the edge of the original image, a compensation image composed of at least the first image and an edge prediction image is obtained; the object is not located at the edge of the compensation image. The object contained in the compensation image is then contracted and deformed. Because the object is not located at the edge position of the compensation image, the edge of the image does not deform abnormally after the contraction deformation, thereby improving the display effect of the image.
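To make the idea concrete, the following minimal sketch (Python with OpenCV and NumPy; not part of the patent text) pads the bottom of a first image with a naively predicted strip, applies a toy upward warp standing in for a real slimming deformation, and crops back to the original area. The function names, the default 20-pixel lift, and the use of border replication as the "edge prediction" are all assumptions for illustration.

```python
import numpy as np
import cv2

def shrink_bottom(img: np.ndarray, lift: int) -> np.ndarray:
    """Toy contraction deformation: every output pixel samples from `lift`
    rows below it, so the content moves up; a stand-in for a slimming warp."""
    h, w = img.shape[:2]
    map_x, map_y = np.meshgrid(np.arange(w, dtype=np.float32),
                               np.arange(h, dtype=np.float32))
    map_y = np.clip(map_y + lift, 0, h - 1)
    return cv2.remap(img, map_x, map_y, interpolation=cv2.INTER_LINEAR)

def process(first_image: np.ndarray, lift: int = 20) -> np.ndarray:
    h, w = first_image.shape[:2]
    # Compensation image: the first image plus a predicted strip below it.
    # BORDER_REPLICATE is a naive stand-in for the edge prediction image.
    comp = cv2.copyMakeBorder(first_image, 0, lift, 0, 0, cv2.BORDER_REPLICATE)
    second = shrink_bottom(comp, lift)   # warp the compensation image
    return second[:h, :w]                # display only the target area
```

Because the warp pulls content from the padded strip rather than from undefined pixels, the rows that scroll into view at the bottom of the cropped result come from the edge prediction rather than from a preset filler image.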
The following describes an image processing method provided in an embodiment of the present application.
As shown in fig. 2, a flowchart of an image processing method provided in an embodiment of the present application includes the following steps S21 to S24.
Step S21: in the case where it is detected that a contraction deformation operation is performed with respect to an object included in a first image, a position of the object in the first image is acquired.
For example, the first image may be an original image acquired by the electronic device in real time or an image stored by the electronic device.
For example, the image processing method provided by the embodiment of the present application may be a processing method for an image containing an object in a video, so as to avoid the situation that an edge of a certain frame of video image is abnormally deformed in a live broadcast or video call process.
For example, the image processing method provided by the embodiment of the present application may be a processing method for an image including an object (the image is not a certain image included in a video).
Illustratively, the shrink warping operation may be performed on an object included in the first image by a client running in the electronic device.
Illustratively, the client may be an application client or a web page version client.
For example, the object may be any one of objects that need to be reduced, such as a human face, an arm, a leg, a nose, and a mouth, or may be a local region of the object that needs to be reduced. The following describes an object included in the first image with reference to an example.
As shown in fig. 1, if the first image is the original image 13, the object included in the first image is the whole human face.
Fig. 3 is a schematic diagram of possible situations of an object included in a first image according to an embodiment of the present disclosure.
As shown in fig. 3, if the first image is the image 31, the object contained in the first image is the half of the human face indicated by the broken line in the image 31.
In summary, the object included in the first image referred to in the embodiments of the present application refers to an object that is located in the first image and needs to be shrunk and deformed.
Step S22: obtaining a compensation image in a case where the position represents that the object and an edge of the first image satisfy a first distance condition.
The compensation image is composed of at least the first image and an edge prediction image, and the object and the edge of the compensation image satisfy a second distance condition.
Illustratively, the size of the compensation image is equal to the sum of the size of the first image and the size of the edge prediction image; alternatively, the size of the compensation image may be less than or equal to that sum.
The size of the compensation image may be determined based on actual circumstances and is not limited here.
The first distance condition is explained below.
Since the first distance condition involves the standard object corresponding to the object, the standard object is described first.
For example, the object contained in the first image may be a local region of a certain object; for example, the object contained in the image 31 shown in fig. 3 is a local region of a human face. The complete object corresponding to the "object" in the embodiments of the present application is referred to as a standard object. If the object contained in the image 31 is half a face, the standard object corresponding to the object is the whole face; if the object contained in the original image 13 is the whole face, the standard object corresponding to the object is the object itself.
Illustratively, the first distance condition includes: the standard object corresponding to the object is truncated by the edge of the first image. As shown in fig. 3, the whole face is truncated by the edge of the image 31.
Illustratively, the first distance condition includes: the standard object corresponding to the object is not truncated by the edge of the first image (such as the face contained in the original image shown in fig. 1), but the standard object is located at the edge of the first image.
The second distance condition is explained below.
The second distance condition includes at least: the object in the compensation image is not located at the edge of the compensation image.
Because the object contained in the compensation image is not located at the edge of the compensation image, performing contraction deformation processing on the object in the compensation image does not cause abnormal deformation at the edge of the image.
The following describes an edge prediction image.
If the object and the target edge of the first image satisfy the first distance condition, the edge prediction image includes an image located outside the target edge of the first image.
For example, the first image may include a plurality of edges and the target edge includes one or more edges.
Illustratively, the number of edges of the first image is related to the shape of the first image; for example, it equals the number of sides of that shape: if the first image is a rectangle, the first image has 4 edges.
For example, the shape of the first image may be a triangle, a quadrangle, a pentagon, …; the shape of the first image is not limited in the embodiments of the present application.
The first image is described below as a rectangle. The first image includes 4 edges, which are a first edge, a second edge, a third edge, and a fourth edge, respectively. Illustratively, the first image includes an object that satisfies a first distance condition from at least one edge of the first image.
Fig. 4 is a schematic diagram illustrating a relationship between a first image and a predicted edge image according to an embodiment of the present application.
In fig. 4, if the first image is the image 31, the object included in the first image and the first edge (i.e., the target edge is the first edge) satisfy the first distance condition, and the edge prediction image may be the edge prediction image 41 shown in fig. 4.
In fig. 4, if the first image is the original image 13, the object and the fourth edge (i.e., the target edge is the fourth edge) included in the first image satisfy the first distance condition, and the edge prediction image may be the edge prediction image 42 shown in fig. 4.
The compensation image comprising the image 31 and the edge prediction image 41 is the compensation image 43 shown in fig. 4. The compensated image comprising the original image 13 and the edge prediction image 42 is the compensated image 44 shown in fig. 4.
The dashed lines in the compensation images 43 and 44 indicate that the compensation images include the first image and the edge prediction image; they may not be present in practical applications.
Step S23: and performing contraction deformation on the object contained in the compensation image to obtain a second image.
The compensation image contains the same object as the first image.
Fig. 5 is a schematic diagram of a process of performing shrinkage deformation on an object included in a compensation image according to an embodiment of the present application.
Fig. 5 illustrates an example of the compensated image 44.
The face contained in the compensation image 44 is shrunk and deformed to obtain a second image 51.
In the compensation image 44, the first region, where the first image is located, is above the dotted line, and the second region, where the edge prediction image is located, is below the dotted line. In the second image 51, the area above the dotted line corresponds to the first region and the area below the dotted line corresponds to the second region.
As can be seen from the part of the second image 51 above the dotted line, the neck portion under the face is stretched upwards. Since the edge of the first image in the compensation image is not the edge of the compensation image, after the pixels at the edge of the first image are stretched towards the middle of the compensation image, the edge of the first image can be filled with additional edge information (namely, the edge prediction image), so no abnormal deformation occurs.
Step S24: and controlling an image display interface to at least display an image of a target area in the second image, wherein the target area is an area including the object.
Illustratively, the target region is a local region or a whole region of a region where the second image is located.
Illustratively, the size of the compensation image and the size of the second image are both equal to the sum of the sizes of the first image and the edge prediction image. The compensation image includes two regions: a first region where the first image is located and a second region where the edge prediction image is located. Since the second image corresponds to the compensation image, the second image also includes the first region and the second region; illustratively, the target region is the first region.
Illustratively, if the size of the compensation image is larger or smaller than the sum of the sizes of the first image and the edge prediction image, the compensation image includes a first region corresponding to the first image and a second region corresponding to the edge prediction image. The image content in the first region is the image content of the first image, and the image content in the second region is the image content of the edge prediction image. Since the second image corresponds to the compensation image, the second image also includes the first region and the second region; illustratively, the target region is the first region.
For example, the image in the target area may be enlarged or reduced to the size of the first image and then displayed.
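As a hedged sketch of step S24, continuing the naming and the bottom-padded layout assumed in the earlier example: the first region is cropped out of the second image and scaled to the first image's size for display.

```python
import numpy as np
import cv2

def display_target(second_image: np.ndarray, first_h: int, first_w: int) -> np.ndarray:
    # Target area = the first region; in the bottom-padded layout of the
    # earlier sketch this is the top block of the second image.
    target = second_image[:first_h, :first_w]
    # Per the description above, the target area may be enlarged or reduced
    # to the size of the first image before being displayed.
    return cv2.resize(target, (first_w, first_h))
```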
The embodiment of the application provides an image processing method: if an object in a first image is located at the edge of the first image, a compensation image comprising at least the first image and an edge prediction image is obtained. Contraction deformation is then performed on the object contained in the compensation image. Because the edge of the first image in the compensation image is not the edge of the compensation image, after the pixels at the edge of the first image are stretched towards the middle of the compensation image, there is additional edge information (namely the edge prediction image) with which the edge of the first image can be filled, so the edge position of the image does not deform abnormally, thereby improving the display effect of the image.
In an alternative implementation manner, the image processing method provided by the embodiment of the present application includes a step of determining that the position represents that the object and the edge of the first image satisfy the first distance condition. This step can be implemented in various ways; the embodiment of the present application provides, but is not limited to, a method including step A1 to step A2.
Step A1: determining the outline of a standard object corresponding to an object contained in the first image, wherein the object contained in the first image is a local area or a whole area of the standard object.
The relationship between the object and the standard object can be referred to the above description, and is not described herein again.
For example, if the standard object corresponding to the object is located entirely in the first image, the contour of the standard object is the contour of the object; if the standard object corresponding to the object is truncated by the edge of the first image, the contour of the standard object includes the contour of the object.
Step A2: determining, in a case where the contour meets a preset condition, that the position represents that the object and the edge of the first image satisfy the first distance condition.
The preset condition includes: a part of the contour is located outside the first image; and/or the contour is located entirely in the first image, and the distance from a part of the contour to the boundary of the first image is less than or equal to a first threshold.
Illustratively, a part of the contour being located outside the first image means that the standard object is truncated by the edges of the first image.
Illustratively, the first threshold is determined based on the number of stretched pixels corresponding to the contraction deformation degree parameter of the contraction deformation operation. Illustratively, the first threshold is less than or equal to the number of stretched pixels.
For example, the user may perform a shrinkage deformation operation on the object included in the first image, and during the performing of the shrinkage deformation operation, a shrinkage deformation degree parameter may be selected.
For example, the electronic device may display a contraction deformation bar, and the user may drag a control on the bar; different positions of the control on the bar correspond to different degrees of contraction deformation of the object. Illustratively, the contraction deformation degree parameter includes the number of stretched pixels.
The number of stretched pixels is the number of pixels by which the boundary of the object needs to be stretched towards the center of the image during the contraction deformation processing of the object.
For example, the first threshold may be a fixed value; for example, the fixed value may be the maximum number of stretched pixels over the contraction deformation degree parameters supported by the contraction deformation operation.
The first threshold may be determined based on actual conditions, and is not limited herein.
Illustratively, the second distance condition includes that the minimum distance from the object to the edge of the compensation image is greater than or equal to the number of stretched pixels.
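The two distance conditions can be summarized in the following illustrative sketch; taking the first threshold equal to the number of stretched pixels is only one of the options described above, and all names are assumptions rather than the patent's terminology.

```python
def first_distance_condition(contour_partly_outside: bool,
                             min_dist_to_image_edge: float,
                             stretch_px: int) -> bool:
    # Preset condition: part of the contour lies outside the first image, or
    # the contour is fully inside but closer to the border than the first
    # threshold (taken here as the number of stretched pixels).
    first_threshold = stretch_px
    return contour_partly_outside or min_dist_to_image_edge <= first_threshold

def second_distance_condition(min_dist_to_comp_edge: float,
                              stretch_px: int) -> bool:
    # The object must stay at least stretch_px away from the compensation
    # image's edge so the stretched pixels always have content to fill from.
    return min_dist_to_comp_edge >= stretch_px
```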
In an alternative implementation manner, there are various implementation manners of step A1. The embodiment of the present application provides, but is not limited to, the following manner: acquiring a plurality of standard key points of the standard object corresponding to the object contained in the first image, to obtain the contour composed of the plurality of standard key points.
In an alternative implementation manner, there are various implementations of obtaining a plurality of standard key points of the standard object corresponding to the object included in the first image, and the embodiment of the present application provides, but is not limited to, the following method, which includes step B1 to step B2.
Step B1: and acquiring a standard object model corresponding to the target object type to which the object belongs.
The standard object model comprises a plurality of standard keypoints on the standard object belonging to the target object type, wherein the plurality of standard keypoints constitute an outline of the standard object.
Exemplary object types include, but are not limited to: any type of retractable object such as a human face, a nose, a mouth, arms, legs, and the like.
Step B2: marking a plurality of standard key points of the standard object corresponding to the object contained in the first image based on the standard object model.
Each object type can respectively correspond to at least one standard object model, the standard object model can comprise a standard key point set of a corresponding standard object, and the standard object can be marked through a plurality of standard key points contained in the standard key point set.
Illustratively, the standard key points in the standard object before the shrinkage deformation and the standard key points in the standard object after the shrinkage deformation are in one-to-one correspondence. In the process of the shrinkage deformation processing, the standard key points in the standard object before shrinkage deformation are moved to obtain the standard key points of the standard object after shrinkage deformation, so as to obtain the object after shrinkage deformation. The following description will take the object type as a face as an example.
As shown in fig. 6, the standard key points of the pre-deformation face image 61 and those of the post-deformation face image 62 are in one-to-one correspondence, and each standard key point is given a corresponding serial number. If the face image 61 before the contraction deformation comprises 20 standard key points, the serial numbers of these 20 standard key points are A1, A2, …, A20 in sequence; the face image 62 after the contraction deformation comprises 20 standard key points whose serial numbers are B1, B2, …, B20 in sequence; the standard key point Ai corresponds to the standard key point Bi, i = 1, 2, …, 20. In the process of the contraction deformation processing, the electronic device may obtain the standard key points Bi of the face image 62 after the deformation by moving the positions of the standard key points Ai of the face image 61 before the deformation.
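A minimal sketch of this Ai-to-Bi move: each pre-deformation key point is pulled towards the face centre by a fraction tied to the deformation degree. The 0.1 factor and the mean-as-centre choice are assumptions for illustration, not the patent's prescribed method.

```python
import numpy as np

def move_keypoints(points_a: np.ndarray, strength: float = 0.1) -> np.ndarray:
    """points_a: (20, 2) array of key points A1..A20; returns B1..B20,
    each Ai pulled toward the face centre by `strength`."""
    center = points_a.mean(axis=0)                     # rough face centre
    return points_a + strength * (center - points_a)   # Bi pulled inward
```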
In an alternative implementation, the method for determining whether the contour of the standard object corresponding to the object included in the first image is located in the first image includes the following steps C1 and C2.
Step C1: determining that a part of the contour is located outside the first image in a case where at least one of the standard key points constituting the contour is located outside the first image.
Step C2: determining that the contour is entirely located in the first image in a case where the plurality of standard key points constituting the contour are all located in the first image.
Fig. 7 is a schematic diagram illustrating a relationship between a plurality of standard key points constituting the contour and the first image according to the embodiment of the present application.
As shown on the left side of fig. 7, the first image 71 (the image inside the rectangle is the first image) contains an object which is a partial face. Among the plurality of standard key points of the standard object corresponding to the partial face, 8 standard key points are not located in the first image, i.e., they are located outside the first image; in this case it is determined that a part of the contour is located outside the first image.
As shown in the first image 72 on the right side of fig. 7 (the image inside the rectangle is the first image), the object contained in the first image is the whole face. The plurality of standard key points of the standard object corresponding to the whole face are all located in the first image; in this case it is determined that the contour is entirely located in the first image.
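Steps C1 and C2 amount to a point-in-image test over the contour's key points, as in the following sketch; representing the key points as an (N, 2) coordinate array is an assumption for the example.

```python
import numpy as np

def contour_location(keypoints: np.ndarray, img_w: int, img_h: int) -> str:
    x, y = keypoints[:, 0], keypoints[:, 1]
    outside = (x < 0) | (x >= img_w) | (y < 0) | (y >= img_h)
    # Step C1: any key point outside -> part of the contour is outside.
    # Step C2: all key points inside -> the contour is entirely inside.
    return "partly outside" if outside.any() else "entirely inside"
```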
In an alternative implementation manner, there are various implementation manners of step S22, and the embodiment of the present application provides, but is not limited to, the following two.
The first implementation of step S22 includes the following steps D11 to D13.
Step D11: and acquiring a third image, wherein the third image comprises the edge prediction image.
For example, if the first image is an image in a video, the object may be moving while the video is captured. That is, although the object in the first image and the edge of the first image satisfy the first distance condition, there may exist a third image in the video in which the object and the edge of the third image do not satisfy the first distance condition, i.e., the object contained in the third image is not at the edge of the third image. An edge prediction image can then be obtained from the third image.
For example, a user may capture a plurality of images containing the object through an electronic device, and a third image may exist in the plurality of images containing the object.
Step D12: and acquiring the edge prediction image from the third image under the condition that the position represents that the object and the edge of the first image satisfy the first distance condition.
Illustratively, step D12 may include: determining a target boundary of the object, wherein the target boundary and the edge of the first image meet a first distance condition; obtaining an image located outside the target boundary of the object from the third image.
As shown in fig. 8, a process diagram for obtaining an edge prediction image from a third image according to an embodiment of the present application is provided.
Fig. 8 illustrates an example in which the first image is the original image 13. The target boundary of the object contained in the first image is indicated by the dashed line in the original image 13 in fig. 8.
Assuming that the third image is the image 81, the predicted edge image may be a portion of the image 81 outlined by a dashed box.
Step D13: and splicing the edge prediction image and the first image to obtain the compensation image.
Illustratively, step D13 may include: determining, from the first image, a target edge that satisfies the first distance condition with the object; and splicing the edge prediction image at the target edge of the first image.
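A sketch of steps D11 to D13 for the bottom-edge case. It assumes the third image has already been registered to the first image's coordinates on a taller canvas, so that the rows below the first image's height are exactly the strip beyond the target edge; both the alignment step and the bottom-edge choice are assumptions for illustration.

```python
import numpy as np

def build_compensation_from_third(first_img: np.ndarray,
                                  third_img: np.ndarray,
                                  pad_px: int) -> np.ndarray:
    h = first_img.shape[0]
    # Step D12: take the strip lying beyond the target (bottom) edge.
    edge_prediction = third_img[h:h + pad_px]
    # Step D13: splice it below the first image at the target edge.
    return np.vstack([first_img, edge_prediction])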
For example, if the electronic device does not store the third image, the edge prediction image may be obtained in the second implementation manner.
The second implementation of step S22 includes the following steps D21 to D22.
Step D21: and repairing the image to be edge-repaired based on the first image to obtain the edge prediction image.
Image inpainting refers to the process of reconstructing missing or corrupted portions of an image, i.e., filling in lost or damaged image data by means of algorithms.
In the embodiment of the application, the image to be edge-repaired is treated as the lost or damaged part of the compensation image. The compensation image to be repaired comprises at least the first image and the image to be edge-repaired; repairing the compensation image is thus a process of repairing the image to be edge-repaired based on the first image.
Illustratively, step D21 includes: determining, from the first image, a target edge that satisfies the first distance condition with the object; and repairing, based on the first image, the outer image located at the target edge of the first image to obtain the edge prediction image. The image to be edge-repaired is the outer image located at the target edge of the first image.
Step D22: and splicing the edge prediction image and the first image to obtain the compensation image.
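A sketch of steps D21 and D22 using OpenCV's inpainting as one possible repair method; the patent does not prescribe a specific algorithm, and the bottom target edge, the TELEA flag and the radius are assumptions. Because the repair is performed directly on the extended canvas, the splicing of step D22 falls out of the same call.

```python
import numpy as np
import cv2

def build_compensation_by_inpainting(first_img: np.ndarray, pad_px: int) -> np.ndarray:
    """first_img: 8-bit 1- or 3-channel image (a requirement of cv2.inpaint)."""
    h, w = first_img.shape[:2]
    # Extend the canvas; the new strip is the image to be edge-repaired.
    comp = cv2.copyMakeBorder(first_img, 0, pad_px, 0, 0,
                              cv2.BORDER_CONSTANT, value=0)
    mask = np.zeros((h + pad_px, w), dtype=np.uint8)
    mask[h:, :] = 255  # mark the unknown strip as the region to repair
    # Step D21: repair the strip from the first image's content; the result
    # already contains the first image plus the edge prediction (step D22).
    return cv2.inpaint(comp, mask, inpaintRadius=5, flags=cv2.INPAINT_TELEA)
```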
The method has been described in detail in the embodiments disclosed above. The method of the present application can be implemented by means of various types of apparatuses, so corresponding apparatuses are also disclosed in the present application, and specific embodiments are given below for detailed description.
As shown in fig. 9, a block diagram of an image processing apparatus according to an embodiment of the present application includes: a first acquisition module 91, a second acquisition module 92, a shrinkage deformation module 93, and a control display module 94, wherein:
a first obtaining module 91, configured to, in a case where it is detected that a contraction deformation operation is performed on an object included in a first image, obtain a position of the object in the first image;
a second obtaining module 92, configured to obtain a compensation image in a case where the position represents that the object and the edge of the first image satisfy a first distance condition; the compensation image at least comprises the first image and an edge prediction image, and the object and the edge of the compensation image satisfy a second distance condition;
a shrinkage deformation module 93, configured to perform shrinkage deformation on the object included in the compensation image to obtain a second image;
and a control display module 94, configured to control a display interface to display at least an image of a target area in the second image, where the target area is an area including the object.
In an optional implementation manner, the method further includes:
the first determining module is used for determining the outline of a standard object corresponding to an object contained in the first image, wherein the object contained in the first image is a local area or a whole area of the standard object;
the second determining module is used for determining that the position represents that the object and the edge of the first image satisfy a first distance condition under the condition that the contour meets a preset condition; the preset condition includes: a part of the contour is located outside the first image; and/or the contour is located entirely in the first image, and the distance from a part of the contour to the boundary of the first image is less than or equal to a first threshold.
In an optional implementation, the first determining module includes:
a first obtaining unit, configured to obtain a plurality of standard key points of the standard object corresponding to an object included in the first image, so as to obtain the contour formed by the plurality of standard key points.
In an optional implementation manner, the first obtaining unit includes:
the system comprises a first obtaining subunit, a second obtaining subunit, a third obtaining subunit, a fourth obtaining subunit, a fifth obtaining subunit, a sixth obtaining subunit, a fifth obtaining subunit, a sixth obtaining subunit;
and the marking subunit is used for marking a plurality of standard key points of the standard object corresponding to the object contained in the first image based on the standard object model.
In an optional implementation manner, the method further includes:
a third determining module, configured to determine that a part of the contour is located outside the first image if at least one of the plurality of standard keypoints constituting the contour is located outside the first image;
a fourth determining module, configured to determine that the contour is completely located in the first image if a plurality of standard keypoints that constitute the contour are all located in the first image.
In an optional implementation manner, the second obtaining module includes:
a second obtaining unit, configured to obtain a third image, where the third image includes the edge prediction image;
a fourth obtaining unit, configured to obtain the edge prediction image from the third image in a case where the position represents that the object and the edge of the first image satisfy a first distance condition;
and the first splicing unit is used for splicing the edge prediction image and the first image to obtain the compensation image.
In an alternative implementation, the shrinkage deformation operation corresponds to a shrinkage deformation degree parameter, and the shrinkage deformation degree parameter includes the number of stretched pixels; the second distance condition includes that the minimum distance from the object to the edge of the compensation image is greater than or equal to the number of stretched pixels.
In an optional implementation manner, the second obtaining module includes:
a fifth obtaining unit, configured to repair an image to be edge repaired based on the first image to obtain the edge prediction image;
and the second splicing unit is used for splicing the edge prediction image and the first image to obtain the compensation image.
In an optional implementation manner, the target area is a local area or a whole area of an area where the second image is located.
FIG. 10 is a block diagram illustrating an electronic device in accordance with an exemplary embodiment.
As shown in fig. 10, electronic devices include, but are not limited to: a processor 1001, a memory 1002, a network interface 1003, an I/O controller 1004, and a communication bus 1005.
It should be noted that, as those skilled in the art will appreciate, the structure of the electronic device shown in fig. 10 does not constitute a limitation of the electronic device; the electronic device may include more or fewer components than those shown in fig. 10, may combine some components, or may have a different arrangement of components.
The following specifically describes each constituent component of the electronic device 11 with reference to fig. 10:
the processor 1001 is a control center of the electronic device, connects various parts of the entire electronic device using various interfaces and lines, and performs various functions of the electronic device and processes data by running or executing software programs and/or modules stored in the memory 1002 and calling data stored in the memory 1002, thereby performing overall monitoring of the electronic device. Processor 1001 may include one or more processing units; optionally, the processor 1001 may integrate an application processor and a modem processor, wherein the application processor mainly handles operating systems, user interfaces, application programs, and the like, and the modem processor mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 1001.
The processor 1001 may be a Central Processing Unit (CPU), an Application-Specific Integrated Circuit (ASIC), or one or more integrated circuits configured to implement embodiments of the present invention.
the Memory 1002 may include Memory, such as Random-Access Memory (RAM) 1021 and Read-Only Memory (ROM) 1022, and may also include a mass storage device 1023, such as at least 1 disk storage. Of course, the memory 1002 may also include hardware needed for other services.
The memory 1002 is used for storing the executable instructions of the processor 1001. The processor 1001 is configured to execute any one of the steps in the image processing method embodiment applied to the electronic device.
The network interface 1003 is configured to connect the electronic device 11 to a network via a wired or wireless connection.
The processor 1001, the memory 1002, the network interface 1003, and the I/O controller 1004 may be connected to each other via a communication bus 1005, which may be an ISA (Industry Standard Architecture) bus, a PCI (Peripheral Component Interconnect) bus, an EISA (Extended Industry Standard Architecture) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc.
In an exemplary embodiment, the electronic device 11 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described image processing methods.
In an exemplary embodiment, there is also provided a computer storage medium, wherein instructions, when executed by a processor of an electronic device, enable the electronic device to perform the image processing method as described above.
In an exemplary embodiment, a computer program product is also provided, which is directly loadable into an internal memory of a computer, the memory being included in the electronic device and containing software codes, and the computer program being loadable and executable via the computer and being capable of implementing the image processing method described above.
Note that the features described in the embodiments in the present specification may be replaced with or combined with each other. For the device or system type embodiment, since it is basically similar to the method embodiment, the description is simple, and for the relevant points, refer to the partial description of the method embodiment.
It is further noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in Random Access Memory (RAM), flash memory, Read-Only Memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. An image processing method comprising:
acquiring the position of an object in a first image when detecting that a contraction deformation operation is performed on the object contained in the first image;
obtaining a compensation image under the condition that the position represents that the object and the edge of the first image satisfy a first distance condition; the compensation image at least comprises the first image and an edge prediction image, and the object and the edge of the compensation image satisfy a second distance condition;
performing contraction deformation on the object contained in the compensation image to obtain a second image;
and controlling an image display interface to at least display an image of a target area in the second image, wherein the target area is an area including the object.
2. The image processing method of claim 1, wherein the step of determining that the position represents that the object and the edge of the first image satisfy a first distance condition comprises:
determining the outline of a standard object corresponding to an object contained in the first image, wherein the object contained in the first image is a local area or a whole area of the standard object;
determining that the position represents that the object and the edge of the first image satisfy a first distance condition under the condition that the contour meets a preset condition; the preset condition includes: a part of the contour is located outside the first image; and/or the contour is located entirely in the first image, and the distance from a part of the contour to the boundary of the first image is less than or equal to a first threshold.
3. The image processing method according to claim 2, wherein the step of determining the contour of the standard object corresponding to the object included in the first image comprises:
and acquiring a plurality of standard key points of the standard object corresponding to the object contained in the first image to obtain the contour consisting of the plurality of standard key points.
4. The image processing method according to claim 3, wherein the step of obtaining a plurality of standard key points of a standard object corresponding to the object included in the first image comprises:
acquiring a standard object model corresponding to a target object type to which the object belongs, wherein the standard object model comprises a plurality of standard key points on the standard object belonging to the target object type, and the plurality of standard key points form the outline of the standard object;
marking a plurality of standard key points of the standard object corresponding to the object contained in the first image based on the standard object model.
5. The image processing method according to claim 3 or 4, further comprising:
determining that a part of the contour is located outside the first image in a case where at least one of the standard key points constituting the contour is located outside the first image;
determining that the contour is entirely located in the first image in a case where the plurality of standard key points constituting the contour are all located in the first image.
6. The image processing method according to any one of claims 1 to 4, wherein in a case where the position represents that the object and the edge of the first image satisfy a first distance condition, the step of obtaining a compensation image includes:
acquiring a third image, wherein the third image comprises the edge prediction image;
acquiring the edge prediction image from the third image under the condition that the position represents that the object and the edge of the first image satisfy the first distance condition;
and splicing the edge prediction image and the first image to obtain the compensation image.
7. The image processing method according to claim 6, wherein the shrinkage deformation operation corresponds to a shrinkage deformation degree parameter, and the shrinkage deformation degree parameter comprises a number of stretched pixels; the second distance condition includes that the minimum distance from the object to the edge of the compensation image is greater than or equal to the number of stretched pixels.
8. The image processing method according to any one of claims 1 to 4, wherein in a case where the position represents that the object and the edge of the first image satisfy a first distance condition, the step of obtaining a compensation image includes:
repairing an image to be repaired at the edge based on the first image to obtain the edge prediction image;
and splicing the edge prediction image and the first image to obtain the compensation image.
9. The image processing method according to any one of claims 1 to 4, wherein the target region is a partial region or a whole region of a region where the second image is located.
10. An image processing apparatus comprising:
a first acquisition module, configured to acquire a position of an object in a first image when detecting that a contraction deformation operation is performed on the object included in the first image;
the second acquisition module is used for acquiring a compensation image under the condition that the position represents that the object and the edge of the first image satisfy a first distance condition; the compensation image at least comprises the first image and an edge prediction image, and the object and the edge of the compensation image satisfy a second distance condition;
a shrinkage deformation module, configured to perform shrinkage deformation on the object included in the compensation image to obtain a second image;
and the control display module is used for controlling the image display interface to display at least the image of the target area in the second image, wherein the target area is an area including the object.
CN202110321265.1A 2021-03-25 2021-03-25 Image processing method and device Pending CN113034351A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110321265.1A CN113034351A (en) 2021-03-25 2021-03-25 Image processing method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110321265.1A CN113034351A (en) 2021-03-25 2021-03-25 Image processing method and device

Publications (1)

Publication Number Publication Date
CN113034351A true CN113034351A (en) 2021-06-25

Family

ID=76473780

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110321265.1A Pending CN113034351A (en) 2021-03-25 2021-03-25 Image processing method and device

Country Status (1)

Country Link
CN (1) CN113034351A (en)


Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105678251A (en) * 2015-12-31 2016-06-15 Tcl海外电子(惠州)有限公司 Face image processing method and device
US20180041706A1 (en) * 2016-08-04 2018-02-08 Canon Kabushiki Kaisha Image processing apparatus, optical apparatus, and image processing method
CN110288652A (en) * 2019-06-28 2019-09-27 青岛海信电器股份有限公司 Image processing method and equipment
CN110751602A (en) * 2019-09-20 2020-02-04 北京迈格威科技有限公司 Conformal distortion correction method and device based on face detection
CN111340691A (en) * 2020-03-27 2020-06-26 北京达佳互联信息技术有限公司 Image processing method, image processing device, electronic equipment and storage medium
CN111882520A (en) * 2020-06-16 2020-11-03 歌尔股份有限公司 Screen defect detection method and device and head-mounted display equipment
CN112257594A (en) * 2020-10-22 2021-01-22 广州繁星互娱信息科技有限公司 Multimedia data display method and device, computer equipment and storage medium
CN112529784A (en) * 2019-09-18 2021-03-19 华为技术有限公司 Image distortion correction method and device


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114185628A (en) * 2021-11-19 2022-03-15 北京奇艺世纪科技有限公司 Picture adjusting method, device and equipment of iOS system and computer readable medium
CN114185628B (en) * 2021-11-19 2024-04-12 北京奇艺世纪科技有限公司 Picture adjustment method, device and equipment of iOS (integrated operation system) and computer readable medium


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination