WO2019237747A1 - Image cropping method and apparatus, electronic device, and computer-readable storage medium


Info

Publication number
WO2019237747A1
Authority
WO
WIPO (PCT)
Prior art keywords
image, key point, weight value, pupil
Application number
PCT/CN2019/073073
Other languages
English (en)
French (fr)
Inventor
刘志超
赖锦锋
Original Assignee
北京微播视界科技有限公司
Application filed by 北京微播视界科技有限公司
Publication of WO2019237747A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/11 Region-based segmentation (under G06T 7/10 Segmentation; Edge detection)
    • G06T 7/60 Analysis of geometric attributes
    • G06T 7/90 Determination of colour characteristics
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10024 Color image (under G06T 2207/10 Image acquisition modality)
    • G06T 2207/30201 Face (under G06T 2207/30 Subject of image; G06T 2207/30196 Human being; Person)

Definitions

  • The present disclosure relates to the field of image processing, and in particular to an image cropping method and apparatus, an electronic device, and a computer-readable storage medium.
  • With the development of smart terminals, their range of applications has expanded considerably: users can listen to music, play games, chat online, and take photos with them.
  • Smart-terminal cameras now exceed 10 million pixels, offering resolution comparable to that of professional cameras.
  • In the form of an APP (application), the beauty function of a smart terminal usually provides skin-beautifying effects such as skin tone adjustment, skin smoothing, eye enlargement, and face slimming, and can apply the same degree of beautification to all faces recognized in the image.
  • The current beauty function generally uses the terminal device's image sensor to collect the user's face information for processing, for example fitting preset double eyelids, eyeliner, pupil effects, and the like to the corresponding positions of the face image.
  • However, some of these beauty effects end up fitted outside the real image, which degrades the result.
  • If an image cropping method were provided that lets users remove or fade out-of-range effects while beautifying, the beautification result and the user experience could be greatly improved.
  • The above scenario is described only for convenience of explanation.
  • The technical solution of the present disclosure can be used not only in the above scenario but in any scenario where an image needs to be cropped, which is not limited here.
  • An embodiment of the present disclosure provides an image cropping method for cropping the range of an image so that an effect fits more closely where it should fit.
  • The method includes: acquiring a key point in an image; setting a weight value of the key point; and cropping the image according to the weight value.
  • Cropping the image according to the weight value includes: performing color gradient processing on an area formed by key points according to the weight values of at least two key points.
  • The key points form at least one triangle.
  • The color gradient processing includes: determining the triangle in which a pixel point is located; obtaining the weight values of the triangle's three vertices; and determining the color value of the pixel point according to the three weight values.
  • The image is an image of a pupil in a human face.
  • The key points include standard key points and auxiliary key points.
  • The standard key points are pupil contour key points and a pupil center key point.
  • The auxiliary key points correspond one-to-one to the pupil contour key points and lie outside the pupil contour, in the direction pointing away from the pupil center key point.
  • Before setting the weight values of the key points, the method further includes: obtaining pupil key points in a standard template, and fitting the pupil key points in the standard template to the auxiliary key points to obtain a new pupil image.
  • Setting the weight values of the key points includes: setting the weight values of the standard pupil key points to 1 and the weight values of the auxiliary key points to 0.
  • Cropping the image according to the weight values includes: deleting the auxiliary key points from the new pupil image, and performing color gradient processing on the image area between the auxiliary key points and the standard key points.
  • The image is an image of a human eye and a pupil in a human face.
  • The key points include human eye contour key points and pupil contour key points.
  • Setting the weight values of the key points and cropping the image according to the weight values includes: setting the weight values of the standard key points inside the human eye contour to 1 and the weight values of the standard key points outside the human eye contour to 0; deleting the image area corresponding to the standard key points with a weight value of 0; and retaining the image area corresponding to the standard key points with a weight value of 1.
  • An embodiment of the present disclosure provides an image cropping device, including: an acquisition module for acquiring key points in an image; a setting module for setting weight values of the key points; and a cropping module for cropping the image according to the weight values.
  • The cropping module includes a gradient processing module configured to perform color gradient processing on an area formed by key points according to the weight values of at least two key points.
  • The key points form at least one triangle.
  • The color gradient processing module includes: a position judgment module for determining the triangle in which a pixel point is located; a weight value acquisition module for obtaining the weight values of the triangle's three vertices; and a color determining module for determining the color value of the pixel point according to the three weight values.
  • The image is an image of a pupil in a human face.
  • The key points include standard key points and auxiliary key points.
  • The standard key points are pupil contour key points and a pupil center key point.
  • The auxiliary key points correspond one-to-one to the pupil contour key points and lie outside the pupil contour, in the direction pointing away from the pupil center key point.
  • The image cropping device further includes a fitting module configured to obtain pupil key points in a standard template and fit the pupil key points in the standard template to the auxiliary key points to obtain a new pupil image.
  • The setting module is configured to set the weight values of the standard pupil key points to 1 and the weight values of the auxiliary key points to 0.
  • The cropping module is configured to delete the auxiliary key points from the new pupil image and perform color gradient processing on the image area between the auxiliary key points and the standard key points.
  • The image is an image of a human eye and a pupil in a human face.
  • The key points include human eye contour key points and pupil contour key points.
  • The setting module is configured to set the weight values of the standard key points inside the human eye contour to 1 and the weight values of the standard key points outside the human eye contour to 0; the cropping module is configured to delete the image area corresponding to the standard key points with a weight value of 0 and retain the image area corresponding to the standard key points with a weight value of 1.
  • An embodiment of the present disclosure provides an electronic device, including:
  • at least one processor; and
  • a memory communicatively connected to the at least one processor; wherein
  • the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor so that the at least one processor can execute the image cropping method of any one of the first aspect.
  • An embodiment of the present disclosure further provides a non-transitory computer-readable storage medium storing computer instructions, where the computer instructions are used to cause a computer to execute the image cropping method of any one of the foregoing first aspect.
  • Embodiments of the present disclosure provide an image cropping method, apparatus, electronic device, and computer-readable storage medium.
  • The image cropping method includes: acquiring a key point in an image; setting a weight value of the key point; and cropping the image according to the weight value.
  • FIG. 1 is a flowchart of a first embodiment of an image cropping method according to an embodiment of the present disclosure
  • FIG. 2 is a flowchart of a second embodiment of an image cropping method according to an embodiment of the present disclosure
  • FIG. 3 is a flowchart of a third embodiment of an image cropping method according to an embodiment of the present disclosure
  • FIG. 4a is a schematic structural diagram of a first embodiment of an image cropping apparatus according to an embodiment of the present disclosure;
  • FIG. 4b is a schematic structural diagram of the cropping module in the first embodiment of the image cropping apparatus according to an embodiment of the present disclosure;
  • FIG. 4c is a schematic structural diagram of the gradient processing module within the cropping module in the first embodiment of the image cropping apparatus according to an embodiment of the present disclosure;
  • FIG. 5 is a schematic structural diagram of the second and third embodiments of an image cropping apparatus according to an embodiment of the present disclosure;
  • FIG. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
  • FIG. 7 is a schematic structural diagram of a computer-readable storage medium according to an embodiment of the present disclosure.
  • FIG. 8 is a schematic structural diagram of an image cropping terminal according to an embodiment of the present disclosure.
  • FIG. 1 is a flowchart of a first embodiment of an image cropping method according to an embodiment of the present disclosure.
  • The image cropping method provided by this embodiment may be executed by an image cropping device, which may be implemented as software or as a combination of software and hardware, and may be integrated into a device in an image processing system, such as an image processing server or an image processing terminal device.
  • The core idea of this embodiment is to set weights for the key points in the image and determine the color values of the key points and the surrounding pixels according to those weights, thereby achieving the effect of image cropping.
  • the method includes the following steps:
  • The image to be processed may be a picture or a video and may include any information, such as depth information and texture.
  • The image to be processed may be obtained from a network or from a local image sensor.
  • A typical application scenario for images obtained from the network is monitoring: the terminal receiving the image can process images from network surveillance.
  • A typical application scenario for images obtained from the local image sensor is the selfie: the user takes photos or videos of himself or herself with the phone's front camera, and the mobile phone can process the photos or videos taken.
  • The key points of an image are points with distinctive characteristics that effectively reflect the essential features of the image and identify the target object in it. If the target object is a human face, the key points of the face need to be obtained; if the target is a house, the key points of the house need to be obtained. The human face is taken as an example to illustrate key point acquisition.
  • The face contour mainly comprises five parts: eyebrows, eyes, nose, mouth, and cheeks, and sometimes also the pupils and nostrils. A complete description of the face contour generally requires about 60 key points.
  • Extracting face key points from an image amounts to finding the position coordinates of each face contour key point in the face image, i.e., key point positioning. This is performed based on the image features corresponding to each key point: once those features are clearly identified, the image is searched and compared against them to accurately locate the positions of the key points.
  • Because key points occupy only a very small area of the image (usually just a few to tens of pixels), the regions occupied by their corresponding features are likewise very limited and local.
  • Two feature extraction approaches are currently used: (1) one-dimensional range image feature extraction along a direction perpendicular to the contour; (2) two-dimensional range image feature extraction over a square neighborhood of the key point.
  • There are many implementations, such as the ASM and AAM methods, statistical energy function methods, regression analysis methods, deep learning methods, classifier methods, and batch extraction methods.
  • These implementations differ in the number of key points used, accuracy, and speed, making them suitable for different application scenarios.
  • The number of key points may be predetermined; for example, 106 key points may be preset. Such key points are called standard key points.
  • Key points may also be determined dynamically: in addition to the preset key points, several key points may be selected by interpolation or manually. Such key points are called auxiliary key points.
  • One embodiment of dynamically determining a key point is: select a reference point, connect the reference point and the key point, and extend the line segment from the reference point past the key point by a predetermined proportion, for example 10% of the segment's length; the endpoint of the extension other than the key point is the auxiliary key point.
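As a sketch of the extension rule just described (the function and parameter names are our own, not from the patent), the auxiliary key point can be computed by extending the reference-to-key-point segment by a fixed proportion:

```python
import numpy as np

def auxiliary_keypoint(reference, keypoint, ratio=0.10):
    """Extend the segment from `reference` through `keypoint` by `ratio`
    of its length; the far endpoint of the extension is the auxiliary
    key point (here, 10% beyond the standard key point)."""
    reference = np.asarray(reference, dtype=float)
    keypoint = np.asarray(keypoint, dtype=float)
    return keypoint + ratio * (keypoint - reference)
```

For a pupil, the reference point would be the pupil center, so each auxiliary key point sits slightly outside the pupil contour.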
  • There are many methods for determining auxiliary key points, which are not enumerated here; any strategy that derives key points appropriately from the current standard key points can be applied in the embodiments of the present disclosure, and the resulting key point positions will differ accordingly.
  • The user can see the positions of the key points through the display device and set their weight values through the human-machine interface.
  • The system crops the image by reading the key point weight values; the cropping includes image deletion, color change, color gradient processing, and so on. The effect after cropping can be previewed in real time through the display device, which makes it convenient for the user to adjust the weight values.
  • The weight value may indicate the degree to which the key point needs to be faded.
  • The weight value lies between 0 and 1: a value of 1 means the key point is unchanged, while a value of 0 means the key point is fully faded (for example, rendered transparent).
  • By default all weight values are 1, and multiple preset sets of weight values can be provided, each representing a predetermined cropping effect. For example, setting the weight values of the eyebrow contour key points to 1 and all other key point weight values to 0 means that only the eyebrows are retained and the other parts are treated as transparent.
  • The image is cropped according to the weight values set by the user; the cropping may include any form of image processing that uses the weight values.
  • For example, color gradient processing is performed on an area formed by key points according to the weight values of at least two key points.
  • In one embodiment, the cropping includes: deleting key points with a weight value of 0, retaining key points with a weight value of 1, and performing color gradient processing on the image area between them. In this way, the user can easily delete unwanted image parts without processing each pixel individually.
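As a hedged sketch (not the patent's code), this three-way treatment — keep weight-1 regions, delete weight-0 regions, fade the area in between — can be realized by scaling each pixel's alpha channel by a per-pixel coefficient:

```python
import numpy as np

def apply_crop_mask(rgba, coeff):
    """Scale the alpha channel of an (H, W, 4) uint8 image by a per-pixel
    coefficient in [0, 1]: 1 keeps the pixel, 0 deletes it (transparent),
    and intermediate values produce the gradient between the two regions."""
    out = rgba.astype(np.float64)
    out[..., 3] *= np.clip(coeff, 0.0, 1.0)
    return out.round().astype(np.uint8)
```

How the coefficient array is filled (ring gradient, triangle interpolation, etc.) is described in the passages that follow.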
  • One implementation of the above gradient effect is: determine the distance between a pixel point and a key point, calculate a gradient coefficient from that distance, and multiply the gradient coefficient by the pixel's color value to obtain the faded color value.
  • In one example, two sets of key points form an annular (ring-shaped) area: the key points on the inner ring all have weight 1, the key points on the outer ring all have weight 0, and the ring area is processed with a color gradient. Let A be a key point on the inner ring, B a key point on the outer ring, and P a pixel located on segment AB. The gradient coefficient is calculated as λ = PB / AB, where λ is the gradient coefficient, PB is the length of line segment PB, and AB is the length of line segment AB.
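A minimal sketch of this coefficient (the names are ours): for a pixel P on the segment from inner-ring point A (weight 1) to outer-ring point B (weight 0), λ = PB/AB is 1 at A and 0 at B, fading linearly in between:

```python
import math

def gradient_coefficient(a, b, p):
    """lambda = |PB| / |AB|: 1 when P is at the inner-ring point A,
    0 when P is at the outer-ring point B, linear in between."""
    return math.dist(p, b) / math.dist(a, b)

def faded_color(color, a, b, p):
    """Fade a pixel's color by multiplying it by the gradient coefficient."""
    lam = gradient_coefficient(a, b, p)
    return tuple(c * lam for c in color)
```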
  • In another embodiment, the image is segmented using triangulation, since the distribution of key points is irregular.
  • The color value of a pixel can then be determined as follows: determine the triangle in which the pixel is located, which can be done from the coordinates of the key points and the pixel; obtain the weight values of the triangle's three vertices; and determine the pixel's color value from the three weight values.
  • One embodiment of determining the color value of a pixel from the three weight values is as follows. Let the three weight values be a, b, and c, corresponding to vertices A, B, and C respectively; let the pixel be P, and let λ be the coefficient applied to P's color value. AP, BP, and CP denote the distances from P to the three vertices A, B, and C.
  • The formula is in effect a normalization over AP, BP, and CP: the proportions of P's distances to the three vertices A, B, and C are taken as the coefficients on the respective vertex components.
  • When a, b, and c are all 1, λ is 1 and the pixel keeps its original color value; when a, b, and c are all 0, the pixels in the triangle are set to transparent.
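The patent leaves the exact normalization underspecified; one standard way to blend three vertex weights inside a triangle is barycentric interpolation, sketched here under that assumption (all names are ours):

```python
def barycentric(p, a, b, c):
    """Barycentric coordinates of point p in triangle (a, b, c)."""
    (x, y), (x1, y1), (x2, y2), (x3, y3) = p, a, b, c
    det = (y2 - y3) * (x1 - x3) + (x3 - x2) * (y1 - y3)
    u = ((y2 - y3) * (x - x3) + (x3 - x2) * (y - y3)) / det
    v = ((y3 - y1) * (x - x3) + (x1 - x3) * (y - y3)) / det
    return u, v, 1.0 - u - v

def pixel_in_triangle(p, a, b, c):
    """The pixel lies inside the triangle iff all barycentric coords >= 0."""
    return all(w >= 0 for w in barycentric(p, a, b, c))

def pixel_coefficient(p, a, b, c, wa, wb, wc):
    """Blend the three vertex weight values at pixel p. Multiplying the
    pixel's original color by this coefficient fades it: all vertex
    weights 1 -> coefficient 1 (unchanged); all 0 -> 0 (transparent)."""
    u, v, w = barycentric(p, a, b, c)
    return u * wa + v * wb + w * wc
```

The same barycentric test also answers the earlier step of determining which triangle a pixel lies in.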
  • In one embodiment, a display interface is also provided, which can display all the key points.
  • The technical solution of this embodiment receives a setting command input by the user through a control; the setting command includes the ID of a key point and the key point's weight value.
  • The controls include a first control and a second control. The first control is used to select a key point: it may be a drop-down menu showing the IDs of all key points, or it may be placed on the key point itself so that the user selects the key point by clicking it directly. The second control is used to set the key point's weight value and is displayed only after the first control is triggered: it may be an input field into which the user types the weight value directly, or a sliding control such as a slider that the user drags to adjust the weight.
  • A user may also select multiple key points at once and set their weight values together to simplify operation.
  • FIG. 2 is a flowchart of a second embodiment of an image cropping method according to an embodiment of the present disclosure. As shown in FIG. 2, the method may include the following steps:
  • S201: Acquire the first key points of a pupil in a face image, where the first key points include standard key points and auxiliary key points.
  • The application scenario of this embodiment is applying a cosmetic-pupil effect to a user's pupil.
  • Some auxiliary key points are interpolated around the standard key points, with the auxiliary key points corresponding one-to-one to the standard key points.
  • One implementation of the interpolation is: locate the center point of the pupil, and extend outward along the line from the center point through each standard key point by a predetermined amount; the resulting points are the auxiliary key points.
  • The key points on the template are made to correspond to the auxiliary key points, so that the template has an enlarged effect and can cover the entire pupil image.
  • The method for obtaining the standard key points of the pupil in the face image is the same as that described in the first embodiment and is not repeated here. It is understood that multiple key point positioning methods can be preset, and the user can select the one that fits the current scene.
  • The key points in the standard template are preset and do not need to be identified; the standard template in this embodiment is a pupil template with special effects.
  • The second key points correspond to the auxiliary key points identified in S201, so that the template's effect covers the pupil and a new pupil image is obtained. It should be noted that the fitting only maps the pupil color in the template onto the real pupil image according to this correspondence; the standard key points and auxiliary key points located in step S201 are not deleted.
  • The user can set the weight values of the standard key points to 1 and those of the auxiliary key points to 0. Alternatively, the user need not set the weight values: the system automatically sets the auxiliary key point weights to 0 and displays a preview to the user, who can then adjust the weight values according to the effect.
  • The cropping operation includes: deleting the auxiliary key points, and performing color gradient processing on the area from the auxiliary key points to the standard key points, such that the closer a pixel is to a key point with weight value 1, the closer its color is to the original color value. In this embodiment the degree of gradation can also be controlled; for example, the farther from a weight-1 key point, the faster the color fades, which makes the cropping boundary follow the boundary of the real pupil more closely.
  • FIG. 3 is a flowchart of a third embodiment of an image cropping method according to an embodiment of the present disclosure. As shown in FIG. 3, the method may include the following steps:
  • S305: Delete the key points with a weight value of 0, and delete the pixel color values of the image area between the key points with a weight value of 1 and those with a weight value of 0.
  • In this embodiment, the weight values of the key points are used to delete the part of the pupil image covered by the eyelid.
  • The method for obtaining the key points of the human eye and of the pupil is the same as in the first embodiment and is not described again. Note that this embodiment does not specifically describe other processing of the pupil and eye images; the pupil-cropping method of the second embodiment can be combined with this embodiment to achieve a better processing effect.
  • The human eye key points are the eye contour key points, and the pupil key points are the pupil contour key points and the pupil center point. When the pupil in the standard template is fitted to the pupil in the face image, the pupil may extend beyond the contour of the eye: for example, some people's pupils sit high or low, or the eye contour is not large enough, which affects the result after fitting.
  • The weight values of key points outside the human eye are set to 0, and those of key points inside the human eye are set to 1. Whether a key point is outside or inside the eye can be determined from a coordinate range calculated from the human eye key points.
  • Key points located outside the eye, together with the corresponding image area, are deleted or made transparent, while key points inside the eye retain the fitted color. This is equivalent to placing the pupil inside the eye and cropping out the areas outside it, which is closer to the appearance of real human eyes and pupils.
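The patent only says a coordinate range is calculated from the eye key points; one common concrete choice is the ray-casting point-in-polygon test on the ordered eye-contour key points (a sketch under that assumption, not the patent's stated method):

```python
def inside_eye_contour(point, contour):
    """Ray-casting test: True if `point` lies inside the polygon whose
    vertices are the ordered eye-contour key points."""
    x, y = point
    inside = False
    n = len(contour)
    for i in range(n):
        x1, y1 = contour[i]
        x2, y2 = contour[(i + 1) % n]
        # Count crossings of the horizontal ray going right from (x, y).
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside
```

Key points (and pixels) failing this test would get weight 0 and be deleted or made transparent; those inside get weight 1.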
  • In essence, this embodiment uses the key point weight values to create a blocking layer (mask) for the pupil: part of the layer is transparent and part is opaque, and overlaying it on the pupil image produces an effect close to the natural state of the human eye.
  • FIG. 4a is a schematic structural diagram of a first embodiment of an image cropping apparatus according to an embodiment of the present disclosure. As shown in FIG. 4a, the apparatus includes an acquisition module 41, a setting module 42, and a cropping module 43.
  • a setting module 42 configured to set a weight value of the key point
  • a cropping module 43 is configured to crop the image according to the weight value.
  • the cropping module 43 includes: a gradient processing module 431, configured to perform color gradient processing on an area composed of key points according to weight values of at least two key points.
  • the key points form at least one triangle;
  • the color gradient processing module 431 includes:
  • a position determining module 4311 configured to determine a triangle in which a pixel point is located
  • a weight value obtaining module 4312 configured to obtain weight values of three vertices of the triangle
  • a color determining module 4313 is configured to determine a color value of the pixel point according to the three weight values.
  • the apparatus shown in FIG. 4 can execute the method of the embodiment shown in FIG. 1.
  • FIG. 5 is a schematic structural diagram of a second embodiment of an image cropping apparatus according to an embodiment of the present disclosure. As shown in FIG. 5, on the basis of the first embodiment, the apparatus further includes a fitting module 51.
  • The fitting module 51 fits the second key points of the pupil in the standard template to the auxiliary key points to obtain a new pupil image.
  • the obtaining module 41 is configured to obtain a first key point of a pupil in a face image and a second key point of a pupil in a standard template, where the first key point includes a standard key point and an auxiliary key point;
  • the setting module 42 is configured to set the weight value of the standard key point to 1 and set the weight value of the auxiliary key point to 0.
  • the cropping module 43 is configured to delete the auxiliary key point and perform color gradation processing on an image area between the auxiliary key point and the standard key point.
  • the apparatus shown in FIG. 5 can execute the method of the embodiment shown in FIG. 2.
  • FIG. 5 also illustrates a third embodiment of the image cropping device according to an embodiment of the present disclosure. In the third embodiment, the modules perform the following functions:
  • the obtaining module 41 is configured to obtain a key point of a human eye and a key point of a pupil in a face image and a third key point of a pupil in a standard template;
  • the setting module 42 is configured to set a weight value of a standard key point inside the contour of the human eye to 1 and set a weight value of a standard key point outside the contour of the human eye to 0;
  • the cropping module 43 is configured to delete an image area corresponding to a standard key point with a weight value of 0, and retain an image area corresponding to a standard key point with a weight value of 1.
  • the apparatus shown in FIG. 5 can execute the method of the embodiment shown in FIG. 3.
  • FIG. 6 is a hardware block diagram illustrating an electronic device according to an embodiment of the present disclosure. As shown in FIG. 6, the electronic device 60 according to an embodiment of the present disclosure includes a memory 61 and a processor 62.
  • the memory 61 is configured to store non-transitory computer-readable instructions.
  • the memory 61 may include one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and / or non-volatile memory.
  • the volatile memory may include, for example, a random access memory (RAM) and / or a cache memory.
  • the non-volatile memory may include, for example, a read-only memory (ROM), a hard disk, a flash memory, and the like.
  • the processor 62 may be a central processing unit (CPU) or other form of processing unit having data processing capabilities and / or instruction execution capabilities, and may control other components in the electronic device 60 to perform desired functions.
  • The processor 62 is configured to run the computer-readable instructions stored in the memory 61, so that the electronic device 60 executes all or part of the steps of the image cropping method of the foregoing embodiments of the present disclosure.
  • This embodiment may also include well-known structures such as a communication bus and interfaces; these well-known structures should also be included within the protection scope of the present disclosure.
  • FIG. 7 is a schematic diagram illustrating a computer-readable storage medium according to an embodiment of the present disclosure.
  • a computer-readable storage medium 70 according to an embodiment of the present disclosure has non-transitory computer-readable instructions 71 stored thereon.
  • When the non-transitory computer-readable instructions 71 are executed by a processor, all or part of the steps of the image cropping method of the foregoing embodiments of the present disclosure are performed.
  • The computer-readable storage medium 70 includes, but is not limited to: optical storage media (e.g., CD-ROM and DVD), magneto-optical storage media (e.g., MO), magnetic storage media (e.g., magnetic tape or removable hard disk), rewritable non-volatile memory media (e.g., memory card), and media with built-in ROM (e.g., ROM cartridge).
  • FIG. 8 is a schematic diagram illustrating a hardware structure of a terminal device according to an embodiment of the present disclosure. As shown in FIG. 8, the image processing terminal 800 includes the image cropping apparatus of the foregoing apparatus embodiments.
  • the terminal device may be implemented in various forms; terminal devices in the present disclosure may include, but are not limited to, mobile terminal devices such as mobile phones, smart phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), navigation devices, vehicle-mounted terminal devices, vehicle-mounted display terminals, and vehicle-mounted electronic rear-view mirrors, as well as fixed terminal devices such as digital TVs and desktop computers.
  • the terminal may further include other components.
  • the image cropping terminal 800 may include a power supply unit 801, a wireless communication unit 802, an A/V (audio/video) input unit 803, a user input unit 804, a sensing unit 805, an interface unit 806, a controller 807, an output unit 808, a storage unit 809, and so on.
  • FIG. 8 shows a terminal having various components, but it should be understood that it is not required to implement all the illustrated components, and more or fewer components may be implemented instead.
  • the wireless communication unit 802 allows radio communication between the terminal 800 and a wireless communication system or network.
  • the A / V input unit 803 is used to receive audio or video signals.
  • the user input unit 804 may generate key input data according to a command input by a user to control various operations of the terminal device.
  • the sensing unit 805 detects the current state of the terminal 800, the position of the terminal 800, the presence or absence of a user's touch input to the terminal 800, the orientation of the terminal 800, and the acceleration or deceleration and direction of the terminal 800, and generates commands or signals for controlling the operation of the terminal 800.
  • the interface unit 806 serves as an interface through which at least one external device can connect with the terminal 800.
  • the output unit 808 is configured to provide an output signal in a visual, audio, and / or tactile manner.
  • the storage unit 809 may store software programs and the like for processing and control operations performed by the controller 807, or may temporarily store data that has been output or is to be output.
  • the storage unit 809 may include at least one type of storage medium.
  • the terminal 800 can cooperate with a network storage device that performs a storage function of the storage unit 809 through a network connection.
  • the controller 807 generally controls the overall operation of the terminal device.
  • the controller 807 may include a multimedia module for reproducing or playing back multimedia data.
  • the controller 807 may perform a pattern recognition process to recognize a handwriting input or a picture drawing input performed on the touch screen as characters or images.
  • the power supply unit 801 receives external power or internal power under the control of the controller 807 and provides appropriate power required to operate each element and component.
  • Various embodiments of the image cropping method proposed by the present disclosure may be implemented using a computer-readable medium such as computer software, hardware, or any combination thereof.
  • for hardware implementation, various embodiments of the image cropping method proposed in the present disclosure can be implemented using at least one of an application-specific integrated circuit (ASIC), a digital signal processor (DSP), a digital signal processing device (DSPD), a programmable logic device (PLD), a field-programmable gate array (FPGA), a processor, a controller, a microcontroller, a microprocessor, or an electronic unit designed to perform the functions described herein; in some cases, these embodiments may be implemented in the controller 807.
  • various embodiments of the image cropping method proposed by the present disclosure may be implemented with a separate software module that allows at least one function or operation to be performed.
  • the software codes may be implemented by a software application (or program) written in any suitable programming language, and the software codes may be stored in the storage unit 809 and executed by the controller 807.
  • an "or" used in an enumeration of items beginning with "at least one of" indicates a disjunctive enumeration, so that, for example, an enumeration of "at least one of A, B, or C" means A or B or C, or AB or AC or BC, or ABC (i.e., A and B and C).
  • the word "exemplary” does not mean that the described example is preferred or better than other examples.
  • each component or each step can be disassembled and / or recombined.
  • These decompositions and / or recombinations should be considered as equivalent solutions of the present disclosure.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Image Processing (AREA)

Abstract

An image cropping method and apparatus, an electronic device, and a computer-readable storage medium. The image cropping method includes: acquiring key points in an image (S101); setting weight values for the key points (S102); and cropping the image according to the weight values (S103). The method solves the problem of cropping the part of a fitted effect that falls outside the fitting range: a user can crop the image by setting the weight values of key points, without operating on every pixel, which improves the efficiency of image processing.

Description

Image cropping method, apparatus, electronic device, and computer-readable storage medium
Cross-Reference
The present disclosure claims priority to the Chinese patent application No. 201810616123.6, titled "Image cropping method, apparatus, electronic device and computer-readable storage medium" and filed on June 14, 2018, which is incorporated herein by reference in its entirety.
Technical Field
The present disclosure relates to the field of image processing, and in particular to an image cropping method, an image cropping apparatus, an electronic device, and a computer-readable storage medium.
Background
With the development of computer technology, the range of applications of smart terminals has expanded greatly; for example, they can be used to listen to music, play games, chat online, and take photos. The cameras of smart terminals have reached more than ten million pixels, with high definition and photo quality comparable to professional cameras.
At present, when taking photos with a smart terminal, not only can the built-in camera software provide conventional photo effects, but applications (APPs) downloaded from the network can also provide photo effects with additional functions, such as APPs implementing dark-light detection, beauty cameras, and super pixels. The beautification function of a smart terminal usually includes effects such as skin-tone adjustment, skin smoothing, eye enlargement, and face slimming, and can apply the same degree of beautification to all faces recognized in the image.
Summary
However, current beautification functions generally use the image sensor of the terminal device to collect the user's face information for processing, for example by fitting preset double eyelids, eyeliner, cosmetic contact lenses, and the like onto the corresponding positions of the face image. Because the standard image and the actual image deviate during fitting, part of the beautification effect is fitted outside the real image, degrading the beautification result.
Therefore, an image cropping method that allows the user to delete or fade the out-of-range part of an effect while applying beautification would greatly improve both the beautification result and the user experience. The above scenario is given merely for convenience of description; the technical solution of the present disclosure is not limited to it and can be used in any scenario that requires cropping an image, without limitation here.
In view of this, embodiments of the present disclosure provide an image cropping method for trimming the extent of an image, so that an effect better fits the position it should be fitted to.
In a first aspect, an embodiment of the present disclosure provides an image cropping method, including: acquiring key points in an image; setting weight values for the key points; and cropping the image according to the weight values.
Optionally, cropping the image according to the weight values includes: performing color-gradient processing on a region formed by key points according to the weight values of at least two key points.
Optionally, the key points form at least one triangle, and the color-gradient processing includes: determining the triangle in which a pixel lies; obtaining the weight values of the three vertices of the triangle; and determining the color value of the pixel according to these three weight values.
Optionally, the image is an image of a pupil in a human face; the key points include standard key points and auxiliary key points; the standard key points are pupil-contour key points and a pupil-center key point; and the auxiliary key points correspond to the pupil-contour key points and are located outside the pupil-contour key points, in the direction away from the pupil-center key point.
Optionally, before setting the weight values of the key points, the method further includes: acquiring pupil key points in a standard template, and fitting the pupil key points in the standard template to the auxiliary key points to obtain a new pupil image.
Optionally, setting the weight values of the key points includes: setting the weight values of the standard key points of the pupil to 1, and setting the weight values of the auxiliary key points to 0.
Optionally, cropping the image according to the weight values includes: for the new pupil image, deleting the auxiliary key points, and performing color-gradient processing on the image region between the auxiliary key points and the standard key points.
Optionally, the image is an image of a human eye and a pupil in a human face, and the key points include eye-contour key points and pupil-contour key points.
Optionally, setting the weight values of the key points and cropping the image according to the weight values includes: setting the weight values of the standard key points inside the eye contour to 1, and setting the weight values of the standard key points outside the eye contour to 0; and deleting the image regions corresponding to the standard key points whose weight value is 0 while retaining the image regions corresponding to the standard key points whose weight value is 1.
In a second aspect, an embodiment of the present disclosure provides an image cropping apparatus, including: an acquisition module configured to acquire key points in an image; a setting module configured to set weight values for the key points; and a cropping module configured to crop the image according to the weight values.
Optionally, the cropping module includes: a gradient processing module configured to perform color-gradient processing on a region formed by key points according to the weight values of at least two key points.
Optionally, the key points form at least one triangle, and the color-gradient processing module includes: a position determination module configured to determine the triangle in which a pixel lies; a weight-value acquisition module configured to obtain the weight values of the three vertices of that triangle; and a color determination module configured to determine the color value of the pixel according to the three weight values.
Optionally, the image is an image of a pupil in a human face; the key points include standard key points and auxiliary key points; the standard key points are pupil-contour key points and a pupil-center key point; and the auxiliary key points correspond to the pupil-contour key points and are located outside the pupil-contour key points, in the direction away from the pupil-center key point.
Optionally, the image cropping apparatus further includes: a fitting module configured to acquire pupil key points in a standard template and fit the pupil key points in the standard template to the auxiliary key points, to obtain a new pupil image.
Optionally, the setting module is configured to set the weight values of the standard key points of the pupil to 1 and set the weight values of the auxiliary key points to 0.
Optionally, the cropping module is configured to, for the new pupil image, delete the auxiliary key points and perform color-gradient processing on the image region between the auxiliary key points and the standard key points.
Optionally, the image is an image of a human eye and a pupil in a human face, and the key points include eye-contour key points and pupil-contour key points.
Optionally, the setting module is configured to set the weight values of the standard key points inside the eye contour to 1 and set the weight values of the standard key points outside the eye contour to 0; and the cropping module is configured to delete the image regions corresponding to the standard key points whose weight value is 0 and retain the image regions corresponding to the standard key points whose weight value is 1.
In a third aspect, an embodiment of the present disclosure provides an electronic device, including:
at least one processor; and
a memory communicatively connected to the at least one processor; wherein
the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to perform any of the image cropping methods of the foregoing first aspect.
In a fourth aspect, an embodiment of the present disclosure provides a non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform any of the image cropping methods of the foregoing first aspect.
Embodiments of the present disclosure provide an image cropping method and apparatus, an electronic device, and a computer-readable storage medium. The image cropping method includes: acquiring key points in an image; setting weight values for the key points; and cropping the image according to the weight values. By adopting this technical solution, the embodiments of the present disclosure solve the prior-art problem of cropping the part of a fitted effect that falls outside the fitting range: a user can crop the image by setting key-point weight values, without operating on every pixel, which improves the efficiency of image processing.
The above description is only an overview of the technical solutions of the present disclosure. To make the technical means of the present disclosure clearer so that they can be implemented according to the contents of the specification, and to make the above and other objects, features, and advantages of the present disclosure more apparent, preferred embodiments are given below and described in detail with reference to the accompanying drawings.
Brief Description of the Drawings
To describe the technical solutions of the embodiments of the present disclosure or the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present disclosure, and those of ordinary skill in the art can obtain other drawings from them without creative effort.
FIG. 1 is a flowchart of Embodiment 1 of the image cropping method provided by an embodiment of the present disclosure;
FIG. 2 is a flowchart of Embodiment 2 of the image cropping method provided by an embodiment of the present disclosure;
FIG. 3 is a flowchart of Embodiment 3 of the image cropping method provided by an embodiment of the present disclosure;
FIG. 4a is a schematic structural diagram of Embodiment 1 of the image cropping apparatus provided by an embodiment of the present disclosure;
FIG. 4b is a schematic structural diagram of the cropping module in Embodiment 1 of the image cropping apparatus provided by an embodiment of the present disclosure;
FIG. 4c is a schematic structural diagram of the gradient processing module of the cropping module in Embodiment 1 of the image cropping apparatus provided by an embodiment of the present disclosure;
FIG. 5 is a schematic structural diagram of Embodiments 2 and 3 of the image cropping apparatus provided by an embodiment of the present disclosure;
FIG. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure;
FIG. 7 is a schematic structural diagram of a computer-readable storage medium according to an embodiment of the present disclosure;
FIG. 8 is a schematic structural diagram of an image cropping terminal according to an embodiment of the present disclosure.
Detailed Description
The implementations of the present disclosure are described below through specific examples, and those skilled in the art can easily understand other advantages and effects of the present disclosure from the contents disclosed in this specification. Obviously, the described embodiments are only some, not all, of the embodiments of the present disclosure. The present disclosure can also be implemented or applied through other different specific implementations, and the details in this specification can be modified or changed in various ways based on different viewpoints and applications without departing from the spirit of the present disclosure. It should be noted that, where they do not conflict, the following embodiments and the features in them can be combined with each other. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present disclosure without creative effort fall within the protection scope of the present disclosure.
It should be noted that various aspects of embodiments within the scope of the appended claims are described below. It should be apparent that the aspects described herein can be embodied in a wide variety of forms, and any specific structure and/or function described herein is merely illustrative. Based on the present disclosure, those skilled in the art should understand that an aspect described herein can be implemented independently of any other aspect, and that two or more of these aspects can be combined in various ways. For example, an apparatus can be implemented and/or a method can be practiced using any number of the aspects set forth herein. In addition, such an apparatus can be implemented and/or such a method can be practiced using structures and/or functionality other than one or more of the aspects set forth herein.
It should also be noted that the illustrations provided in the following embodiments merely explain the basic idea of the present disclosure in a schematic way. The drawings show only the components related to the present disclosure and are not drawn according to the number, shape, and size of the components in actual implementation; in actual implementation, the form, quantity, and proportion of each component can be changed arbitrarily, and the component layout may also be more complex.
In addition, in the following description, specific details are provided to facilitate a thorough understanding of the examples. However, those skilled in the art will understand that the described aspects can be practiced without these specific details.
FIG. 1 is a flowchart of Embodiment 1 of the image cropping method provided by an embodiment of the present disclosure. The image cropping method provided in this embodiment may be executed by an image cropping apparatus, which may be implemented as software or as a combination of software and hardware, and may be integrated into a device in an image processing system, such as an image processing server or an image processing terminal device.
The core idea of this embodiment is to set weight values for the key points in an image and, according to the key-point weight values, determine the gray-scale values of the key points and their surrounding pixels, thereby achieving the image cropping effect. As shown in FIG. 1, the method includes the following steps:
S101: acquire key points in the image.
To enable processing of different images, the image to be processed may be a picture or a video and may contain any information, such as depth information and texture. The image to be processed may be obtained from a network or from a local image sensor. A typical application scenario of obtaining it from a network is surveillance, in which the terminal receiving the images processes images from network monitoring; a typical application scenario of obtaining it from a local image sensor is taking a selfie, in which the user photographs himself or herself, or records a video, with the front camera of a mobile phone, and the phone processes the captured photo or video.
Key points in the image are then acquired. Key points of an image are points with distinctive characteristics that effectively reflect the essential features of the image and can identify the target object in it. If the target object is a human face, face key points need to be acquired; if the target image is a house, key points of the house need to be acquired. Taking a human face as an example: the facial contour mainly includes five parts, namely the eyebrows, eyes, nose, mouth, and cheeks, and sometimes also the pupils and nostrils. Generally, a fairly complete description of the facial contour requires about 60 key points. If only the basic structure needs to be described, without detailing each part or describing the cheeks, the number of key points can be reduced accordingly; if the pupils and nostrils need to be described, or more detailed facial features are required, the number of key points can be increased. Extracting face key points on an image amounts to finding the corresponding position coordinates of each facial-contour key point in the face image, that is, key-point localization. This process is based on the features corresponding to the key points: after image features that clearly identify the key points are obtained, the image is searched and compared according to these features, to accurately locate the positions of the key points on the image. Since key points occupy only a very small area in the image (usually only a few to a few dozen pixels), the regions occupied by the corresponding features are usually also very limited and local. Two feature-extraction approaches are currently used: (1) one-dimensional range image feature extraction perpendicular to the contour; and (2) two-dimensional range image feature extraction over a square neighborhood of the key point. The above two approaches have many implementation methods, such as ASM and AAM methods, statistical energy function methods, regression analysis methods, deep learning methods, classifier methods, and batch extraction methods. These implementations differ in the number of key points used, accuracy, and speed, and are suitable for different application scenarios.
In one embodiment, the number of key points may be predetermined; for example, 106 key points are preset in face detection, and such key points are called standard key points. Key points may also be dynamic; for example, several additional key points may be selected, by interpolation or manually, beyond the preset ones, and such key points are called auxiliary key points. One implementation of dynamically determining key points is: select a reference point, connect the reference point and a key point, and extend the segment along the direction from the reference point to the key point, where the extension is by a predetermined ratio, for example 10% of the segment's length; the endpoint of the extension other than the key point is the auxiliary key point. There are many ways to determine auxiliary key points, which are not detailed here. Any method that can determine key points from the current standard key points through an appropriate strategy can be applied in the embodiments of the present disclosure, and different strategies yield different auxiliary key-point positions.
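As an illustration of the extension rule just described (extend the segment from a reference point through a key point by a predetermined ratio, e.g. 10%), the auxiliary key point can be computed as follows. This is a minimal sketch; the function name and the (x, y) coordinate convention are illustrative assumptions, not part of the disclosure:

```python
def auxiliary_point(ref, kp, ratio=0.10):
    """Extend the segment ref -> kp past kp by `ratio` of its length.

    ref and kp are (x, y) tuples; the returned point is the auxiliary
    key point. The 10% default mirrors the example ratio in the text.
    """
    dx, dy = kp[0] - ref[0], kp[1] - ref[1]
    return (kp[0] + dx * ratio, kp[1] + dy * ratio)

# e.g. the pupil centre as reference point and one contour key point:
center = (100.0, 100.0)
contour_kp = (110.0, 100.0)
aux = auxiliary_point(center, contour_kp)  # -> (111.0, 100.0)
```

As the text notes, a different strategy (another ratio, or another reference point) yields a different auxiliary key-point position.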
S102: set weight values for the key points.
After the key points are determined, the user can see their positions through a display device and set their weight values. The system reads the key-point weight values and crops the image accordingly; the cropping includes deletion of image content, color changes, color gradients, and so on, and the cropped result can be previewed in real time through the display device, making it convenient for the user to modify the weight values. The user can see the positions of the key points on the human-machine interface and set their weight values. A weight value represents the degree to which the key point should be faded and ranges from 0 to 1: if the value is 1, the key point is unchanged; if it is 0, the key point becomes transparent, and the pixel values of the pixels between this key point and the adjacent key points are computed. If one key point has a weight value of 0 and all of its adjacent key points have a weight value of 1, the image color between this key point and its adjacent key points shows a gradient effect. Weight values default to 1. Multiple preset groups of weight values can be provided, each representing a predetermined cropping effect; for example, a group in which the key points of the eyebrow contour all have weight value 1 and all other key points have weight value 0 means keeping only the eyebrows and rendering everything else transparent. Through such predetermined groups of weight values, the user can conveniently select a cropping effect to process the image. In addition, the user can also customize the weight values of all key points, either completely from scratch or by modifying a predetermined group of weight values.
S103: crop the image according to the weight values.
The image is cropped according to the weight values set by the user; the cropping may include any form of image processing that uses the weight values. In one embodiment, color-gradient processing is performed on a region formed by key points according to the weight values of at least two key points. In a specific implementation, the cropping includes the following processing: key points with a weight value of 0 are deleted, key points with a weight value of 1 are retained, and the image region between a key point with weight value 0 and a key point with weight value 1 undergoes color-gradient processing. In this way the user can conveniently delete unwanted parts of the image without processing every pixel.
One implementation of the above gradient effect is: determine the distance between a pixel and the key points, compute a gradient coefficient from that distance, and multiply the pixel's color value by the coefficient to obtain the faded color value. For example, suppose two groups of key points form an annular region, the key points on the inner ring all have weight value 1, and the key points on the outer ring all have weight value 0; the annular region between the inner and outer rings then needs color-gradient processing. Let A be a key point on the inner ring, B a key point on the outer ring, and P a pixel on segment AB. The gradient coefficient is computed by the following formula:
α = PB / AB
where α is the gradient coefficient, PB denotes the length of segment PB, and AB denotes the length of segment AB.
Multiplying the pixel's color value by α gives the faded color value. In particular, in this example, since both the inner and outer rings are circles, the inner-ring key points all have weight value 1, and the outer-ring key points all have weight value 0, the gradient coefficient is the same for all pixels on the same radius; determining the shape to be faded in advance can therefore reduce part of the computation.
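A minimal sketch of this fade, assuming (x, y) pixel coordinates and an RGB tuple for the color value (both illustrative choices, not mandated by the disclosure):

```python
import math

def dist(p, q):
    """Euclidean distance between two (x, y) points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def faded_color(p, a, b, color):
    """Fade `color` for a pixel p on segment AB, where key point A has
    weight value 1 (keep) and key point B has weight value 0 (transparent).

    The gradient coefficient is alpha = PB / AB, so the color is fully
    kept at A (alpha = 1) and fully faded at B (alpha = 0).
    """
    alpha = dist(p, b) / dist(a, b)
    return tuple(c * alpha for c in color)

# a pixel at the midpoint of AB keeps half of the original color value:
print(faded_color((5.0, 0.0), (0.0, 0.0), (10.0, 0.0), (200.0, 100.0, 50.0)))
# -> (100.0, 50.0, 25.0)
```

For the circular-ring case in the text, A would be the inner-ring key point and B the outer-ring key point on the same radius as P, which is why the coefficient is shared by all pixels on that radius.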
In one embodiment, the image is partitioned by triangulation, in which case the key points are not regularly distributed. The color value of a pixel can then be determined as follows: determine the triangle in which the pixel lies (in this step, the triangle can be determined from the coordinates of the key points and the coordinates of the pixel); obtain the weight values of the three vertices of that triangle; and determine the pixel's color value according to these three weight values. One implementation of determining the pixel's color value from the three weight values is as follows. Let the three weight values be a, b, and c, corresponding to vertices A, B, and C respectively; let P be the pixel, C_P its original color value, and β its resulting color value. Then, reconstructing the formula from the surrounding description:
β = C_P · (a·AP + b·BP + c·CP) / (AP + BP + CP)
where AP, BP, and CP are the distances from pixel P to the three vertices A, B, and C. This formula normalizes AP, BP, and CP and uses the proportions of P's distances to the three vertices as the coefficients on the corresponding vertex components. In the special cases: when a, b, and c are all 1, β is the pixel's original color value; when a, b, and c are all 0, all pixels in the triangle are set to transparent.
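The per-triangle step can be sketched as below. The point-in-triangle test and the blending of the three vertex weight values by distance proportions follow the description above; the function names, and this particular reading of "distance proportions as coefficients", are illustrative assumptions:

```python
import math

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def _edge(p, q, r):
    # signed area term: which side of segment pq the point r lies on
    return (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])

def in_triangle(p, a, b, c):
    """True if pixel p lies in triangle ABC (edges included)."""
    s1, s2, s3 = _edge(a, b, p), _edge(b, c, p), _edge(c, a, p)
    has_neg = s1 < 0 or s2 < 0 or s3 < 0
    has_pos = s1 > 0 or s2 > 0 or s3 > 0
    return not (has_neg and has_pos)

def blend_coefficient(p, a, b, c, wa, wb, wc):
    """Blend the vertex weight values wa, wb, wc using the proportions of
    p's distances to the vertices: (wa*AP + wb*BP + wc*CP) / (AP+BP+CP).
    With all weights 1 this is 1 (keep the color); with all weights 0
    it is 0 (transparent)."""
    ap, bp, cp = dist(a, p), dist(b, p), dist(c, p)
    return (wa * ap + wb * bp + wc * cp) / (ap + bp + cp)
```

The pixel's faded color is then its original color value multiplied by the blend coefficient of the triangle it falls in.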
An embodiment of the present disclosure also provides a display interface that can display all the key points. In this technical solution, when key-point weight values are set, a setting command is received; the setting command is input by the user through controls and includes the ID of a key point and its weight value. In one implementation, the controls include a first control and a second control. The first control is used to select key points; it can be implemented as a drop-down menu showing the IDs of all key points, or it can be placed on the key points themselves so that the user selects a key point by clicking it directly. The second control is used to set the weight value of a key point and is displayed only after the first control has been triggered; it can be implemented as an input field into which the user types the weight value directly, or as a sliding control, such as a slider that the user drags to adjust the weight value. In one implementation, the user can select multiple key points at once and set their weight values together, to simplify the operation.
The technical solution of the above embodiment locates image key points, sets their weight values, and crops the image according to those weight values, providing the user with a convenient image cropping method that does not require processing every pixel manually.
FIG. 2 is a flowchart of Embodiment 2 of the image cropping method provided by an embodiment of the present disclosure. As shown in FIG. 2, the method may include the following steps:
S201: acquire first key points of the pupil in a face image, the first key points including standard key points and auxiliary key points;
S202: acquire second pupil key points in a standard template;
S203: fit the second pupil key points in the standard template to the auxiliary key points to obtain a new pupil image;
S204: set the weight values of the standard key points to 1 and the weight values of the auxiliary key points to 0;
S205: for the new pupil image, delete the auxiliary key points and perform color-gradient processing on the image region between the auxiliary key points and the standard key points.
The application scenario of this embodiment is applying a cosmetic-contact-lens effect to the user's pupil. So that the template can cover the whole pupil, after the face pupil image is obtained from the image sensor, some auxiliary key points are interpolated around the located standard key points, with a one-to-one correspondence between auxiliary and standard key points. One implementation of the interpolation is: locate the center point of the pupil, and extend outward along the line from the center point through a standard key point by a predetermined ratio; the point thus located is the auxiliary key point. After the auxiliary key points are set, the key points on the template correspond to the auxiliary key points, which gives the template an enlarging effect so that it can cover the whole pupil image. The method of acquiring the standard key points of the pupil in the face image is the same as that described in Embodiment 1 and is not repeated here. It can be understood that multiple key-point localization methods may be preset, and the user can choose the one suitable for the current scenario.
The key points in the standard template are preset and need no localization or recognition. In this embodiment, the standard template is a pupil template carrying a special effect, and the second key points correspond to the auxiliary key points identified in S201, so that the template's effect covers the pupil and a new pupil image is obtained. It should be noted that fitting only maps the pupil colors of the template onto the real pupil image according to the correspondence; the standard key points and auxiliary key points located in step S201 are not deleted.
In this embodiment, the user can set the weight values of the standard key points to 1 and those of the auxiliary key points to 0. As an alternative, the user need not set weight values: after fitting, the system automatically sets the weight values of the auxiliary key points to 0 and shows a preview to the user, who can then adjust the weight values according to the result.
In this embodiment, the cropping operation includes: deleting the auxiliary key points and performing color-gradient processing on the region from the auxiliary key points to the standard key points, with the color of a pixel closer to a key point of weight value 1 being closer to the original color value. In this embodiment, the degree of the gradient can also be controlled; for example, the farther a pixel is from a key point of weight value 1, the faster the color fades, which makes the cropping boundary closer to the real boundary of the pupil.
FIG. 3 is a flowchart of Embodiment 3 of the image cropping method provided by an embodiment of the present disclosure. As shown in FIG. 3, the method may include the following steps:
S301: acquire key points of the human eye and key points of the pupil in a face image;
S302: acquire third pupil key points in a standard template;
S303: fit the third pupil key points in the standard template to the key points of the pupil to obtain a new fitted image;
S304: set the weight values of the key points of the human eye and of the key points inside the eye to 1, and set the weight values of the key points outside the eye to 0;
S305: delete the key points with weight value 0, and delete the pixel color values of the image region between the key points with weight value 1 and the key points with weight value 0.
In this embodiment, the key-point weight values are used to delete the part of the pupil image covered by the eyelid.
The method of acquiring the key points of the human eye and of the pupil is the same as the key-point acquisition method in Embodiment 1 and is not repeated here. It should be noted that this embodiment does not specifically describe other processing of the pupil and eye images; it can be understood that the pupil-image cropping method of Embodiment 2 can be combined with this embodiment to achieve a better processing result.
In this embodiment, the key points of the human eye are the key points of the eye contour, and the key points of the pupil are the key points of the pupil contour and the center point of the pupil. After the pupil in the standard template is fitted to the pupil in the face image, a situation may arise in which the pupil extends beyond the eye contour; for example, some people's pupils sit high or low, or the eye contour is not large enough, which affects the result after fitting. In this embodiment, after fitting, the weight values of the key points outside the eye are set to 0 and those of the key points inside the eye are set to 1. Whether a key point lies outside or inside the eye can be determined by computing a coordinate range from the eye key points and checking the coordinates of the pupil key points against it. The key points and image regions outside the eye are deleted or made transparent directly, while the key points inside the eye keep their post-fitting colors. This effectively crops away the part of the pupil that lies outside the eye, which is closer to the state of a real eye and pupil. In essence, this embodiment uses the key-point weight values to create an occlusion layer for the pupil, part of which is transparent and part of which is completely opaque; overlaid on the pupil image, it produces an effect close to the natural state of the human eye.
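The inside/outside decision described above can be sketched with a standard point-in-polygon test over the eye-contour key points. The ray-casting helper and the diamond-shaped contour below are illustrative assumptions, not the disclosure's prescribed method:

```python
def point_in_polygon(pt, polygon):
    """Ray-casting test: True if pt lies inside the closed polygon,
    given as a list of (x, y) vertices in order."""
    x, y = pt
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge crosses the horizontal ray at y
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def pupil_weights(pupil_kps, eye_contour):
    """Weight value 1 for pupil key points inside the eye contour, 0 outside."""
    return {kp: (1 if point_in_polygon(kp, eye_contour) else 0)
            for kp in pupil_kps}

# hypothetical diamond-shaped eye contour and two pupil key points:
eye = [(0, 1), (2, 2), (4, 1), (2, 0)]
weights = pupil_weights([(2, 1), (5, 1)], eye)  # {(2, 1): 1, (5, 1): 0}
```

Regions associated with weight-0 key points are then deleted or made transparent, which is exactly the occlusion layer described in the text.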
The image cropping apparatus of one or more embodiments of the present disclosure is described in detail below. Those skilled in the art will understand that each of these apparatuses can be constructed by configuring commercially available hardware components through the steps taught in this solution.
FIG. 4 is a schematic structural diagram of Embodiment 1 of the image cropping apparatus provided by an embodiment of the present disclosure. As shown in FIG. 4, the apparatus includes an acquisition module 41, a setting module 42, and a cropping module 43.
The acquisition module 41 is configured to acquire key points in an image;
the setting module 42 is configured to set weight values for the key points;
the cropping module 43 is configured to crop the image according to the weight values.
In one embodiment, the cropping module 43 includes: a gradient processing module 431 configured to perform color-gradient processing on a region formed by key points according to the weight values of at least two key points.
In one embodiment, the key points form at least one triangle, and the color-gradient processing module 431 includes:
a position determination module 4311 configured to determine the triangle in which a pixel lies;
a weight-value acquisition module 4312 configured to obtain the weight values of the three vertices of that triangle;
a color determination module 4313 configured to determine the color value of the pixel according to the three weight values.
The apparatus shown in FIG. 4 can perform the method of the embodiment shown in FIG. 1; for parts not described in detail in this embodiment, reference may be made to the related description of the embodiment shown in FIG. 1. For the execution process and technical effects of this technical solution, see the description of the embodiment shown in FIG. 1, which is not repeated here.
FIG. 5 is a schematic structural diagram of Embodiment 2 of the image cropping apparatus provided by an embodiment of the present disclosure. As shown in FIG. 5, on the basis of the embodiment shown in FIG. 4, the apparatus further includes a fitting module 51.
The fitting module 51 fits the second pupil key points in the standard template to the auxiliary key points to obtain a new pupil image.
In this embodiment:
the acquisition module 41 is configured to acquire first key points of the pupil in a face image and second pupil key points in a standard template, the first key points including standard key points and auxiliary key points;
the setting module 42 is configured to set the weight values of the standard key points to 1 and the weight values of the auxiliary key points to 0;
the cropping module 43 is configured to delete the auxiliary key points and perform color-gradient processing on the image region between the auxiliary key points and the standard key points.
The apparatus shown in FIG. 5 can perform the method of the embodiment shown in FIG. 2; for parts not described in detail in this embodiment, reference may be made to the related description of the embodiment shown in FIG. 2. For the execution process and technical effects of this technical solution, see the description of the embodiment shown in FIG. 2, which is not repeated here.
FIG. 5 is also a schematic structural diagram of Embodiment 3 of the image cropping apparatus provided by an embodiment of the present disclosure. As shown in FIG. 5, the modules perform the following steps:
the acquisition module 41 is configured to acquire the key points of the human eye and the key points of the pupil in a face image, as well as the third pupil key points in a standard template;
the fitting module 51 is configured to fit the third pupil key points in the standard template to the key points of the pupil to obtain a new fitted image;
the setting module 42 is configured to set the weight values of the standard key points inside the eye contour to 1 and the weight values of the standard key points outside the eye contour to 0;
the cropping module 43 is configured to delete the image regions corresponding to the standard key points with weight value 0 and retain the image regions corresponding to the standard key points with weight value 1.
The apparatus shown in FIG. 5 can perform the method of the embodiment shown in FIG. 3; for parts not described in detail in this embodiment, reference may be made to the related description of the embodiment shown in FIG. 3. For the execution process and technical effects of this technical solution, see the description of the embodiment shown in FIG. 3, which is not repeated here.
FIG. 6 is a hardware block diagram illustrating an electronic device according to an embodiment of the present disclosure. As shown in FIG. 6, the electronic device 60 according to an embodiment of the present disclosure includes a memory 61 and a processor 62.
The memory 61 is configured to store non-transitory computer-readable instructions. Specifically, the memory 61 may include one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, random access memory (RAM) and/or cache memory; the non-volatile memory may include, for example, read-only memory (ROM), a hard disk, flash memory, and the like.
The processor 62 may be a central processing unit (CPU) or another form of processing unit with data processing capability and/or instruction execution capability, and can control other components in the electronic device 60 to perform desired functions. In one embodiment of the present disclosure, the processor 62 is configured to run the computer-readable instructions stored in the memory 61, so that the electronic device 60 performs all or part of the steps of the image cropping methods of the foregoing embodiments of the present disclosure.
Those skilled in the art should understand that, to solve the technical problem of how to obtain a good user experience, this embodiment may also include well-known structures such as a communication bus and interfaces; these well-known structures should also fall within the protection scope of the present disclosure.
For a detailed description of this embodiment, reference may be made to the corresponding descriptions in the foregoing embodiments, which are not repeated here.
FIG. 7 is a schematic diagram illustrating a computer-readable storage medium according to an embodiment of the present disclosure. As shown in FIG. 7, the computer-readable storage medium 70 according to an embodiment of the present disclosure stores non-transitory computer-readable instructions 71 thereon. When the non-transitory computer-readable instructions 71 are run by a processor, all or part of the steps of the image cropping methods of the foregoing embodiments of the present disclosure are performed.
The computer-readable storage medium 70 includes, but is not limited to: optical storage media (for example, CD-ROM and DVD), magneto-optical storage media (for example, MO), magnetic storage media (for example, magnetic tape or portable hard disk), media with built-in rewritable non-volatile memory (for example, a memory card), and media with built-in ROM (for example, a ROM cartridge).
For a detailed description of this embodiment, reference may be made to the corresponding descriptions in the foregoing embodiments, which are not repeated here.
FIG. 8 is a schematic diagram of the hardware structure of a terminal device according to an embodiment of the present disclosure. As shown in FIG. 8, the image processing terminal 800 includes the image cropping apparatus of the above apparatus embodiments.
The terminal device may be implemented in various forms. Terminal devices in the present disclosure may include, but are not limited to, mobile terminal devices such as mobile phones, smart phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), navigation devices, vehicle-mounted terminal devices, vehicle-mounted display terminals, and vehicle-mounted electronic rear-view mirrors, as well as fixed terminal devices such as digital TVs and desktop computers.
As an equivalent alternative implementation, the terminal may also include other components. As shown in FIG. 8, the image cropping terminal 800 may include a power supply unit 801, a wireless communication unit 802, an A/V (audio/video) input unit 803, a user input unit 804, a sensing unit 805, an interface unit 806, a controller 807, an output unit 808, a storage unit 809, and so on. FIG. 8 shows a terminal with various components, but it should be understood that implementing all of the illustrated components is not required, and more or fewer components may be implemented instead.
The wireless communication unit 802 allows radio communication between the terminal 800 and a wireless communication system or network. The A/V input unit 803 is used to receive audio or video signals. The user input unit 804 can generate key input data according to commands input by the user, to control various operations of the terminal device. The sensing unit 805 detects the current state of the terminal 800, its position, the presence or absence of the user's touch input, its orientation, its acceleration or deceleration and direction, and the like, and generates commands or signals for controlling the operation of the terminal 800. The interface unit 806 serves as an interface through which at least one external device can connect to the terminal 800. The output unit 808 is configured to provide output signals in a visual, audio, and/or tactile manner. The storage unit 809 can store software programs for the processing and control operations performed by the controller 807, or temporarily store data that has been or will be output; it may include at least one type of storage medium. Moreover, the terminal 800 can cooperate with a network storage device that performs the storage function of the storage unit 809 through a network connection. The controller 807 generally controls the overall operation of the terminal device; in addition, the controller 807 may include a multimedia module for reproducing or playing back multimedia data. The controller 807 can perform pattern recognition processing to recognize handwriting input or picture-drawing input performed on the touch screen as characters or images. The power supply unit 801 receives external or internal power under the control of the controller 807 and provides the appropriate power required to operate each element and component.
The various embodiments of the image cropping method proposed by the present disclosure may be implemented using a computer-readable medium, such as computer software, hardware, or any combination thereof. For hardware implementation, the various embodiments may be implemented using at least one of an application-specific integrated circuit (ASIC), a digital signal processor (DSP), a digital signal processing device (DSPD), a programmable logic device (PLD), a field-programmable gate array (FPGA), a processor, a controller, a microcontroller, a microprocessor, or an electronic unit designed to perform the functions described herein; in some cases, the various embodiments may be implemented in the controller 807. For software implementation, the various embodiments may be implemented with separate software modules that allow at least one function or operation to be performed. The software code may be implemented by a software application (or program) written in any suitable programming language, and may be stored in the storage unit 809 and executed by the controller 807.
For a detailed description of this embodiment, reference may be made to the corresponding descriptions in the foregoing embodiments, which are not repeated here.
The basic principles of the present disclosure have been described above with reference to specific embodiments. However, it should be noted that the merits, advantages, effects, and the like mentioned in the present disclosure are merely examples rather than limitations, and it cannot be assumed that every embodiment of the present disclosure must possess them. Moreover, the specific details disclosed above are provided only for illustration and ease of understanding, not as limitations; they do not restrict the present disclosure to being implemented with those specific details.
The block diagrams of the devices, apparatuses, equipment, and systems involved in the present disclosure are merely illustrative examples and are not intended to require or imply that connection, arrangement, or configuration must follow the manner shown in the block diagrams. As those skilled in the art will recognize, these devices, apparatuses, equipment, and systems can be connected, arranged, and configured in any manner. Words such as "include", "comprise", and "have" are open-ended and mean "including but not limited to", and can be used interchangeably with it. As used herein, the words "or" and "and" mean "and/or" and can be used interchangeably with it, unless the context clearly indicates otherwise. As used herein, the word "such as" means "such as but not limited to" and can be used interchangeably with it.
In addition, an "or" used in an enumeration of items beginning with "at least one of" indicates a disjunctive enumeration, so that, for example, an enumeration of "at least one of A, B, or C" means A or B or C, or AB or AC or BC, or ABC (i.e., A and B and C). Furthermore, the word "exemplary" does not mean that the described example is preferred or better than other examples.
It should also be pointed out that, in the systems and methods of the present disclosure, components or steps can be decomposed and/or recombined. Such decompositions and/or recombinations should be regarded as equivalent solutions of the present disclosure.
Various changes, substitutions, and alterations to the techniques described herein can be made without departing from the teachings defined by the appended claims. Furthermore, the scope of the claims of the present disclosure is not limited to the specific aspects of the processes, machines, manufacture, compositions of matter, means, methods, and actions described above. Processes, machines, manufacture, compositions of matter, means, methods, or actions presently existing or later to be developed that perform substantially the same function or achieve substantially the same result as the corresponding aspects described herein may be utilized. Accordingly, the appended claims include within their scope such processes, machines, manufacture, compositions of matter, means, methods, or actions.
The above description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present disclosure. Various modifications to these aspects will be readily apparent to those skilled in the art, and the general principles defined herein can be applied to other aspects without departing from the scope of the present disclosure. Therefore, the present disclosure is not intended to be limited to the aspects shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The above description has been given for the purposes of illustration and description. Furthermore, this description is not intended to limit the embodiments of the present disclosure to the forms disclosed herein. Although a number of example aspects and embodiments have been discussed above, those skilled in the art will recognize certain variations, modifications, changes, additions, and sub-combinations thereof.

Claims (12)

  1. An image cropping method, comprising:
    acquiring key points in an image;
    setting weight values for the key points; and
    cropping the image according to the weight values.
  2. The image cropping method according to claim 1, wherein cropping the image according to the weight values comprises:
    performing color-gradient processing on a region formed by key points according to the weight values of at least two key points.
  3. The image cropping method according to claim 2, wherein:
    the key points form at least one triangle; and
    the color-gradient processing comprises:
    determining the triangle in which a pixel lies;
    obtaining the weight values of the three vertices of the triangle; and
    determining the color value of the pixel according to the three weight values.
  4. The image cropping method according to claim 3, wherein:
    the image is an image of a pupil in a human face;
    the key points comprise standard key points and auxiliary key points;
    the standard key points are pupil-contour key points and a pupil-center key point; and
    the auxiliary key points correspond to the pupil-contour key points and are located outside the pupil-contour key points, in the direction away from the pupil-center key point.
  5. The image cropping method according to claim 4, further comprising, before setting the weight values of the key points:
    acquiring pupil key points in a standard template, and fitting the pupil key points in the standard template to the auxiliary key points to obtain a new pupil image.
  6. The image cropping method according to claim 5, wherein setting the weight values of the key points comprises:
    setting the weight values of the standard key points of the pupil to 1, and setting the weight values of the auxiliary key points to 0.
  7. The image cropping method according to claim 6, wherein cropping the image according to the weight values comprises:
    for the new pupil image, deleting the auxiliary key points, and performing color-gradient processing on the image region between the auxiliary key points and the standard key points.
  8. The image cropping method according to claim 1, wherein:
    the image is an image of a human eye and a pupil in a human face; and
    the key points comprise eye-contour key points and pupil-contour key points.
  9. The image cropping method according to claim 8, wherein setting the weight values of the key points and cropping the image according to the weight values comprises:
    setting the weight values of the standard key points inside the eye contour to 1, and setting the weight values of the standard key points outside the eye contour to 0; and
    deleting the image regions corresponding to the standard key points whose weight value is 0, and retaining the image regions corresponding to the standard key points whose weight value is 1.
  10. An image cropping apparatus, comprising:
    an acquisition module configured to acquire key points in an image;
    a setting module configured to set weight values for the key points; and
    a cropping module configured to crop the image according to the weight values.
  11. An electronic device, comprising:
    at least one processor; and
    a memory communicatively connected to the at least one processor; wherein
    the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to perform the image cropping method according to any one of claims 1-9.
  12. A non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the image cropping method according to any one of claims 1-9.
PCT/CN2019/073073 2018-06-14 2019-01-25 Image cropping method and apparatus, electronic device, and computer-readable storage medium WO2019237747A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810616123.6A 2018-06-14 Image cropping method and apparatus, electronic device, and computer-readable storage medium
CN201810616123.6 2018-06-14

Publications (1)

Publication Number Publication Date
WO2019237747A1 true WO2019237747A1 (zh) 2019-12-19

Family

ID=64420320

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/073073 WO2019237747A1 (zh) 2018-06-14 2019-01-25 图像裁剪方法、装置、电子设备及计算机可读存储介质

Country Status (2)

Country Link
CN (1) CN108921856B (zh)
WO (1) WO2019237747A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111489311A (zh) * 2020-04-09 2020-08-04 北京百度网讯科技有限公司 Face beautification method and apparatus, electronic device, and storage medium
CN111580902A (zh) * 2020-04-20 2020-08-25 微梦创科网络科技(中国)有限公司 Method and system for locating mobile-terminal elements based on picture analysis

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108921856B (zh) * 2018-06-14 2022-02-08 北京微播视界科技有限公司 Image cropping method and apparatus, electronic device, and computer-readable storage medium
CN110097622B (zh) * 2019-04-23 2022-02-25 北京字节跳动网络技术有限公司 Image rendering method and apparatus, electronic device, and computer-readable storage medium
CN111626166B (zh) * 2020-05-19 2023-06-09 Oppo广东移动通信有限公司 Image processing method and apparatus, electronic device, and storage medium
CN112218160A (zh) * 2020-10-12 2021-01-12 北京达佳互联信息技术有限公司 Video conversion method and apparatus, video conversion device, and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105205779A (zh) * 2015-09-15 2015-12-30 厦门美图之家科技有限公司 Eye-image processing method and system based on image warping, and photographing terminal
CN107146196A (zh) * 2017-03-20 2017-09-08 深圳市金立通信设备有限公司 Image beautification method and terminal
CN107886484A (zh) * 2017-11-30 2018-04-06 广东欧珀移动通信有限公司 Beautification method and apparatus, computer-readable storage medium, and electronic device
CN108012081A (zh) * 2017-12-08 2018-05-08 北京百度网讯科技有限公司 Intelligent beautification method, apparatus, terminal, and computer-readable storage medium
CN108921856A (zh) * 2018-06-14 2018-11-30 北京微播视界科技有限公司 Image cropping method and apparatus, electronic device, and computer-readable storage medium

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4760999B1 (ja) * 2010-10-29 2011-08-31 オムロン株式会社 Image processing apparatus, image processing method, and control program
US20170163953A1 (en) * 2015-12-08 2017-06-08 Le Holdings (Beijing) Co., Ltd. Method and electronic device for processing image containing human face
CN107680033B (zh) * 2017-09-08 2021-02-19 北京小米移动软件有限公司 Picture processing method and apparatus
CN107818305B (zh) * 2017-10-31 2020-09-22 Oppo广东移动通信有限公司 Image processing method and apparatus, electronic device, and computer-readable storage medium


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111489311A (zh) * 2020-04-09 2020-08-04 北京百度网讯科技有限公司 Face beautification method and apparatus, electronic device, and storage medium
CN111489311B (zh) * 2020-04-09 2023-08-08 北京百度网讯科技有限公司 Face beautification method and apparatus, electronic device, and storage medium
CN111580902A (zh) * 2020-04-20 2020-08-25 微梦创科网络科技(中国)有限公司 Method and system for locating mobile-terminal elements based on picture analysis
CN111580902B (zh) * 2020-04-20 2024-01-26 微梦创科网络科技(中国)有限公司 Method and system for locating mobile-terminal elements based on picture analysis

Also Published As

Publication number Publication date
CN108921856B (zh) 2022-02-08
CN108921856A (zh) 2018-11-30


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19819160

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 01.04.2021)

122 Ep: pct application non-entry in european phase

Ref document number: 19819160

Country of ref document: EP

Kind code of ref document: A1