CN111626166B - Image processing method, device, electronic equipment and storage medium - Google Patents

Image processing method, device, electronic equipment and storage medium

Info

Publication number
CN111626166B
CN111626166B CN202010426843.3A
Authority
CN
China
Prior art keywords
portrait
image
ordinate
cutting
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010426843.3A
Other languages
Chinese (zh)
Other versions
CN111626166A (en)
Inventor
刘鹏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN202010426843.3A priority Critical patent/CN111626166B/en
Publication of CN111626166A publication Critical patent/CN111626166A/en
Application granted granted Critical
Publication of CN111626166B publication Critical patent/CN111626166B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10004Still image; Photographic image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30196Human being; Person
    • G06T2207/30201Face

Abstract

The embodiments of the application disclose an image processing method, an image processing apparatus, an electronic device, and a storage medium. The method comprises the following steps: acquiring a portrait image; performing face detection on the portrait image to obtain a portrait position information frame; determining cropping control parameters for cropping the portrait image according to the portrait position information frame; and cropping the portrait image according to the cropping control parameters to obtain a target image. Because the cropping control parameters are determined from the portrait position information frame, the method can adapt to more ID-photo sizes and to different face shapes, crop an image that meets the requirements, and improve the intelligence of ID-photo cropping.

Description

Image processing method, device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to an image processing method, an image processing device, an electronic device, and a storage medium.
Background
Existing ID-photo production techniques generally adopt one of two schemes. The first prompts the user with a human-shaped guide box while taking the photo: the figure must fall inside the box, otherwise an error is shown. The second detects the eye positions through face detection, estimates the positions of the other facial organs from the eye positions to determine the position of the figure, and crops at that position. Both schemes have the following disadvantages. First, adaptability to credentials of different sizes is poor: credentials for different occasions have different shape and size requirements (visa photos in particular are very strict), and the prior art must be adapted separately to each specific size rather than adapting automatically to different requirements. Second, the requirements on the distance and angle between the person and the camera are high: in the first scheme the figure must fit inside the guide box, so standing too close to or too far from the camera causes a mismatch with the box, and an angular offset likewise prevents the figure from filling it. Third, adaptability to people with different face shapes is poor: the second scheme estimates the positions of other facial organs from the eye positions, and this estimation can produce large errors for a person with an unusual face shape. Fourth, suitability for existing photographs is poor: because the first scheme depends on the guide box at capture time, it adapts poorly to cropping existing digital photos.
Disclosure of Invention
The embodiments of the present application provide an image processing method, an image processing apparatus, an electronic device, and a storage medium, which can adapt to more ID-photo sizes and different face shapes and crop out images that meet the requirements.
In a first aspect, an embodiment of the present application provides an image processing method, including:
acquiring a portrait image;
performing face detection on the portrait image to obtain a portrait position information frame;
determining cropping control parameters for cropping the portrait image according to the portrait position information frame;
and cropping the portrait image according to the cropping control parameters to obtain a target image.
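The four steps of the method can be sketched end-to-end as follows. This is only an illustration of the claimed flow, not the patent's implementation: the face detector is a hypothetical stand-in, and the crop is a plain rectangular slice rather than the full parameter derivation described later in the specification.

```python
def process_portrait(image, detect_face):
    """Illustrative flow of the claimed method.

    `image` is a 2D list of pixel rows; `detect_face` is a hypothetical
    callable returning the portrait position information frame as
    (x_left, y_top, x_right, y_bottom) in the top-left coordinate system.
    """
    x_l, y_t, x_r, y_b = detect_face(image)  # face detection -> position frame
    # Cropping control parameters (simplified here to the frame itself),
    # then a rectangular crop yielding the target image.
    return [row[x_l:x_r] for row in image[y_t:y_b]]
```

Usage: given a 10x10 image and a detector returning (2, 3, 5, 7), the result is the 4-row, 3-column region starting at row 3, column 2.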
In a second aspect, an embodiment of the present application provides an image processing apparatus, including:
an acquisition unit, configured to acquire a portrait image;
a detection unit, configured to perform face detection on the portrait image to obtain a portrait position information frame;
a determining unit, configured to determine cropping control parameters for cropping the portrait image according to the portrait position information frame;
and a cropping unit, configured to crop the portrait image according to the cropping control parameters to obtain a target image.
In a third aspect, an embodiment of the present application provides an electronic device, including a processor, a memory, a communication interface, and one or more programs, where the one or more programs are stored in the memory and configured to be executed by the processor, the programs including instructions for performing the steps in the first aspect of the embodiment of the present application.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium, where the computer-readable storage medium stores a computer program for electronic data exchange, where the computer program causes a computer to perform some or all of the steps as described in the first aspect of the embodiments of the present application.
In a fifth aspect, embodiments of the present application provide a computer program product, wherein the computer program product comprises a non-transitory computer readable storage medium storing a computer program operable to cause a computer to perform some or all of the steps described in the first aspect of the embodiments of the present application. The computer program product may be a software installation package.
By implementing the embodiment of the application, the following beneficial effects are achieved:
It can be seen that the image processing method, apparatus, electronic device, and storage medium provided in the embodiments of the present application acquire a portrait image; perform face detection on the portrait image to obtain a portrait position information frame; determine cropping control parameters for cropping the portrait image according to the portrait position information frame; and crop the portrait image according to the cropping control parameters to obtain a target image. Because the cropping control parameters are determined from the portrait position information frame, the method can adapt to more ID-photo sizes and different face shapes, crop images that meet the requirements, and improve the intelligence of ID-photo cropping.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1A is a schematic structural diagram of an electronic device according to an embodiment of the present application;
fig. 1B is a schematic flow chart of an image processing method according to an embodiment of the present application;
fig. 1C is a schematic illustration of a presentation of a frame of human position information obtained by face detection according to an embodiment of the present application;
fig. 1D is a schematic illustration of a mask image according to an embodiment of the present disclosure;
fig. 1E is a schematic diagram illustrating a left-hand portrait during clipping according to an embodiment of the present application;
fig. 1F is a schematic diagram illustrating a portrait deviation right during clipping according to an embodiment of the present application;
FIG. 1G is a schematic illustration of compensating a portrait image according to an embodiment of the present application;
FIG. 1H is a schematic illustration of compensation of a mask image according to an embodiment of the present disclosure;
FIG. 2 is a flowchart of another image processing method according to an embodiment of the present disclosure;
fig. 3 is a schematic flow chart of yet another image processing method according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
fig. 5A is a schematic structural view of an image processing apparatus according to an embodiment of the present application;
fig. 5B is a modified structure of the image processing apparatus described in fig. 5A provided in the embodiment of the present application;
fig. 5C is a further modified structure of the image processing apparatus described in fig. 5A provided in the embodiment of the present application.
Detailed Description
In order to make the present application solution better understood by those skilled in the art, the following description will clearly and completely describe the technical solution in the embodiments of the present application with reference to the accompanying drawings in the embodiments of the present application, and it is apparent that the described embodiments are only some embodiments of the present application, not all embodiments. All other embodiments, which can be made by one of ordinary skill in the art based on the embodiments herein without making any inventive effort, are intended to be within the scope of the present application.
The terms first, second and the like in the description and in the claims of the present application and in the above-described figures, are used for distinguishing between different objects and not for describing a particular sequential order. Furthermore, the terms "comprise" and "have," as well as any variations thereof, are intended to cover a non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those listed steps or elements but may include other steps or elements not listed or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the present application. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of skill in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments.
In order to facilitate understanding of the present solution, the following explains the specialized vocabulary related to the present solution.
Mask: a black-and-white binary image used to distinguish the portrait from the background.
Portrait segmentation: identifying the portrait region in an image to obtain a pixel-level mask.
Intersection over union (IoU): a detection evaluation metric giving the overlap ratio between the window generated by a model and the original labelled window; it reflects the accuracy of a detection or segmentation result, with higher values meaning more accurate.
Top-left coordinate system: a coordinate system with its origin at the upper-left corner, x extending to the right and y extending downward; it is widely used in computer image processing.
Rounding down: directly truncating the fractional part of a non-integer value and keeping the integer part.
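The IoU metric defined above can be computed directly for two axis-aligned boxes. A minimal sketch (boxes given as (x1, y1, x2, y2) in the top-left coordinate system):

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)
    in the top-left coordinate system described above."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Overlap rectangle; width/height clamp to 0 when the boxes are disjoint.
    iw = max(0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union else 0.0
```

Identical boxes give 1.0 and disjoint boxes give 0.0, matching "the higher the value, the more accurate" above.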
The electronic device according to the embodiments of the present application may include various handheld devices, vehicle-mounted devices, and wearable devices (smart watches, smart bracelets, wireless earphones, augmented/virtual reality devices, smart glasses) with wireless communication functions, computing devices or other processing devices connected to a wireless modem, and various forms of User Equipment (UE), Mobile Stations (MS), terminal devices, and so on. For convenience of description, the above-mentioned devices are collectively referred to as electronic devices.
The embodiments of the present application are described in detail below.
Referring to fig. 1A, fig. 1A is a schematic structural diagram of an electronic device disclosed in an embodiment of the present application, where the electronic device 100 includes a storage and processing circuit 110, and a sensor 170 connected to the storage and processing circuit 110, and where:
the electronic device 100 may include control circuitry that may include storage and processing circuitry 110. The storage and processing circuitry 110 may include memory, such as hard drive memory, non-volatile memory (e.g., flash memory or other electronically programmable read-only memory used to form solid state drives, etc.), volatile memory (e.g., static or dynamic random access memory, etc.), and the like, as embodiments of the present application are not limited. Processing circuitry in the storage and processing circuitry 110 may be used to control the operation of the electronic device 100. The processing circuitry may be implemented based on one or more microprocessors, microcontrollers, digital signal processors, baseband processors, power management units, audio codec chips, application specific integrated circuits, display driver integrated circuits, and the like.
The storage and processing circuitry 110 may be used to run software in the electronic device 100, such as internet browsing applications, voice over internet protocol (Voice over Internet Protocol, VOIP) telephone call applications, email applications, media playing applications, operating system functions, and the like. Such software may be used to perform some control operations, such as image acquisition based on a camera, ambient light measurement based on an ambient light sensor, proximity sensor measurement based on a proximity sensor, information display functions implemented based on status indicators such as status indicators of light emitting diodes, touch event detection based on a touch sensor, functions associated with displaying information on multiple (e.g., layered) display screens, operations associated with performing wireless communication functions, operations associated with collecting and generating audio signals, control operations associated with collecting and processing button press event data, and other functions in electronic device 100, to name a few.
The electronic device 100 may include an input-output circuit 150. The input-output circuit 150 is operable to cause the electronic device 100 to effect input and output of data, i.e., to allow the electronic device 100 to receive data from an external device and also to allow the electronic device 100 to output data from the electronic device 100 to an external device. The input-output circuit 150 may further include a sensor 170. The sensor 170 may include an ultrasonic fingerprint recognition module, an ambient light sensor, a proximity sensor based on light and capacitance, a touch sensor (e.g., based on a light touch sensor and/or a capacitance touch sensor, where the touch sensor may be part of a touch display screen or may be used independently as a touch sensor structure), an acceleration sensor, and other sensors, etc., where the ultrasonic fingerprint recognition module may be integrated under the screen, or the ultrasonic fingerprint recognition module may be disposed on a side or a back of an electronic device, which is not limited herein, and the ultrasonic fingerprint recognition module may be used to collect fingerprint images.
The sensor 170 may include a first camera and a second camera. The first camera may be a front camera or a rear camera; the second camera may be an infrared (Infrared Radiation, IR) camera or a visible-light (RGB) camera. When an IR camera shoots, the pupil reflects infrared light, so the IR camera captures pupil images more accurately than an RGB camera. A visible-light camera requires more subsequent processing for pupil detection and carries a larger computational load, but its precision and accuracy can exceed those of an IR camera and its general applicability is better.
The input-output circuit 150 may also include one or more display screens, such as display screen 130. The display 130 may include one or a combination of several of a liquid crystal display, an organic light emitting diode display, an electronic ink display, a plasma display, and a display using other display technologies. Display 130 may include an array of touch sensors (i.e., display 130 may be a touch-sensitive display). The touch sensor may be a capacitive touch sensor formed of an array of transparent touch sensor electrodes, such as Indium Tin Oxide (ITO) electrodes, or may be a touch sensor formed using other touch technologies, such as acoustic wave touch, pressure sensitive touch, resistive touch, optical touch, etc., as embodiments of the present application are not limited.
The electronic device 100 may also include an audio component 140. The audio component 140 may be used to provide audio input and output functionality for the electronic device 100. The audio components 140 in the electronic device 100 may include speakers, microphones, buzzers, tone generators, and other components for generating and detecting sound.
The communication circuitry 120 may be used to provide the electronic device 100 with the ability to communicate with external devices. The communication circuit 120 may include analog and digital input-output interface circuits, and wireless communication circuits based on radio frequency signals and/or optical signals. The wireless communication circuitry in the communication circuitry 120 may include radio frequency transceiver circuitry, power amplifier circuitry, low noise amplifiers, switches, filters, and antennas. For example, the wireless communication circuitry in the communication circuitry 120 may include circuitry for supporting near field communication (Near Field Communication, NFC) by transmitting and receiving near field coupled electromagnetic signals. For example, the communication circuit 120 may include a near field communication antenna and a near field communication transceiver. The communication circuit 120 may also include a cellular telephone transceiver and antenna, a wireless local area network transceiver circuit and antenna, and the like.
The electronic device 100 may further include a battery, power management circuitry, and other input-output units 160. The input-output unit 160 may include buttons, levers, click wheels, scroll wheels, touch pads, keypads, keyboards, cameras, light emitting diodes, and other status indicators, etc.
A user may control the operation of the electronic device 100 by inputting commands through the input-output circuit 150, and may receive status information and other outputs from the electronic device 100 through the output data of the input-output circuit 150.
Referring to fig. 1B, fig. 1B is a flowchart of an image processing method according to an embodiment of the present application, and as shown in fig. 1B, the image processing method provided in the present application includes:
101. and acquiring a portrait image.
Wherein the portrait image is an image including a face of a user.
In a specific implementation, the portrait image may be captured through a camera application on the electronic device, or an existing portrait image on the electronic device may be used; for example, the user may select a portrait image stored in the album of the electronic device. No limitation is imposed here.
102. And carrying out face detection on the portrait image to obtain a portrait position information frame.
Face detection may be performed on the portrait image to obtain a plurality of feature points of the face, including a forehead feature point, a chin feature point, a left face-edge feature point, and a right face-edge feature point, from which the portrait position information frame can be determined. Referring to fig. 1C, a schematic illustration of the portrait position information frame obtained by face detection: the solid-line box in fig. 1C is the portrait position information frame, whose upper border is the upper forehead edge, left border the left face edge, right border the right face edge, and lower border the lower chin edge. The frame has four vertices, and from their coordinates the ordinate of the upper forehead edge, the ordinate of the chin, the left face edge, and the right face edge can be determined. The dashed box in fig. 1C is a face detection box shown in the photographing interface of the electronic device; it assists the user during photographing and improves the success rate of capturing a portrait image. Specifically, the user can keep the face area inside the face detection box, thereby ensuring that a portrait image with a suitable face size is captured.
Optionally, as shown in fig. 1C, in the embodiment of the present application the plurality of feature points obtained by face detection on the portrait image may further include feature points of other facial regions, for example nose, eye, and mouth feature points, which may be used to assist in processing the portrait image into credentials with stricter requirements on facial details.
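Since the portrait position information frame is axis-aligned, the face edges and the forehead/chin ordinates described above can be read off its four vertices. A minimal sketch in the top-left coordinate system (no particular vertex ordering is assumed):

```python
def frame_edges(vertices):
    """Extract X_l (left face edge), X_r (right face edge), the ordinate of
    the upper forehead edge, and the ordinate of the lower chin edge from the
    four vertices of the portrait position information frame.

    Top-left coordinate system: y grows downward, so the forehead has the
    smallest ordinate and the chin the largest.
    """
    xs = [v[0] for v in vertices]
    ys = [v[1] for v in vertices]
    x_left, x_right = min(xs), max(xs)
    y_forehead, y_chin = min(ys), max(ys)
    return x_left, x_right, y_forehead, y_chin
```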
103. And determining clipping control parameters for clipping the portrait image according to the portrait position information frame.
The cropping control parameters may include a cropping start point, a relative height, and a relative width, where the relative height is the height of the image after the portrait image is cropped and the relative width is the width of the image after the portrait image is cropped. In a specific implementation, the sizes and positions of the face region and the body region in an ID photo are relatively fixed, so both must meet the requirements. Directly cropping the whole portrait image to the required size cannot satisfy detail requirements such as the face region, the body region, and the position of the eyes. Therefore, the portrait image can first be cropped according to the proportion requirements on the face region and the body region in the portrait, and the cropped image can then be stretched or scaled to finally obtain an ID photo that meets the size requirements.
Optionally, the cropping control parameters include a cropping start point, a relative height, and a relative width, and the step 103 of determining the cropping control parameters for cropping the portrait image according to the portrait position information frame may include the following steps:
31. determining target size information of the target image, the target size information comprising a target image height and a target image width;
32. determining the relative height and the relative width for cropping the portrait image according to the portrait position information frame and the target size information;
33. determining a cropping start point for cropping the portrait image according to the portrait position information frame and the target size information.
The target size information is final size information of the target image required by the user, and comprises a target image height and a target image width of the target image.
In a specific implementation, credentials of different specifications require different target image heights and widths; for example, 1-inch and 2-inch credentials differ in both. The target size information of the target image currently required by the user can therefore be determined. When a user shoots credentials with the electronic device, since each credential type has relatively fixed size requirements, a correspondence between credential types and size information can be preset in the electronic device, so that after acquiring the portrait image the electronic device can determine the corresponding target size information according to the credential type selected by the user.
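The preset correspondence between credential type and target size information can be as simple as a lookup table. A sketch with illustrative values only (the actual presets are not given in the text; 413x295 px corresponds to a common 1-inch photo at 300 dpi):

```python
# Hypothetical preset mapping: credential type -> (target_h, target_w) in pixels.
# The concrete values are illustrative, not taken from the patent.
CREDENTIAL_SIZES = {
    "1-inch": (413, 295),
    "2-inch": (626, 413),
}

def target_size(credential_type):
    """Return the preset (target image height, target image width)."""
    try:
        return CREDENTIAL_SIZES[credential_type]
    except KeyError:
        raise ValueError(f"unknown credential type: {credential_type}")
```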
The relative height and relative width are the height and width to which the portrait image must be cropped so that the aspect ratio is satisfied; the actual height and width of the target image are then obtained by stretching or compressing to meet the size requirement. In implementation, the position and size of the user's face in the portrait image are known from the portrait position information frame, and the size of the face region in the target image can be determined from the target size information together with the frame. This yields the relative height and relative width for cropping the portrait image, so that the target image cropped at the relative height and relative width can meet the size requirements.
Optionally, the portrait position information frame includes a left face edge and a right face edge, and determining, in step 32, the relative height and the relative width for cropping the portrait image according to the portrait position information frame and the target size information may include the following steps:
3201. determining a portrait width according to the left face edge and the right face edge;
3202. determining the relative height according to the portrait width;
3203. determining the relative width according to the ratio of the target image height to the target image width and the relative height.
The portrait width is determined from the left and right face edges according to the following formula:

W_f = X_r - X_l

where W_f is the portrait width in the portrait image, X_r is the abscissa of the right face edge, and X_l is the abscissa of the left face edge.
Considering that the ratio of the portrait width to the target image width is proportional to the aspect ratio of the target image size:

W_f / W_P = (1/a) * (H_Pi / W_Pi)

where H_P is the relative height for cropping the portrait image, W_P is the relative width for cropping the portrait image, and a is a proportionality coefficient.

Further, the relative height may be determined according to the following formula:

H_P = a * W_f

Since the relative height and the relative width follow the aspect ratio of the target image:

H_P / W_P = H_Pi / W_Pi

where H_Pi is the target image height included in the target size information and W_Pi is the target image width included in the target size information.

Thus, the relative width may be determined according to the following formula:

W_P = H_P * (W_Pi / H_Pi) = a * W_f * (W_Pi / H_Pi)

where a is a proportionality coefficient that can be determined from multiple measurements performed in advance; for example, a may be 1/2.6. Since pixel coordinate values are integers, after the relative width W_P is determined, W_P is rounded down to obtain the integer relative width.
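The relative-size computation above can be sketched directly in Python, with the symbols as defined in the text (a = 1/2.6 is the example coefficient given above, and the relative width is rounded down as described):

```python
import math

def relative_crop_size(x_l, x_r, target_h, target_w, a=1/2.6):
    """Relative height H_P and relative width W_P for cropping, per
    H_P = a * W_f and W_P = H_P * (W_Pi / H_Pi), with W_P rounded down.

    x_l, x_r: abscissas of the left/right face edges (X_l, X_r).
    target_h, target_w: target image height H_Pi and width W_Pi.
    """
    w_f = x_r - x_l                              # portrait width W_f = X_r - X_l
    h_p = a * w_f                                # relative height H_P
    w_p = math.floor(h_p * target_w / target_h)  # relative width W_P, floored
    return h_p, w_p
```

Usage: with face edges at 100 and 360 and a 413x295 target, the relative height is 100 and the relative width floors to 71.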
It can be seen that the aspect-ratio requirement of the desired target image is used to limit the proportion of the portrait width within the overall target image width, which in turn controls the proportion of torso that is retained. Taller target images (high aspect ratio) tend to keep more of the torso below the head, while flatter ones (low aspect ratio) keep less. The proportion of the portrait width within the target image width is therefore determined from the aspect ratio of the target image height to the target image width in the target size information. When the portrait width takes a large proportion, the portrait as a whole becomes larger; combined with the determination of the position from the top of the head to the top of the photo, this influences the torso cropping proportion, so that both the proportion of the portrait width within the target image width and the proportion between portrait and torso better satisfy the user's requirements.
The step 33 of determining a clipping start point for clipping the portrait image according to the portrait position information frame and the target size information may include the steps of:
3301. Determining a portrait center point in the horizontal direction according to the left edge of the face and the right edge of the face;
3302. determining the abscissa of the clipping starting point according to the portrait center point and the relative width;
3303. and determining a first ordinate according to the upper edge of the forehead and the relative height, and taking the first ordinate as the ordinate of the cutting starting point.
The portrait center point in the horizontal direction is determined according to the left edge and the right edge of the human face, and the abscissa of the portrait center point can be determined according to the following formula:
X_C = (X_r + X_l) / 2
where X_C is the abscissa of the portrait center point. Since pixel coordinate values are integers, the obtained X_C is rounded down to obtain the rounded X_C.
The abscissa of the clipping start point is determined according to the portrait center point and the relative width, and the following formula can be used:
X = X_C - 0.5 * W_P
where X is the abscissa of the clipping start point.
The first ordinate is determined according to the upper edge of the forehead and the relative height, and specifically may be according to the following formula:
Y1 = Y_T - g1 * H_P
where Y1 is the first ordinate, Y_T is the ordinate of the upper edge of the forehead, and g1 is the ratio of the distance from the forehead to the top of the cropped target image to the target image height. In specific implementations, target images with different size requirements have different values of g1; for example, g1 may be 0.15.
Thus, the first ordinate may be taken as the ordinate of the clipping start point, resulting in the abscissa and the ordinate (X, Y1) of the clipping start point.
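A minimal sketch of steps 3301 to 3303, with hypothetical coordinate arguments; the formulas X_C = (X_r + X_l) / 2 (floored), X = X_C - 0.5 * W_P and Y1 = Y_T - g1 * H_P follow the text:

```python
import math

def crop_start(face_left_x, face_right_x, forehead_top_y, w_p, h_p, g1=0.15):
    """Upper-left corner (X, Y1) of the crop rectangle (top-left origin):
    X_C = (X_r + X_l) / 2 (floored), X = X_C - 0.5 * W_P,
    Y1 = Y_T - g1 * H_P, with g1 the head-top margin ratio (e.g. 0.15)."""
    x_c = math.floor((face_right_x + face_left_x) / 2)  # portrait center point
    x = x_c - 0.5 * w_p                                 # abscissa of start point
    y1 = forehead_top_y - g1 * h_p                      # first ordinate
    return x, y1
```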
Optionally, the method may further comprise the steps of:
3304. carrying out portrait segmentation on the portrait image to obtain a mask image containing a portrait region;
3305. traversing the mask image to obtain the ordinate of the highest point of the portrait;
3306. determining a second ordinate according to the ordinate of the highest point of the portrait and the height of the target image;
3307. and determining an average value of the first ordinate and the second ordinate, rounding down the average value to obtain a rounded ordinate, and taking the rounded ordinate as the ordinate of the clipping starting point.
A portrait segmentation algorithm can be used to segment the portrait image to obtain a mask image containing the portrait region. The intersection-over-union (IoU) of the portrait segmentation algorithm can reach 0.99 or more, so the ordinate of the highest point of the portrait can be accurately determined from the obtained mask image.
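The mask traversal (step 3305) and the second-ordinate computation (step 3306) can be sketched in pure Python. The list-of-rows mask representation and the function names are assumptions for illustration, not the patent's implementation; only the top-down scan and the formula Y2 = Y_D - g2 * H_Pi come from the text:

```python
def highest_portrait_y(mask):
    """Traverse a binary mask (rows of 0/1, 1 = portrait pixel) from the
    top and return the ordinate of the highest portrait point, or None
    if the mask contains no portrait pixels."""
    for y, row in enumerate(mask):
        if any(v == 1 for v in row):
            return y
    return None

def second_ordinate(y_highest, target_h, g2=0.03):
    """Y2 = Y_D - g2 * H_Pi, with g2 the lower-limit head-top margin ratio."""
    return y_highest - g2 * target_h
```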
In this embodiment of the present application, if the clipping start point, the relative height and the relative width are determined solely from the portrait position information frame obtained by face detection, the face shape may affect the calculation result; for example, a long face or a large forehead may cause the face position to be too low, while a short face or a small forehead may cause the face position to be too high. Therefore, after the first ordinate is determined, a second ordinate may further be determined. Specifically, the portrait image may be subjected to portrait segmentation to obtain a mask image containing the portrait region, the mask image being a black-and-white binary image that distinguishes the portrait from the background. The mask image is traversed to obtain the ordinate of the highest point of the portrait, as shown in fig. 1D, which is a schematic diagram of the mask image; the ordinate of the highest point of the portrait can be determined by distinguishing the portrait from the background. Further, the second ordinate may be determined according to the ordinate of the highest point of the portrait and the target image height.
The second ordinate is determined according to the ordinate of the highest point of the portrait and the height of the target image, and specifically, the second ordinate can be determined according to the following formula:
Y2 = Y_D - g2 * H_Pi
where Y2 is the second ordinate, Y_D is the ordinate of the highest point of the portrait, and g2 is the lower limit of the ratio of the distance from the forehead to the top of the cropped target image to the target image height; g1 can be regarded as the upper limit of that ratio. Y_Pi is the ordinate of the upper edge in the target size information of the target image.
Further, an average Y_m = (Y1 + Y2) / 2 of the first ordinate and the second ordinate may be determined; the average is rounded down to obtain a rounded ordinate, which is taken as the ordinate of the clipping start point, thereby obtaining the clipping start point coordinates (X, (Y1 + Y2) / 2).
Therefore, by performing portrait segmentation on the portrait image, determining the second ordinate from the mask image, and taking the average of the first ordinate and the second ordinate as the ordinate of the final clipping start point, the influence of the face shape of the portrait on the cropping effect can be reduced. In addition, by combining the mask image and the portrait position information frame, the influence of the hairstyle of the portrait on the cropping effect can be reduced, so that the accuracy of determining the ordinate of the clipping start point is improved, the calculation result is more stable, and the target image better meets user requirements.
Optionally, the method further comprises:
a1, if the abscissa of the cutting starting point is larger than 0, rounding the abscissa of the cutting starting point downwards;
A2, if the abscissa of the clipping start point is smaller than or equal to 0, triggering first prompt information, where the first prompt information is used for prompting that the portrait is shifted to the left;
A3, determining the abscissa of the cutting edge in the horizontal direction according to the portrait center point and the relative width; if the abscissa of the cutting edge is larger than the target image width, triggering second prompt information, where the second prompt information is used for prompting that the portrait is shifted to the right;
A4, if the sum of the ordinate of the clipping start point and the relative height is greater than or equal to the target image height, triggering third prompt information, where the third prompt information is used for prompting that the portrait is shifted downward.
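Checks A1 to A4 can be sketched as a single routine. The function name and prompt strings are illustrative assumptions; following the text as written, the bounds are checked against the target image width and height:

```python
def boundary_prompt(x, y, x_c, w_p, h_p, target_w, target_h):
    """Mirror of checks A1-A4 in a top-left coordinate system.  Returns a
    prompt string when the crop rectangle would leave the image, else None
    (in which case X is rounded down and cropping proceeds)."""
    if x <= 0:
        return "portrait shifted left"    # first prompt information (A2)
    x_b = x_c + 0.5 * w_p                 # cutting edge abscissa X_B (A3)
    if x_b > target_w:
        return "portrait shifted right"   # second prompt information
    if y + h_p >= target_h:
        return "portrait shifted down"    # third prompt information (A4)
    return None                           # rectangle fits (A1 applies)
```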
In this embodiment of the present application, the coordinate system adopted for cropping an image is a top-left coordinate system whose origin is at the upper left corner. Therefore, after the abscissa of the clipping start point is determined, whether it is greater than 0 may be judged. If so, the abscissa is rounded down to obtain the rounded abscissa, and the portrait image is cropped according to the rounded abscissa. If the abscissa of the clipping start point is less than or equal to 0, the portrait is shifted to the left in the cropping process, as shown in fig. 1E, which is a schematic diagram of a portrait image shifted to the left during cropping. The first prompt information may then be triggered to prompt the user that the portrait is shifted to the left, so that the user can decide whether to re-photograph the portrait image, or the user may be prompted to reselect a portrait image.
The horizontal coordinate of the cutting edge in the horizontal direction can be determined according to the center point and the relative width of the portrait, and the specific formula is as follows:
X_B = X_C + 0.5 * W_P
where X_B is the abscissa of the cutting edge in the horizontal direction. Then, whether X_B is larger than the target image width W_Pi may be judged. If not, the portrait image continues to be cropped according to the rounded abscissa; if so, the portrait is shifted to the right in the cropping process, and the second prompt information may be triggered to prompt the user that the portrait is shifted to the right, as shown in fig. 1F, which is a schematic diagram of a portrait image shifted to the right during cropping, so that the user can decide whether to re-photograph the portrait image, or the user may be prompted to reselect a portrait image.
After the ordinate of the clipping start point is determined, specifically, if the ordinate of the clipping start point is the first ordinate Y1, whether the sum (Y1 + H_P) of the first ordinate Y1 and the relative height H_P is smaller than the target image height may be judged. If so, the portrait image continues to be cropped according to the ordinate of the clipping start point; otherwise, the portrait is shifted downward in the cropping process, and the third prompt information may be triggered to prompt the user that the portrait is shifted downward, so that the user can decide whether to re-photograph the portrait image, or the user may be prompted to reselect a portrait image.
Optionally, if the ordinate of the clipping start point is the average Y_m of the first ordinate and the second ordinate, whether the sum (Y_m + H_P) of the average Y_m and the relative height H_P is smaller than the target image height may be judged. If so, the portrait image continues to be cropped according to the ordinate of the clipping start point; otherwise, the portrait is shifted downward in the cropping process, and the third prompt information may be triggered to prompt the user that the portrait is shifted downward, so that the user can decide whether to re-photograph the portrait image, or the user may be prompted to reselect a portrait image.
104. And cutting the portrait image according to the cutting control parameters to obtain a target image.
In this embodiment of the present application, after the relative height, the relative width and the clipping start point for cropping the portrait image are determined, the portrait image may be cropped from the clipping start point to obtain a rectangular image with the relative height and the relative width, and the rectangular image may then be stretched or compressed. Therefore, by first cropping and then stretching or compressing the rectangular image, errors in which the portrait is too large or too small, caused by directly cropping to obtain the target image, can be avoided, and the accuracy of the cropped target image is improved.
Optionally, in step 104, cropping the portrait image according to the cropping control parameter to obtain a target image may include the following steps:
41. cutting out a rectangular image downwards and rightwards according to the cutting starting point, the relative height and the relative width;
42. and stretching the rectangular image to obtain a target image with the height consistent with the target image and the width consistent with the target image.
A rectangular image that conforms to the aspect ratio is cropped downward and to the right according to the clipping start point, the relative height and the relative width, and the rectangular image is stretched to obtain a target image that conforms to the certificate photograph size.
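A self-contained sketch of steps 41 and 42. A real implementation would use an image library (e.g. Pillow's crop and resize); this pure-Python nearest-neighbour version, with assumed names and a list-of-rows image, only illustrates the crop-then-stretch order:

```python
def crop_and_stretch(image, x, y, w_p, h_p, target_w, target_h):
    """Crop a w_p-by-h_p rectangle with (x, y) as the upper-left vertex,
    then stretch it to target_w-by-target_h by nearest-neighbour sampling,
    so the output matches the target image width and height."""
    # Step 41: crop downward and to the right from the start point.
    rect = [row[x:x + w_p] for row in image[y:y + h_p]]
    # Step 42: stretch the rectangle to the target dimensions.
    return [[rect[r * h_p // target_h][c * w_p // target_w]
             for c in range(target_w)]
            for r in range(target_h)]
```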
Optionally, the method further comprises:
if the ordinate of the clipping starting point is smaller than 0, determining the filling height according to the target image height;
filling a transparent blank area above the portrait image according to the filling height to obtain a filled portrait image; filling a black area above the mask image according to the filling height to obtain a filled mask image; correcting the ordinate of the cutting starting point to obtain a corrected target ordinate;
And cutting the filled portrait image according to the corrected target ordinate and the filled mask image to obtain the target image.
If the ordinate of the clipping start point is smaller than 0, the head area of the portrait is insufficient for cropping, and it is difficult to crop a target image in which the distance between the top of the head and the top of the target image meets the requirement, so the head area of the portrait needs to be compensated. The filling height is determined according to the target image height. First, a compensation coefficient g0 can be determined according to the ordinate of the clipping start point. Specifically, if the ordinate of the clipping start point is the first ordinate Y1, the compensation coefficient g0 can be determined to be a value larger than g1; for example, if g1 is 0.15, g0 can be determined to be 0.16. If the ordinate of the clipping start point is the average value Y_m of the first ordinate and the second ordinate, the compensation coefficient g0 can be determined to be a value larger than g2; for example, if g2 is 0.03, g0 can be determined to be 0.04.
Then, the filling height is determined according to the compensation coefficient g0 and the target image height, and the filling height can be calculated according to the following formula:
fill height = g0 * H_Pi
Then, a transparent blank area with a height of g0 * H_Pi and a width of W_P is filled above the portrait image, referring to fig. 1G, which is a schematic diagram illustrating compensation of a portrait image; and a black area with a height of g0 * H_Pi and a width of W_P is filled above the mask image, referring to fig. 1H, which is a schematic diagram illustrating compensation of a mask image. At this time, the filled portrait image and mask image both have a new image height and image width, and the ordinate of the clipping start point can be corrected according to the filled portrait image and mask image to obtain the corrected target ordinate.
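A sketch of the compensation step under stated assumptions: the correction rule (shifting the ordinate down by the fill height) is inferred, since the text states only that the ordinate is corrected, and for simplicity rows are padded to the full image width rather than to W_P as the text specifies:

```python
import math

def pad_top(image, mask, start_y, g0, target_h, blank=255):
    """Compensate an insufficient head area (start ordinate < 0): pad
    fill = floor(g0 * H_Pi) blank rows above the portrait image and
    black rows above the mask, then shift the crop ordinate down by the
    same amount (assumed correction rule)."""
    fill = math.floor(g0 * target_h)
    width = len(image[0])
    padded_image = [[blank] * width for _ in range(fill)] + image  # blank area
    padded_mask = [[0] * width for _ in range(fill)] + mask        # black area
    return padded_image, padded_mask, start_y + fill               # corrected Y
```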
Therefore, the ordinate of the cutting starting point can be more accurately determined by compensating the portrait image and the mask image, so that the target image meeting the requirements is cut out, and the intelligence of the cut image is improved.
It can be seen that in the embodiment of the present application, a portrait image is obtained; performing face detection on the portrait image to obtain a portrait position information frame; determining a cutting control parameter for cutting the portrait image according to the portrait position information frame; and cutting the portrait image according to the cutting control parameters to obtain a target image, so that the cutting control parameters for cutting the portrait image are determined according to the portrait position information frame, more certificate photo sizes and different facial forms can be adapted, the image meeting the requirements is cut, and the intelligence of certificate photo cutting is improved.
Referring to fig. 2, fig. 2 is a flowchart of an image processing method provided in an embodiment of the present application, and the method is applied to an electronic device, and includes:
A portrait image is acquired. Face detection is performed on the portrait image to obtain a portrait position information frame, where the portrait position information frame includes a forehead feature point Y_T, a chin feature point, a face left edge feature point X_l and a face right edge feature point X_r. A portrait width W_f = X_r - X_l is determined according to the left edge and the right edge of the face. A relative height H_P = a * W_f is determined according to the portrait width, and a first ordinate Y1 = Y_T - g1 * H_P is determined according to the upper edge of the forehead and the relative height; the first ordinate is taken as the ordinate Y = Y1 of the clipping start point. Whether the sum (Y + H_P) of the ordinate Y of the clipping start point and the relative height H_P is smaller than the target image height H_Pi is judged; if not, the portrait is shifted downward in the cropping process, and the third prompt information may be triggered to prompt the user that the portrait is shifted downward. If so, whether the ordinate Y of the clipping start point is smaller than 0 is judged; if not, the height of the portrait image is sufficient for cropping; if so, the filling height is determined according to the target image height, the portrait image and the mask image are filled according to the filling height, and the ordinate Y of the clipping start point is corrected according to the filled portrait image and mask image.
A portrait center point X_C = (X_r + X_l) / 2 in the horizontal direction is determined. Target size information of the target image is determined, where the target size information includes a target image height H_Pi and a target image width W_Pi. A relative width W_P is determined according to the ratio of the target image height H_Pi to the target image width W_Pi and the relative height H_P, and the abscissa X = X_C - 0.5 * W_P of the clipping start point is determined according to the portrait center point and the relative width. Whether (X_C + 0.5 * W_P) is larger than the target image width W_Pi is judged; if so, the portrait is shifted to the right in the cropping process, and the second prompt information may be triggered to prompt the user that the portrait is shifted to the right; if not, whether the abscissa X of the clipping start point is larger than 0 is judged. If not, the portrait is shifted to the left in the cropping process, and the first prompt information may be triggered to prompt the user that the portrait is shifted to the left; if so, the abscissa of the clipping start point is rounded down to obtain the rounded abscissa.
A rectangular image with a width of W_P and a height of H_P is cropped downward and to the right, with the clipping start point (X, Y) as the upper-left vertex, and the rectangular image is stretched to obtain a target image consistent with the target image height H_Pi and the target image width W_Pi.
The specific implementation process of the above steps may refer to corresponding descriptions in steps 101 to 104, which are not repeated here.
It can be seen that, in the embodiment of the present application, a portrait image is obtained; face detection is performed on the portrait image to obtain a portrait position information frame; the relative height, the relative width and the clipping start point for cropping the portrait image are determined according to the portrait position information frame; and the portrait image is cropped accordingly to obtain the target image. In addition, whether the portrait image meets the requirements is prompted by flexibly judging whether the abscissa and the ordinate of the cutting edge meet the corresponding conditions, so that the intelligence of certificate-photo cropping is improved.
Referring to fig. 3, fig. 3 is a schematic flow chart of another image processing method according to an embodiment of the present application, which is consistent with fig. 1B and is applied to an electronic device, where the method includes:
A portrait image is acquired. Portrait segmentation is performed on the portrait image to obtain a mask image containing the portrait region, and the mask image is traversed to obtain the ordinate Y_D of the highest point of the portrait. Face detection is performed on the portrait image to obtain a portrait position information frame, where the portrait position information frame includes a forehead feature point Y_T, a chin feature point, a face left edge feature point X_l and a face right edge feature point X_r. A portrait width W_f = X_r - X_l is determined according to the left edge and the right edge of the face. A relative height H_P = a * W_f is determined according to the portrait width, and a first ordinate Y1 = Y_T - g1 * H_P is determined according to the upper edge of the forehead and the relative height. A second ordinate Y2 = Y_D - g2 * H_Pi is determined according to the ordinate Y_D of the highest point of the portrait and the target image height H_Pi. An average Y_m = (Y1 + Y2) / 2 of the first ordinate and the second ordinate is determined, and the average Y_m is taken as the ordinate Y = Y_m of the clipping start point. Whether the sum (Y + H_P) of the ordinate Y of the clipping start point and the relative height H_P is smaller than the target image height is judged; if not, the portrait is shifted downward in the cropping process, and the third prompt information may be triggered to prompt the user that the portrait is shifted downward. If so, whether the ordinate Y of the clipping start point is smaller than 0 is judged; if so, the filling height is determined according to the target image height, the portrait image and the mask image are filled according to the filling height, and the ordinate Y of the clipping start point is corrected according to the filled portrait image and mask image.
A portrait center point X_C = (X_r + X_l) / 2 in the horizontal direction is determined. Target size information of the target image is determined, where the target size information includes a target image height H_Pi and a target image width W_Pi. A relative width W_P is determined according to the ratio of the target image height H_Pi to the target image width W_Pi and the relative height H_P, and the abscissa X = X_C - 0.5 * W_P of the clipping start point is determined according to the portrait center point and the relative width. Whether (X_C + 0.5 * W_P) is larger than the target image width is judged; if so, the portrait is shifted to the right in the cropping process, and the second prompt information may be triggered to prompt the user that the portrait is shifted to the right; if not, whether the abscissa X of the clipping start point is larger than 0 is judged. If not, the portrait is shifted to the left in the cropping process, and the first prompt information may be triggered to prompt the user that the portrait is shifted to the left; if so, the abscissa of the clipping start point is rounded down to obtain the rounded abscissa.
A rectangular image with a width of W_P and a height of H_P is cropped downward and to the right, with the clipping start point (X, Y) as the upper-left vertex, and the rectangular image is stretched to obtain a target image consistent with the target image height H_Pi and the target image width W_Pi.
It can be seen that, by combining the portrait position information frame obtained by face detection with the mask image obtained by portrait segmentation to determine the clipping start point, the relative height and the relative width, errors that may be caused by using a single portrait position information frame or a single mask image alone can be avoided. If the clipping start point, the relative height and the relative width are determined solely from the portrait position information frame obtained by face detection, the face shape may affect the calculation result; for example, a long face or a large forehead may cause the face position to be too low, while a short face or a small forehead may cause the face position to be too high. If the relative height is determined by the mask image alone, the hairstyle of the figure may affect the calculation result; for example, tall hair may result in a face position that is too low, and a bald head may result in a face position that is too high. Therefore, determining the clipping start point, the relative height and the relative width by combining the portrait position information frame obtained by face detection and the mask image obtained by portrait segmentation can effectively avoid the defects caused by the above situations.
It can be seen that in the embodiment of the present application, a portrait image is obtained; dividing the portrait image to obtain a mask image containing a portrait region; performing face detection on the portrait image to obtain a portrait position information frame; determining the relative height and the relative width for cutting the portrait image according to the portrait position information frame, and determining a cutting starting point according to the portrait position information frame and the mask image; and in addition, whether the portrait image meets the requirements or not is prompted by flexibly judging whether the abscissa and the ordinate of the cutting edge meet the corresponding conditions or not, so that the intelligence of the certificate photo cutting is improved.
The following is a device for implementing the foregoing image processing method, specifically as follows:
in accordance with the foregoing, referring to fig. 4, fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present application, where the electronic device includes: a processor 410, a communication interface 430, and a memory 420; and one or more programs 421, the one or more programs 421 being stored in the memory 420 and configured to be executed by the processor, the programs 421 comprising instructions for performing the steps of:
acquiring a portrait image;
performing face detection on the portrait image to obtain a portrait position information frame;
determining a cutting control parameter for cutting the portrait image according to the portrait position information frame;
and cutting the portrait image according to the cutting control parameters to obtain a target image.
In one possible example, the clipping control parameters include: a relative height, a relative width, and a clipping start point for clipping the portrait image; in the aspect of determining a clipping control parameter for clipping the portrait image according to the portrait position information frame, the program 421 further includes instructions for performing the following steps:
Determining target size information of the target image, wherein the target size information comprises a target image height and a target image width;
determining the relative height and the relative width for cutting the portrait image according to the portrait position information frame and the target size information;
and determining a clipping starting point for clipping the portrait image according to the portrait position information frame and the target size information.
In one possible example, the portrait location information frame includes a left face edge and a right face edge, and the program 421 includes instructions for performing the following steps in determining a relative height and a relative width to crop the portrait image based on the portrait location information frame and the target size information:
determining a portrait width according to the left edge of the face and the right edge of the face;
determining the relative height according to the portrait width;
the relative width is determined from the target image height, the ratio of the target image width, and the relative height.
In one possible example, the portrait location information frame includes a face left edge, a face right edge, and a forehead upper edge, and the program 421 includes instructions for performing the following steps in determining a clipping start point for clipping the portrait image according to the portrait location information frame and the target size information:
Determining a portrait center point in the horizontal direction according to the left edge of the face and the right edge of the face;
determining the abscissa of the clipping starting point according to the portrait center point and the relative width;
and determining a first ordinate according to the upper edge of the forehead and the relative height, and taking the first ordinate as the ordinate of the cutting starting point.
In one possible example, the program 421 further includes instructions for performing the steps of:
carrying out portrait segmentation on the portrait image to obtain a mask image containing a portrait region;
traversing the mask image to obtain the ordinate of the highest point of the portrait;
determining a second ordinate according to the ordinate of the highest point of the portrait and the height of the target image;
and determining an average value of the first ordinate and the second ordinate, rounding down the average value to obtain a rounded ordinate, and taking the rounded ordinate as the ordinate of the clipping starting point.
In one possible example, the program 421 further includes instructions for performing the steps of:
if the abscissa of the cutting starting point is greater than 0, rounding down the abscissa of the cutting starting point;
if the abscissa of the cutting starting point is smaller than or equal to 0, triggering first prompt information, wherein the first prompt information is used for prompting that the portrait is shifted to the left;
determining the abscissa of the cutting edge in the horizontal direction according to the portrait center point and the relative width; if the abscissa of the cutting edge is larger than the width of the target image, triggering second prompt information, wherein the second prompt information is used for prompting that the portrait is shifted to the right;
and if the sum of the ordinate of the cutting starting point and the relative height is greater than or equal to the height of the target image, triggering third prompt information, wherein the third prompt information is used for prompting that the portrait is shifted downward.
In one possible example, the program 421 further includes instructions for performing the steps of:
if the ordinate of the clipping starting point is smaller than 0, determining the filling height according to the target image height;
filling a transparent blank area above the portrait image according to the filling height to obtain a filled portrait image; filling a black area above the mask image according to the filling height to obtain a filled mask image; correcting the ordinate of the cutting starting point to obtain a corrected target ordinate;
And cutting the filled portrait image according to the corrected target ordinate and the filled mask image to obtain the target image.
In one possible example, in the aspect of cropping the portrait image according to the cropping control parameter to obtain a target image, the program 421 includes instructions for:
cutting out a rectangular image extending downwards and rightwards from the cutting starting point, according to the relative height and the relative width;
and stretching the rectangular image to obtain a target image whose height is consistent with the target image height and whose width is consistent with the target image width.
Referring to fig. 5A, fig. 5A is a schematic structural diagram of an image processing apparatus 500 according to the present embodiment, where the image processing apparatus 500 is applied to an electronic device, the apparatus 500 includes an acquisition unit 501, a detection unit 502, a determination unit 503 and a clipping unit 504, and in this embodiment,
the acquiring unit 501 is configured to acquire a portrait image;
the detection unit 502 is configured to perform face detection on the portrait image to obtain a portrait position information frame;
a determining unit 503, configured to determine a clipping control parameter for clipping the portrait image according to the portrait position information frame;
And a clipping unit 504, configured to clip the portrait image according to the clipping control parameter, so as to obtain a target image.
Optionally, the clipping control parameters include: a relative height, a relative width, and a clipping start point for clipping the portrait image; in terms of determining, according to the portrait location information frame, the clipping control parameters for clipping the portrait image, the determining unit 503 is specifically configured to:
determining target size information of the target image, wherein the target size information comprises a target image height and a target image width;
determining the relative height and the relative width for cutting the portrait image according to the portrait position information frame and the target size information;
and determining a clipping starting point for clipping the portrait image according to the portrait position information frame and the target size information.
Optionally, the portrait location information frame includes a left face edge and a right face edge, and the determining unit 503 is specifically configured to:
Determining a portrait width according to the left edge of the face and the right edge of the face;
determining the relative height according to the portrait width;
and determining the relative width according to the ratio of the target image height to the target image width and the relative height.
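The relative-size computation described above can be sketched as follows. The proportionality factor between the detected face width and the crop height is a hypothetical parameter, since the text gives the relationship only in prose, not as a concrete value:

```python
def crop_size_from_face(face_left: int, face_right: int,
                        target_h: int, target_w: int,
                        height_per_face_width: float = 3.0):
    """Derive the relative crop height/width from the detected face edges.

    The portrait width is the span between the left and right face edges;
    the relative height is proportional to it (the factor is a hypothetical
    choice), and the relative width follows from the target image's aspect
    ratio so the crop can later be scaled without distortion.
    """
    portrait_width = face_right - face_left
    relative_height = portrait_width * height_per_face_width
    # Keep the crop's aspect ratio equal to the target's width/height ratio.
    relative_width = relative_height * target_w / target_h
    return relative_height, relative_width
```

For a 100-pixel-wide face and a 413x826 target, this yields a 300x150 crop box under the assumed factor.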
Optionally, the portrait location information frame includes a left face edge, a right face edge, and an upper forehead edge, and in the aspect of determining, according to the portrait location information frame and the target size information, a clipping start point for clipping the portrait image, the determining unit 503 is specifically configured to:
determining a portrait center point in the horizontal direction according to the left edge of the face and the right edge of the face;
determining the abscissa of the clipping starting point according to the portrait center point and the relative width;
and determining a first ordinate according to the upper edge of the forehead and the relative height, and taking the first ordinate as the ordinate of the cutting starting point.
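A minimal sketch of the start-point computation above, assuming the headroom left above the forehead is a fixed fraction of the relative height (the fraction itself is an assumption, not stated in the text):

```python
def crop_start_point(face_left: int, face_right: int, forehead_top: int,
                     relative_height: float, relative_width: float,
                     headroom_ratio: float = 0.1):
    """Return (abscissa, first ordinate) of the cutting starting point."""
    # Horizontal center of the portrait, from the face edges.
    center_x = (face_left + face_right) / 2
    # Start the crop half the relative width to the left of the center.
    start_x = center_x - relative_width / 2
    # First candidate ordinate: some headroom above the forehead's upper
    # edge; the headroom ratio is a hypothetical parameter.
    first_y = forehead_top - relative_height * headroom_ratio
    return start_x, first_y
```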
Optionally, as shown in fig. 5B, fig. 5B is a modified structure of the image processing apparatus shown in fig. 5A; compared with fig. 5A, the apparatus may further include: a segmentation unit 505 and a processing unit 506, wherein,
the dividing unit 505 is configured to perform image division on the portrait image to obtain a mask image including a portrait area;
The processing unit 506 is configured to traverse the mask image to obtain an ordinate of a highest point of the portrait;
the determining unit 503 is further configured to determine a second ordinate according to the ordinate of the highest point of the portrait and the target image height; and determining an average value of the first ordinate and the second ordinate, rounding down the average value to obtain a rounded ordinate, and taking the rounded ordinate as the ordinate of the clipping starting point.
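The mask traversal and ordinate averaging might look like the sketch below; the relation between the target image height and the second candidate ordinate is simplified to a hypothetical headroom fraction, since the exact formula is not given in the text:

```python
import math

def refine_start_ordinate(mask, first_y: float, target_h: int,
                          headroom_ratio: float = 0.1) -> int:
    """Find the portrait's highest point in a binary mask, derive a second
    candidate ordinate from it, and average the two candidates, rounding
    down (raises StopIteration if the mask contains no portrait pixels)."""
    # Traverse rows top to bottom; the first row with any set pixel is the
    # highest point of the portrait.
    top_y = next(y for y, row in enumerate(mask) if any(row))
    # Second candidate: leave headroom above the highest portrait pixel;
    # tying it to the target height is an illustrative simplification.
    second_y = top_y - target_h * headroom_ratio
    return math.floor((first_y + second_y) / 2)
```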
Optionally, the processing unit 506 is further configured to:
if the abscissa of the cutting starting point is greater than 0, rounding down the abscissa of the cutting starting point;
if the abscissa of the cutting starting point is less than or equal to 0, triggering first prompt information, wherein the first prompt information is used for prompting that the portrait is positioned too far to the left;
determining the abscissa of the cutting edge in the horizontal direction according to the portrait center point and the relative width; if the abscissa of the cutting edge is larger than the width of the target image, triggering second prompt information, wherein the second prompt information is used for prompting that the portrait is positioned too far to the right;
and if the sum of the ordinate of the cutting starting point and the relative height is greater than or equal to the height of the target image, triggering third prompt information, wherein the third prompt information is used for prompting that the portrait should be moved downward.
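The three boundary checks can be sketched as below. The translation leaves ambiguous whether the width and height bounds refer to the target image or the source portrait image; this sketch tests against the image being cropped, and the returned strings merely stand in for the prompt messages:

```python
def boundary_prompt(start_x: float, start_y: float, center_x: float,
                    relative_width: float, relative_height: float,
                    image_w: int, image_h: int):
    """Return an illustrative prompt string if the crop box would leave
    the image, or None if it fits."""
    # Crop box would extend past the left edge.
    if start_x <= 0:
        return "portrait too far left"
    # Right cutting edge, half the relative width right of the center.
    right_edge = center_x + relative_width / 2
    if right_edge > image_w:
        return "portrait too far right"
    # Crop box would extend past the bottom edge.
    if start_y + relative_height >= image_h:
        return "move portrait down"
    return None  # crop box fits; no prompt needed
```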
Optionally, as shown in fig. 5C, fig. 5C is a further modified structure of the image processing apparatus shown in fig. 5A; compared with fig. 5A, the apparatus may further include: a filling unit 507, wherein,
the determining unit 503 is further configured to determine a filling height according to the target image height if the ordinate of the clipping start point is less than 0;
the filling unit 507 is configured to fill a transparent blank area above the portrait image according to the filling height, so as to obtain a filled portrait image; filling a black area above the mask image according to the filling height to obtain a filled mask image; correcting the ordinate of the cutting starting point to obtain a corrected target ordinate;
the clipping unit 504 is further configured to clip the filled portrait image according to the corrected target ordinate and the filled mask image, so as to obtain the target image.
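A simplified sketch of the padding step, with images as plain lists of pixel rows. The source derives the fill height from the target image height; here it is reduced to the minimum number of rows needed to make the start ordinate non-negative, and the "transparent" and "black" fill values are illustrative stand-ins for RGBA and greyscale pixels:

```python
def pad_above(portrait, mask, start_y: int):
    """Pad both images above when the crop start ordinate is negative,
    then correct the ordinate to 0; return images unchanged otherwise."""
    if start_y >= 0:
        return portrait, mask, start_y
    fill_height = -start_y            # rows needed to cover the deficit
    width = len(portrait[0])
    transparent, black = (0, 0, 0, 0), 0
    # Transparent rows above the portrait image, black rows above the mask.
    portrait = [[transparent] * width for _ in range(fill_height)] + portrait
    mask = [[black] * width for _ in range(fill_height)] + mask
    return portrait, mask, 0          # corrected target ordinate
```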
Optionally, in the aspect of clipping the portrait image according to the clipping control parameter to obtain a target image, the clipping unit 504 is specifically configured to:
cutting out a rectangular image extending downwards and rightwards from the cutting starting point, according to the relative height and the relative width;
and stretching the rectangular image to obtain a target image whose height is consistent with the target image height and whose width is consistent with the target image width.
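The final crop-and-stretch can be illustrated with nearest-neighbour sampling over lists of pixel rows; a real implementation would crop and resize with an image library resampler instead:

```python
def crop_and_stretch(image, start_x: int, start_y: int,
                     rel_w: int, rel_h: int,
                     target_w: int, target_h: int):
    """Cut a rel_h x rel_w rectangle extending down and right from the
    start point, then stretch it to target_h x target_w."""
    # Rectangular crop: rows start_y..start_y+rel_h, cols start_x..start_x+rel_w.
    rect = [row[start_x:start_x + rel_w]
            for row in image[start_y:start_y + rel_h]]
    # Nearest-neighbour stretch to the target dimensions.
    return [[rect[r * rel_h // target_h][c * rel_w // target_w]
             for c in range(target_w)]
            for r in range(target_h)]
```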
It can be seen that the image processing apparatus described in the embodiments of the present application acquires a portrait image; performs face detection on the portrait image to obtain a portrait position information frame; determines, according to the portrait position information frame, a cutting control parameter for cutting the portrait image; and cuts the portrait image according to the cutting control parameter to obtain a target image. Because the cutting control parameter is determined according to the portrait position information frame, the apparatus can adapt to more certificate-photo sizes and different face shapes, cut out an image that meets the requirements, and improve the intelligence of certificate-photo cutting.
It may be understood that the functions of each program module of the image processing apparatus of the present embodiment may be specifically implemented according to the method in the foregoing method embodiment, and the specific implementation process may refer to the relevant description of the foregoing method embodiment, which is not repeated herein.
The embodiment of the application also provides a computer storage medium, where the computer storage medium stores a computer program for electronic data exchange, and the computer program causes a computer to execute part or all of the steps of any one of the methods described in the foregoing method embodiments; the computer includes an electronic device.
Embodiments of the present application also provide a computer program product comprising a non-transitory computer-readable storage medium storing a computer program operable to cause a computer to perform some or all of the steps of any one of the methods described in the method embodiments above. The computer program product may be a software installation package, said computer comprising an electronic device.
It should be noted that, for simplicity of description, the foregoing method embodiments are all expressed as a series of action combinations, but it should be understood by those skilled in the art that the present application is not limited by the order of actions described, as some steps may be performed in other order or simultaneously in accordance with the present application. Further, those skilled in the art will also appreciate that the embodiments described in the specification are all preferred embodiments, and that the acts and modules referred to are not necessarily required in the present application.
In the foregoing embodiments, the descriptions of the embodiments are emphasized, and for parts of one embodiment that are not described in detail, reference may be made to related descriptions of other embodiments.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative; the division of units is merely a division of logical functions, and there may be other manners of division in actual implementation, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the couplings or direct couplings or communication connections shown or discussed may be indirect couplings or communication connections via some interfaces, devices or units, and may be in electrical or other forms.
The units described above as separate components may or may not be physically separate, and components shown as units may or may not be physical units, may be located in one place, or may be distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units described above, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer-readable memory. Based on such understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The software product is stored in a memory and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the various embodiments of the present application. The aforementioned memory includes: a USB flash drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, an optical disk, or other media capable of storing program code.
Those of ordinary skill in the art will appreciate that all or part of the steps in the various methods of the above embodiments may be implemented by a program instructing associated hardware, and the program may be stored in a computer-readable memory, which may include: a flash disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The foregoing describes the embodiments of the present application in detail. Specific examples are used herein to illustrate the principles and implementations of the present application, and the above embodiments are intended only to help understand the method and the core idea of the present application. Meanwhile, a person skilled in the art may, according to the idea of the present application, make changes to the specific implementations and the application scope. In summary, the content of this specification should not be construed as limiting the present application.

Claims (8)

1. An image processing method, the method comprising:
acquiring a portrait image;
performing face detection on the portrait image to obtain a portrait position information frame;
determining a cutting control parameter for cutting the portrait image according to the portrait position information frame;
Cutting the portrait image according to the cutting control parameters to obtain a target image;
wherein the clipping control parameters include: a relative height, a relative width, and a cutting starting point for cutting the portrait image; and determining, according to the portrait position information frame, cutting control parameters for cutting the portrait image comprises the following steps:
determining target size information of the target image, wherein the target size information comprises a target image height and a target image width;
determining the relative height and the relative width for cutting the portrait image according to the portrait position information frame and the target size information;
determining a clipping starting point for clipping the portrait image according to the portrait position information frame and the target size information;
wherein the portrait position information frame comprises a left edge of a human face, a right edge of the human face and an upper edge of a forehead; and determining, according to the portrait position information frame and the target size information, a clipping starting point for clipping the portrait image comprises:
determining a portrait center point in the horizontal direction according to the left edge of the face and the right edge of the face;
Determining the abscissa of the clipping starting point according to the portrait center point and the relative width;
determining a first ordinate according to the upper edge of the forehead and the relative height, and taking the first ordinate as the ordinate of the cutting starting point;
wherein the method further comprises:
carrying out portrait segmentation on the portrait image to obtain a mask image containing a portrait region;
traversing the mask image to obtain the ordinate of the highest point of the portrait;
determining a second ordinate according to the ordinate of the highest point of the portrait and the height of the target image;
and determining an average value of the first ordinate and the second ordinate, rounding down the average value to obtain a rounded ordinate, and taking the rounded ordinate as the ordinate of the clipping starting point.
2. The method of claim 1, wherein the portrait location information frame includes a left face edge and a right face edge, wherein determining a relative height and a relative width to crop the portrait image based on the portrait location information frame and the target size information includes:
determining a portrait width according to the left edge of the face and the right edge of the face;
Determining the relative height according to the portrait width;
and determining the relative width according to the ratio of the target image height to the target image width and the relative height.
3. The method according to claim 1 or 2, characterized in that the method further comprises:
if the abscissa of the cutting starting point is greater than 0, rounding down the abscissa of the cutting starting point;
if the abscissa of the cutting starting point is less than or equal to 0, triggering first prompt information, wherein the first prompt information is used for prompting that the portrait is positioned too far to the left;
determining the abscissa of the cutting edge in the horizontal direction according to the portrait center point and the relative width; if the abscissa of the cutting edge is larger than the width of the target image, triggering second prompt information, wherein the second prompt information is used for prompting that the portrait is positioned too far to the right;
and if the sum of the ordinate of the cutting starting point and the relative height is greater than or equal to the height of the target image, triggering third prompt information, wherein the third prompt information is used for prompting that the portrait should be moved downward.
4. The method according to claim 1 or 2, characterized in that the method further comprises:
if the ordinate of the clipping starting point is smaller than 0, determining the filling height according to the target image height;
Filling a transparent blank area above the portrait image according to the filling height to obtain a filled portrait image; filling a black area above the mask image according to the filling height to obtain a filled mask image; correcting the ordinate of the cutting starting point to obtain a corrected target ordinate;
and cutting the filled portrait image according to the corrected target ordinate and the filled mask image to obtain the target image.
5. The method according to claim 1 or 2, wherein cropping the portrait image according to the cropping control parameter to obtain a target image includes:
cutting out a rectangular image extending downwards and rightwards from the cutting starting point, according to the relative height and the relative width;
and stretching the rectangular image to obtain a target image whose height is consistent with the target image height and whose width is consistent with the target image width.
6. An image processing apparatus, characterized in that the apparatus comprises:
the acquisition unit is used for acquiring the portrait image;
the detection unit is used for carrying out face detection on the portrait image to obtain a portrait position information frame;
The determining unit is used for determining a cutting control parameter for cutting the portrait image according to the portrait position information frame;
the clipping unit is used for clipping the portrait image according to the clipping control parameters to obtain a target image;
wherein the clipping control parameters include: a relative height, a relative width, and a cutting starting point for cutting the portrait image; and determining, according to the portrait position information frame, cutting control parameters for cutting the portrait image comprises:
determining target size information of the target image, wherein the target size information comprises a target image height and a target image width;
determining the relative height and the relative width for cutting the portrait image according to the portrait position information frame and the target size information;
determining a clipping starting point for clipping the portrait image according to the portrait position information frame and the target size information;
wherein the portrait position information frame comprises a left edge of a human face, a right edge of the human face and an upper edge of a forehead; and determining, according to the portrait position information frame and the target size information, a clipping starting point for clipping the portrait image comprises:
Determining a portrait center point in the horizontal direction according to the left edge of the face and the right edge of the face;
determining the abscissa of the clipping starting point according to the portrait center point and the relative width;
determining a first ordinate according to the upper edge of the forehead and the relative height, and taking the first ordinate as the ordinate of the cutting starting point;
wherein the apparatus is further configured to:
carrying out portrait segmentation on the portrait image to obtain a mask image containing a portrait region;
traversing the mask image to obtain the ordinate of the highest point of the portrait;
determining a second ordinate according to the ordinate of the highest point of the portrait and the height of the target image;
and determining an average value of the first ordinate and the second ordinate, rounding down the average value to obtain a rounded ordinate, and taking the rounded ordinate as the ordinate of the clipping starting point.
7. An electronic device comprising a processor, a memory, a communication interface, and one or more programs, the memory to store one or more programs and configured to be executed by the processor, the program comprising instructions to perform the steps in the method of any of claims 1-5.
8. A computer-readable storage medium, characterized in that a computer program for electronic data exchange is stored, wherein the computer program causes a computer to perform the method according to any one of claims 1-5.
CN202010426843.3A 2020-05-19 2020-05-19 Image processing method, device, electronic equipment and storage medium Active CN111626166B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010426843.3A CN111626166B (en) 2020-05-19 2020-05-19 Image processing method, device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010426843.3A CN111626166B (en) 2020-05-19 2020-05-19 Image processing method, device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN111626166A CN111626166A (en) 2020-09-04
CN111626166B true CN111626166B (en) 2023-06-09

Family

ID=72258957

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010426843.3A Active CN111626166B (en) 2020-05-19 2020-05-19 Image processing method, device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111626166B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112367521B (en) * 2020-10-27 2022-08-19 广州华多网络科技有限公司 Display screen content sharing method and device, computer equipment and storage medium
CN112348832A (en) * 2020-11-05 2021-02-09 Oppo广东移动通信有限公司 Picture processing method and device, electronic equipment and storage medium
CN112966578A (en) * 2021-02-23 2021-06-15 Oppo广东移动通信有限公司 Image processing method, image processing device, electronic equipment and storage medium
CN113538460B (en) * 2021-07-12 2022-04-08 中国科学院地质与地球物理研究所 Shale CT image cutting method and system
CN113628229B (en) * 2021-08-04 2022-12-09 展讯通信(上海)有限公司 Image cropping method and related product
CN117412158A (en) * 2023-10-09 2024-01-16 广州翼拍联盟网络技术有限公司 Method, device, equipment and medium for processing photographed original image into multiple credentials

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101216881A (en) * 2007-12-28 2008-07-09 北京中星微电子有限公司 A method and device for automatic image acquisition
CN102592260A (en) * 2011-12-26 2012-07-18 广州商景网络科技有限公司 Certificate image cutting method and system
CN104361329A (en) * 2014-11-25 2015-02-18 成都品果科技有限公司 Photo cropping method and system based on face recognition
CN104392202A (en) * 2014-10-11 2015-03-04 北京中搜网络技术股份有限公司 Image-identification-based automatic cutting method
CN106020745A (en) * 2016-05-16 2016-10-12 北京清软海芯科技有限公司 Human face identification-based pancake printing path generation method and apparatus
CN108921856A (en) * 2018-06-14 2018-11-30 北京微播视界科技有限公司 Image cropping method, apparatus, electronic equipment and computer readable storage medium
CN109413326A (en) * 2018-09-18 2019-03-01 Oppo(重庆)智能科技有限公司 Camera control method and Related product
CN110223301A (en) * 2019-03-01 2019-09-10 华为技术有限公司 A kind of image cropping method and electronic equipment
CN110222573A (en) * 2019-05-07 2019-09-10 平安科技(深圳)有限公司 Face identification method, device, computer equipment and storage medium
CN110853071A (en) * 2018-08-21 2020-02-28 Tcl集团股份有限公司 Image editing method and terminal equipment

Also Published As

Publication number Publication date
CN111626166A (en) 2020-09-04

Similar Documents

Publication Publication Date Title
CN111626166B (en) Image processing method, device, electronic equipment and storage medium
US11115591B2 (en) Photographing method and mobile terminal
CN109600550B (en) Shooting prompting method and terminal equipment
EP2816545A2 (en) Method and apparatus for protecting eyesight
CN108076290B (en) Image processing method and mobile terminal
CN109348135A (en) Photographic method, device, storage medium and terminal device
CN110365907B (en) Photographing method and device and electronic equipment
CN111223047B (en) Image display method and electronic equipment
CN109685915B (en) Image processing method and device and mobile terminal
CN104484858B (en) Character image processing method and processing device
WO2019011091A1 (en) Photographing reminding method and device, terminal and computer storage medium
CN108307106B (en) Image processing method and device and mobile terminal
CN108683850B (en) Shooting prompting method and mobile terminal
CN108038825B (en) Image processing method and mobile terminal
JP2016522437A (en) Image display method, image display apparatus, terminal, program, and recording medium
CN108881544B (en) Photographing method and mobile terminal
CN108377339A (en) A kind of photographic method and camera arrangement
CN108833779B (en) Shooting control method and related product
CN109978996B (en) Method, device, terminal and storage medium for generating expression three-dimensional model
CN107888984A (en) Short video broadcasting method and device
CN111078347A (en) Screen display method and electronic equipment
CN112581358A (en) Training method of image processing model, image processing method and device
CN110555815B (en) Image processing method and electronic equipment
CN110213456B (en) Scanned document correction method, electronic device, and computer-readable storage medium
CN109639981B (en) Image shooting method and mobile terminal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant