CN111626166A - Image processing method, image processing device, electronic equipment and storage medium

Image processing method, image processing device, electronic equipment and storage medium

Info

Publication number
CN111626166A
Authority
CN
China
Prior art keywords
portrait
image
cutting
determining
height
Prior art date
Legal status
Granted
Application number
CN202010426843.3A
Other languages
Chinese (zh)
Other versions
CN111626166B (en)
Inventor
刘鹏
Current Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN202010426843.3A
Publication of CN111626166A
Application granted
Publication of CN111626166B
Legal status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161: Detection; Localisation; Normalisation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/10: Segmentation; Edge detection
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168: Feature extraction; Face representation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/10: Image acquisition modality
    • G06T2207/10004: Still image; Photographic image
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/30: Subject of image; Context of image processing
    • G06T2207/30196: Human being; Person
    • G06T2207/30201: Face

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Processing Or Creating Images (AREA)
  • Image Processing (AREA)

Abstract

The embodiment of the application discloses an image processing method, an image processing device, electronic equipment and a storage medium. The method includes: acquiring a portrait image; carrying out face detection on the portrait image to obtain a portrait position information frame; determining a cutting control parameter for cutting the portrait image according to the portrait position information frame; and cutting the portrait image according to the cutting control parameter to obtain a target image. Because the cutting control parameter is determined according to the portrait position information frame, more identification photo sizes and different face shapes can be adapted to, an image meeting the requirements is cut out, and the intelligence of identification photo cutting is improved.

Description

Image processing method, image processing device, electronic equipment and storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image processing method and apparatus, an electronic device, and a storage medium.
Background
The existing certificate photo making technology generally adopts one of the following two schemes. In the first, the user is guided by a human-shaped prompt box when taking a picture: the portrait needs to be inside the prompt box, otherwise an error is displayed. In the second, the positions of the eyes are detected through face detection, the positions of the other facial organs are estimated from the eye positions, the position of the portrait is thereby determined, and the image is cut at that position. The two schemes have the following disadvantages. First, the adaptability to different certificate photo sizes is poor. The requirements on the shape and size of the certificate photo are very strict in different occasions, particularly for visa photos; the prior art needs to be adapted to each specific size separately and cannot automatically adapt to different requirements. Second, the requirements on the distance and angle between the person and the camera are high. For example, in the first scheme the portrait needs to stay within the human-shaped prompt box; a person who is too close to or too far from the camera will deviate from the prompt box, and an angle deviation can likewise prevent the portrait from filling the prompt box. Third, the adaptability to people with different face shapes is poor. The second scheme estimates the positions of the other facial organs from the eye positions, and this estimation causes a large error for a person with an unusual face shape. Fourth, the adaptability to existing photographs is poor. Because the first scheme depends on the portrait prompt box, its adaptability when cutting an existing digital photo is poor.
Disclosure of Invention
The embodiment of the application provides an image processing method, an image processing device, electronic equipment and a storage medium, which can be adapted to more certificate photo sizes and different face shapes and cut out images meeting requirements.
In a first aspect, an embodiment of the present application provides an image processing method, where the method includes:
acquiring a portrait image;
carrying out face detection on the portrait image to obtain a portrait position information frame;
determining a cutting control parameter for cutting the portrait image according to the portrait position information frame;
and cutting the portrait image according to the cutting control parameters to obtain a target image.
In a second aspect, an embodiment of the present application provides an image processing apparatus, including:
an acquisition unit for acquiring a portrait image;
the detection unit is used for carrying out face detection on the portrait image to obtain a portrait position information frame;
the determining unit is used for determining a cutting control parameter for cutting the portrait image according to the portrait position information frame;
and the cutting unit is used for cutting the portrait image according to the cutting control parameters to obtain a target image.
In a third aspect, an embodiment of the present application provides an electronic device, including a processor, a memory, a communication interface, and one or more programs, where the one or more programs are stored in the memory and configured to be executed by the processor, and the program includes instructions for executing the steps in the first aspect of the embodiment of the present application.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium, where the computer-readable storage medium stores a computer program for electronic data exchange, where the computer program enables a computer to perform some or all of the steps described in the first aspect of the embodiment of the present application.
In a fifth aspect, embodiments of the present application provide a computer program product, where the computer program product includes a non-transitory computer-readable storage medium storing a computer program, where the computer program is operable to cause a computer to perform some or all of the steps as described in the first aspect of the embodiments of the present application. The computer program product may be a software installation package.
The embodiment of the application has the following beneficial effects:
It can be seen that the image processing method, apparatus, electronic device and storage medium provided in the embodiments of the present application acquire a portrait image; carry out face detection on the portrait image to obtain a portrait position information frame; determine a cutting control parameter for cutting the portrait image according to the portrait position information frame; and cut the portrait image according to the cutting control parameter to obtain a target image. Because the cutting control parameter is determined according to the portrait position information frame, more identification photo sizes and different face shapes can be adapted to, an image meeting the requirements is cut out, and the intelligence of identification photo cutting is improved.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below. It is obvious that the drawings in the following description show only some embodiments of the present application, and that those skilled in the art can obtain other drawings from these drawings without creative effort.
Fig. 1A is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure;
fig. 1B is a schematic flowchart of an image processing method according to an embodiment of the present application;
fig. 1C is a schematic diagram illustrating a human face position information frame obtained through human face detection according to an embodiment of the present application;
FIG. 1D is a schematic illustration of a mask image provided by an embodiment of the present application;
FIG. 1E is a schematic diagram illustrating a portrait leaning to the left during cropping according to an embodiment of the present disclosure;
FIG. 1F is a schematic diagram illustrating a portrait leaning to the right during cropping according to an embodiment of the present disclosure;
fig. 1G is a schematic diagram illustrating compensation of a portrait image according to an embodiment of the present disclosure;
FIG. 1H is a schematic illustration of an exemplary compensation process for a mask image according to an embodiment of the disclosure;
FIG. 2 is a schematic flow chart of another image processing method provided in the embodiments of the present application;
fig. 3 is a schematic flowchart of an image processing method according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of an electronic device provided in an embodiment of the present application;
fig. 5A is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application;
fig. 5B is a modified structure of the image processing apparatus described in fig. 5A provided in an embodiment of the present application;
fig. 5C is a further modified structure of the image processing apparatus described in fig. 5A according to the embodiment of the present application.
Detailed Description
In order to make the technical solutions of the present application better understood, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first," "second," and the like in the description and claims of the present application and in the above-described drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
In order to facilitate understanding of the present solution, the professional vocabulary related to the present solution is explained below.
Mask: a black-and-white binary image used to distinguish the portrait from the background.
Portrait segmentation: identifying the portrait area in an image to obtain a pixel-level mask.
Detection evaluation function, i.e. intersection-over-union (IoU): the overlap ratio between the target window generated by a model and the originally marked window, which reflects the accuracy of a detection or segmentation result; the higher the value, the more accurate the result.
Upper-left coordinate system: the coordinate system used in computer image processing, with the origin at the upper left corner, the horizontal axis extending to the right and the vertical axis extending downwards.
Rounding down: directly discarding the fractional part of a non-integer value and keeping the integer part.
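As an illustration of the IoU metric and the rounding-down operation defined above, the following Python sketch (the language and the use of numpy are assumptions for illustration; the embodiments do not prescribe an implementation) computes the intersection-over-union of two binary masks and floors a non-integer value:

```python
import numpy as np

def mask_iou(pred_mask: np.ndarray, gt_mask: np.ndarray) -> float:
    """Intersection-over-union of two binary masks (non-zero = portrait, 0 = background)."""
    pred = pred_mask.astype(bool)
    gt = gt_mask.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return float(intersection) / float(union) if union > 0 else 0.0

# Rounding down: keep only the integer part of a non-negative value.
relative_width = 312.7
rounded_relative_width = int(np.floor(relative_width))  # -> 312
```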
The electronic device related to the embodiments of the present application may include various handheld devices, vehicle-mounted devices, wearable devices (smart watches, smart bracelets, wireless headsets, augmented reality/virtual reality devices, smart glasses), computing devices or other processing devices connected to wireless modems, and various forms of User Equipment (UE), Mobile Stations (MS), terminal devices (terminal device), and the like, which have wireless communication functions. For convenience of description, the above-mentioned devices are collectively referred to as electronic devices.
The following describes embodiments of the present application in detail.
Referring to fig. 1A, fig. 1A is a schematic structural diagram of an electronic device disclosed in an embodiment of the present application, the electronic device 100 includes a storage and processing circuit 110, and a sensor 170 connected to the storage and processing circuit 110, where:
the electronic device 100 may include control circuitry, which may include storage and processing circuitry 110. The storage and processing circuitry 110 may include memory, such as hard drive memory, non-volatile memory (e.g., flash memory or other electronically programmable read-only memory used to form a solid state drive, etc.), volatile memory (e.g., static or dynamic random access memory, etc.), and so on, and embodiments of the present application are not limited thereto. Processing circuitry in storage and processing circuitry 110 may be used to control the operation of electronic device 100. The processing circuitry may be implemented based on one or more microprocessors, microcontrollers, digital signal processors, baseband processors, power management units, audio codec chips, application specific integrated circuits, display driver integrated circuits, and the like.
The storage and processing circuitry 110 may be used to run software in the electronic device 100, such as an Internet browsing application, a Voice Over Internet Protocol (VOIP) telephone call application, an email application, a media playing application, operating system functions, and so forth. Such software may be used to perform control operations such as, for example, camera-based image capture, ambient light measurement based on an ambient light sensor, proximity sensor measurement based on a proximity sensor, information display functionality based on status indicators such as status indicator lights of light emitting diodes, touch event detection based on a touch sensor, functionality associated with displaying information on multiple (e.g., layered) display screens, operations associated with performing wireless communication functionality, operations associated with collecting and generating audio signals, control operations associated with collecting and processing button press event data, and other functions in the electronic device 100, to name a few.
The electronic device 100 may include input-output circuitry 150. The input-output circuit 150 may be used to enable the electronic device 100 to input and output data, i.e., to allow the electronic device 100 to receive data from an external device and also to allow the electronic device 100 to output data to the external device. The input-output circuit 150 may further include a sensor 170. The sensor 170 may include an ultrasonic fingerprint identification module, and may also include an ambient light sensor, an optical or capacitive proximity sensor, a touch sensor (for example, an optical touch sensor and/or a capacitive touch sensor, where the touch sensor may be a part of the touch display screen or may be used independently as a touch sensor structure), an acceleration sensor, and other sensors. The ultrasonic fingerprint identification module may be integrated below the screen, or may be arranged on the side or back of the electronic device, which is not limited here; the ultrasonic fingerprint identification module may be used to collect fingerprint images.
The sensor 170 may include a first camera and a second camera. The first camera may be a front camera or a rear camera, and the second camera may be an Infrared (IR) camera or a visible light camera. When the IR camera takes a picture, the pupil reflects infrared light, so the IR camera can capture a pupil image more accurately than an RGB camera. The visible light camera requires more subsequent pupil detection; its calculation precision and accuracy are higher than those of the IR camera, and its universality is better than that of the IR camera, but its calculation amount is larger.
Input-output circuit 150 may also include one or more display screens, such as display screen 130. The display 130 may include one or a combination of liquid crystal display, organic light emitting diode display, electronic ink display, plasma display, display using other display technologies. The display screen 130 may include an array of touch sensors (i.e., the display screen 130 may be a touch display screen). The touch sensor may be a capacitive touch sensor formed by a transparent touch sensor electrode (e.g., an Indium Tin Oxide (ITO) electrode) array, or may be a touch sensor formed using other touch technologies, such as acoustic wave touch, pressure sensitive touch, resistive touch, optical touch, and the like, and the embodiments of the present application are not limited thereto.
The electronic device 100 may also include an audio component 140. The audio component 140 may be used to provide audio input and output functionality for the electronic device 100. The audio components 140 in the electronic device 100 may include a speaker, a microphone, a buzzer, a tone generator, and other components for generating and detecting sound.
The communication circuit 120 may be used to provide the electronic device 100 with the capability to communicate with external devices. The communication circuit 120 may include analog and digital input-output interface circuits, and wireless communication circuits based on radio frequency signals and/or optical signals. The wireless communication circuitry in communication circuitry 120 may include radio-frequency transceiver circuitry, power amplifier circuitry, low noise amplifiers, switches, filters, and antennas. For example, the wireless Communication circuitry in Communication circuitry 120 may include circuitry to support Near Field Communication (NFC) by transmitting and receiving Near Field coupled electromagnetic signals. For example, the communication circuit 120 may include a near field communication antenna and a near field communication transceiver. The communications circuitry 120 may also include a cellular telephone transceiver and antenna, a wireless local area network transceiver circuitry and antenna, and so forth.
The electronic device 100 may further include a battery, power management circuitry, and other input-output units 160. The input-output unit 160 may include buttons, joysticks, click wheels, scroll wheels, touch pads, keypads, keyboards, cameras, light emitting diodes and other status indicators, and the like.
A user may input commands through input-output circuitry 150 to control the operation of electronic device 100, and may use output data of input-output circuitry 150 to enable receipt of status information and other outputs from electronic device 100.
Referring to fig. 1B, fig. 1B is a schematic flowchart of an image processing method according to an embodiment of the present application, and as shown in fig. 1B, the image processing method includes:
101. and acquiring a portrait image.
The portrait image is an image including a face of a user.
In a specific implementation, the obtaining of the portrait image may be shooting the portrait image through a shooting application on the electronic device, or obtaining an existing portrait image in the electronic device, and the user may select the portrait image stored in an album of the electronic device, which is not limited herein.
102. And carrying out face detection on the portrait image to obtain a portrait position information frame.
Face detection may be performed on the portrait image to obtain a plurality of feature points of the human face, where the plurality of feature points include forehead feature points, chin feature points, face left edge feature points and face right edge feature points, so that a portrait position information frame can be further determined. Referring to fig. 1C, fig. 1C is a schematic diagram illustrating a portrait position information frame obtained by face detection: the solid line frame in fig. 1C is the portrait position information frame, whose upper frame is the forehead upper edge, left frame is the face left edge, right frame is the face right edge and lower frame is the chin lower edge, and whose four vertices are therefore determined by the forehead upper edge, the chin lower edge, the face left edge and the face right edge. The dashed line frame in fig. 1C is a face detection frame set in a photographing interface of the electronic device, and is used to assist the user during photographing, so as to improve the success rate of photographing a portrait image; specifically, the user can keep the face area within the face detection frame, thereby ensuring that a portrait image with a suitable face size is photographed.
Optionally, as shown in fig. 1C, in the embodiment of the present application, among a plurality of feature points of a human face obtained by performing human face detection on a human image, detail feature points of other regions of the human face may also be included, for example, nose feature points, eye feature points, mouth feature points, and the like, where the detail feature points may be used to assist in processing the human image into a certificate photo with a higher requirement on human face details.
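Purely as an illustration of how such a portrait position information frame might be obtained, the sketch below uses OpenCV's Haar cascade face detector as a stand-in (the embodiments do not name a specific detector, and a Haar detection box only approximates the forehead upper edge, chin lower edge and face left and right edges):

```python
import cv2

def detect_face_box(image_bgr):
    """Return (X_l, Y_T, X_r, Y_D) for the largest detected face, or None."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])  # keep the largest face
    X_l, Y_T = x, y          # face left edge, forehead upper edge (approximate)
    X_r, Y_D = x + w, y + h  # face right edge, chin lower edge (approximate)
    return X_l, Y_T, X_r, Y_D
```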
103. And determining a cutting control parameter for cutting the portrait image according to the portrait position information frame.
The cutting control parameters may include a relative height, a relative width and a cutting start point for cutting the portrait image, where the relative height is the image height after the portrait image is cut and the relative width is the image width after the portrait image is cut. In a specific implementation, the portrait sizes required by identification photos of different specifications are relatively fixed, and the size and position of the face area and the body area in the identification photo are required to meet the requirements. Therefore, for a photographed portrait image, directly cutting the whole photograph to the required size may fail to satisfy the requirements on details such as the face area, the body area and even the position of the human eyes. Instead, the portrait image can first be cut according to the proportion requirements of the face area and the body area in the identification photo, and the cut image can then be stretched, zoomed or otherwise adjusted to obtain an identification photo that finally meets the size requirements.
Optionally, the cutting control parameters include a relative height, a relative width and a cutting start point for cutting the portrait image, and the step 103 of determining the cutting control parameters for cutting the portrait image according to the portrait position information frame may include the following steps:
31. determining target size information of the target image, wherein the target size information comprises a target image height and a target image width;
32. determining the relative height and the relative width for cutting the portrait image according to the portrait position information frame and the target size information;
33. and determining a cutting starting point for cutting the portrait image according to the portrait position information frame and the target size information.
The target size information is the final size information of the target image required by the user, and comprises the target image height and the target image width of the target image.
In specific implementation, the required target image height and target image width of the identification photos with different specifications are different, for example, the target image height and target image width of the 1 inch identification photo and the 2 inch identification photo are different, so that the target size information of the target image required by the current user can be determined. When a user shoots a certificate photo through electronic equipment, because different types of certificate photos have relatively fixed size requirements, the corresponding relation between the certificate photo type and the size information can be preset in the electronic equipment, and therefore, the electronic equipment can determine the corresponding target size information according to the certificate photo type selected by the user after acquiring a portrait image.
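A minimal sketch of the preset correspondence between certificate photo type and target size described above; the photo types and pixel values below are illustrative assumptions (roughly 1-inch and 2-inch photos at 300 dpi), not values given in this application:

```python
# Hypothetical mapping from certificate photo type to (target image height H_Pi, target image width W_Pi) in pixels.
TARGET_SIZES = {
    "1_inch": (413, 295),   # assumed ~25 x 35 mm at 300 dpi
    "2_inch": (579, 413),   # assumed ~35 x 49 mm at 300 dpi
}

def get_target_size(photo_type: str):
    """Look up the preset target size for the photo type selected by the user."""
    return TARGET_SIZES[photo_type]
```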
The relative height and the relative width refer to the height and width to which the portrait image needs to be cut so as to meet the aspect ratio; the cut image then needs to be stretched or compressed to the actual height and width of the target image to meet the size requirement. In a specific implementation, the position and size of the user's face in the portrait image can be known from the portrait position information frame, and the size of the face area in the target image can be determined according to the target size information and the portrait position information frame, so that the relative height and the relative width for cutting the portrait image are obtained. In this way, the target image cut according to the relative height and the relative width can meet the size requirement of the image.
Optionally, the portrait position information frame includes a left face edge and a right face edge, and in step 32, determining a relative height and a relative width for clipping the portrait image according to the portrait position information frame and the target size information may include the following steps:
3201. determining the width of a portrait according to the left edge and the right edge of the face;
3202. determining the relative height according to the portrait width;
3203. determining the relative width according to the ratio of the target image height to the target image width and the relative height.
The portrait width is determined according to the face left edge and the face right edge, specifically according to the following formula:

W_f = X_r - X_l

where W_f is the width of the portrait in the portrait image, X_r is the abscissa of the face right edge, and X_l is the abscissa of the face left edge.
Considering that the ratio of the portrait width to the width of the cut image is in direct proportion to the aspect ratio of the target image size, the following relationship can be written:

W_f / W_P = (1 / a) * (H_P / W_P)

where H_P is the relative height for cutting the portrait image, W_P is the relative width for cutting the portrait image, and a is a proportionality coefficient.

Further, the relative height may be determined according to the following equation:

H_P = a * W_f
Since the relative height and the relative width are consistent with the aspect ratio of the target image, the following holds:

H_P / W_P = H_Pi / W_Pi

where H_Pi is the target image height included in the target size information, and W_Pi is the target image width included in the target size information.
Thus, the relative width can be determined according to the following equation:

W_P = H_P * W_Pi / H_Pi

The proportionality coefficient a can be determined in advance from the results of multiple measurements; for example, a can be 1/2.6. Since pixel dimensions must be integers, after the relative width W_P is determined, W_P is rounded down to obtain the rounded relative width.
It can be seen that the required width-to-height ratio of the target image is used to limit the proportion of the portrait width within the total width of the target image, which in turn influences how much of the body torso is included in the crop. Taller target images (with a high height-to-width ratio) tend to retain more of the body torso below the head, while flatter target images (with a low height-to-width ratio) tend to retain less of it. Therefore, by using the ratio of the target image height to the target image width in the target size information to determine the proportion of the portrait width within the target image width, and combining this with the determination of the position from the top of the head to the top of the photo, the cut image can better satisfy the proportion requirements between the portrait and the body torso for target images of different aspect ratios.
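Collecting the formulas above, a sketch of the relative-size computation (using the example coefficient a = 1/2.6):

```python
import math

def relative_crop_size(X_l, X_r, H_Pi, W_Pi, a=1 / 2.6):
    """Return (H_P, W_P): the relative height and relative width for cutting."""
    W_f = X_r - X_l                       # portrait (face) width
    H_P = a * W_f                         # relative height, H_P = a * W_f
    W_P = math.floor(H_P * W_Pi / H_Pi)   # relative width, rounded down to an integer
    return H_P, W_P
```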
The portrait position information frame includes a left face edge, a right face edge, and an upper forehead edge, and in step 33, determining a clipping start point for clipping the portrait image according to the portrait position information frame and the target size information may include the following steps:
3301. determining a portrait central point in the horizontal direction according to the left edge and the right edge of the face;
3302. determining the abscissa of the cutting starting point according to the central point of the portrait and the relative width;
3303. and determining a first vertical coordinate according to the forehead upper edge and the relative height, and taking the first vertical coordinate as the vertical coordinate of the cutting starting point.
The portrait center point in the horizontal direction is determined according to the face left edge and the face right edge; the abscissa of the portrait center point can be determined according to the following formula:

X_C = (X_r + X_l) / 2

where X_C is the abscissa of the portrait center point. Since pixel coordinate values are integers, the obtained X_C is rounded down to obtain the rounded X_C.
The abscissa of the cutting start point is determined according to the portrait center point and the relative width, specifically according to the following formula:

X = X_C - 0.5 * W_P

where X is the abscissa of the cutting start point.
The first ordinate is determined according to the forehead upper edge and the relative height, specifically according to the following formula:

Y1 = Y_T - g1 * H_P

where Y1 is the first ordinate and Y_T is the ordinate of the forehead upper edge. In a specific implementation, the value of g1 differs for target images with different size requirements; for example, g1 may be 0.15.
Thus, the abscissa and ordinate (X, Y1) of the cutting start point can be obtained using the first ordinate as the ordinate of the cutting start point.
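A sketch of the start-point computation just described, using the example value g1 = 0.15:

```python
import math

def crop_start_point(X_l, X_r, Y_T, H_P, W_P, g1=0.15):
    """Return (X, Y1): abscissa and first ordinate of the cutting start point."""
    X_C = math.floor((X_r + X_l) / 2)   # portrait center point in the horizontal direction
    X = X_C - 0.5 * W_P                 # abscissa of the cutting start point
    Y1 = Y_T - g1 * H_P                 # first ordinate from the forehead upper edge
    return X, Y1
```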
Optionally, the following steps can also be included:
3304. carrying out portrait segmentation on the portrait image to obtain a mask image containing a portrait area;
3305. traversing the mask image to obtain a vertical coordinate of the highest point of the portrait;
3306. determining a second vertical coordinate according to the vertical coordinate of the highest point of the portrait and the height of the target image;
3307. and determining an average value of the first vertical coordinate and the second vertical coordinate, rounding the average value downwards to obtain a rounded vertical coordinate, and taking the rounded vertical coordinate as the vertical coordinate of the cutting starting point.
The portrait image can be segmented through the portrait segmentation algorithm to obtain a mask image containing the portrait area, and the intersection-to-union ratio IOU value of the portrait segmentation algorithm can reach more than 0.99, so that the vertical coordinate of the highest point of the portrait can be accurately determined according to the obtained mask image.
In the embodiment of the present application, it is considered that if the cutting start point, the relative height and the relative width are determined solely according to the portrait position information frame obtained by face detection, the face shape of the portrait may affect the calculation result; for example, the face position may end up too low for a long face or a large forehead, and too high for a short face or a small forehead. Therefore, after the first ordinate is determined, a second ordinate may be further determined. Specifically, the portrait image may be segmented to obtain a mask image containing the portrait area, where the mask image is a black-and-white binary image distinguishing the portrait from the background; the mask image is traversed to obtain the ordinate of the highest point of the portrait, as shown in fig. 1D, which is a schematic diagram of a mask image in which the highest point of the portrait can be determined by distinguishing the portrait from the background; and then the second ordinate is determined according to the ordinate of the highest point of the portrait and the target image height.
The second ordinate is determined according to the ordinate of the highest point of the portrait and the target image height, specifically according to the following formula:

Y2 = Y_D - g2 * H_Pi

where Y2 is the second ordinate, Y_D is the ordinate of the highest point of the portrait, and H_Pi is the target image height; g2 is the lower limit value of the ratio of the distance from the forehead to the top of the cut target image to the target image height, and g1 can correspondingly be regarded as the upper limit value of this ratio.
Further, the average value Y_m = (Y1 + Y2) / 2 of the first ordinate and the second ordinate may be determined; the average value is rounded down to obtain a rounded ordinate, and the rounded ordinate is taken as the ordinate of the cutting start point, thereby obtaining the coordinates (X, (Y1 + Y2) / 2) of the cutting start point.
Therefore, the image segmentation is carried out on the portrait image, the second vertical coordinate is determined according to the mask image, and the average value of the first vertical coordinate and the second vertical coordinate is used as the vertical coordinate of the final cutting starting point, so that the influence of the face shape of the portrait on the cutting effect can be reduced; in addition, by combining the mask image and the portrait position information frame, the influence of the hair style of the portrait on the cutting effect can be reduced, so that the accuracy of determining the vertical coordinate of the cutting starting point is improved, the calculation result of the vertical coordinate of the cutting starting point is more stable, and the target image can better meet the requirements of a user.
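A sketch of this mask-based refinement, assuming the mask has already been produced by some pixel-level portrait segmentation and using the example value g2 = 0.03:

```python
import numpy as np

def refined_start_ordinate(mask, Y1, H_Pi, g2=0.03):
    """mask: 2-D array, non-zero where the portrait is. Return the final ordinate Y."""
    rows_with_portrait = np.where(mask.any(axis=1))[0]
    if len(rows_with_portrait) == 0:
        return int(np.floor(Y1))          # no portrait found; fall back to the first ordinate
    Y_D = int(rows_with_portrait[0])      # ordinate of the highest point of the portrait
    Y2 = Y_D - g2 * H_Pi                  # second ordinate
    return int(np.floor((Y1 + Y2) / 2))   # rounded-down average of the two ordinates
```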
Optionally, the method further comprises:
a1, if the abscissa of the cutting starting point is larger than 0, rounding the abscissa of the cutting starting point downwards;
a2, if the abscissa of the cutting starting point is less than or equal to 0, triggering first prompt information, wherein the first prompt information is used for prompting that the portrait is deviated to the left;
a3, determining the abscissa of the cutting edge in the horizontal direction according to the portrait central point and the relative width; if the horizontal coordinate of the cutting edge is larger than the width of the target image, triggering second prompt information, wherein the second prompt information is used for prompting that the portrait is inclined to the right;
a4, if the sum of the ordinate of the cutting start point and the relative height is larger than or equal to the target image height, triggering third prompt information, wherein the third prompt information is used for prompting that the portrait is positioned too low.
In the embodiment of the application, the coordinate system used for image cutting is an upper-left coordinate system with the origin at the upper left corner. Therefore, after the abscissa of the cutting start point is determined, it is judged whether the abscissa of the cutting start point is greater than 0. If so, the abscissa of the cutting start point is rounded down to obtain a rounded abscissa, and the portrait image is cut according to the rounded abscissa. If the abscissa of the cutting start point is less than or equal to 0, it indicates that the portrait is deviated to the left in the cutting process, as shown in fig. 1E, which is a schematic diagram illustrating a portrait leaning to the left during cutting; therefore, first prompt information can be triggered to prompt the user that the portrait is deviated to the left, so that the user can decide whether to shoot the portrait image again, or the user is prompted to reselect a portrait image.
The abscissa of the cutting edge in the horizontal direction can also be determined according to the portrait center point and the relative width, specifically according to the following formula:

X_B = X_C + 0.5 * W_P

where X_B is the abscissa of the cutting edge in the horizontal direction. It can then be determined whether X_B is larger than the target image width W_Pi. If not, the portrait image continues to be cut according to the rounded abscissa; otherwise, it indicates that the portrait is inclined to the right in the cutting process, and second prompt information is triggered to prompt the user that the portrait is inclined to the right, as shown in fig. 1F, which is a schematic diagram illustrating a portrait leaning to the right during cutting; the user can then decide whether to shoot the portrait image again, or the user is prompted to reselect a portrait image.
After the ordinate of the cutting start point is determined, specifically, if the ordinate of the cutting start point is the first ordinate Y1, it can be judged whether the sum of the first ordinate Y1 and the relative height H_P, (Y1 + H_P), is less than the target image height. If not, it indicates that the portrait is positioned too low in the cutting process, and third prompt information is triggered to prompt the user that the portrait is too low, so that the user can decide whether to shoot the portrait image again, or the user is prompted to reselect a portrait image.
Alternatively, if the ordinate of the cutting start point is the average value Y_m of the first ordinate and the second ordinate, it can be judged whether the sum of the average value Y_m and the relative height H_P, (Y_m + H_P), is less than the target image height. If not, it indicates that the portrait is positioned too low in the cutting process, and third prompt information is triggered to prompt the user that the portrait is too low, so that the user can decide whether to shoot the portrait image again, or the user is prompted to reselect a portrait image.
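A sketch of the three boundary checks described above; the prompt strings are placeholders for the first, second and third prompt information, and the comparison bounds follow the text:

```python
def check_crop_bounds(X, Y, X_C, W_P, H_P, H_Pi, W_Pi):
    """Return (ok, prompt). X, Y: cutting start point; X_C: portrait center abscissa."""
    if X <= 0:
        return False, "portrait deviated to the left"      # first prompt information
    X_B = X_C + 0.5 * W_P                                   # cutting edge in the horizontal direction
    if X_B > W_Pi:                                          # compared with the target image width, as above
        return False, "portrait inclined to the right"      # second prompt information
    if Y + H_P >= H_Pi:                                     # compared with the target image height, as above
        return False, "portrait positioned too low"         # third prompt information
    return True, ""
```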
104. And cutting the portrait image according to the cutting control parameters to obtain a target image.
In the embodiment of the application, after the relative height, the relative width and the cutting starting point for cutting the portrait image are determined, the portrait image can be cut from the cutting starting point, a rectangular image with the relative height and the relative width is obtained by cutting, then the rectangular image is stretched or compressed, when the rectangular image is too small, the rectangular image can be stretched, and when the rectangular image is too large, the rectangular image can be compressed, so that a target image with the image size of the target image height and the target image width is obtained. Therefore, by means of cutting first and then stretching or compressing the rectangular image, the error that the portrait is too large or too small caused by the mode of directly cutting the target image can be avoided, and the accuracy of the target image obtained by cutting is improved.
Optionally, in the step 104, the step of cutting the portrait image according to the cutting control parameter to obtain the target image may include the following steps:
41. cutting out a rectangular image downwards and rightwards according to the cutting starting point, the relative height and the relative width;
42. and stretching the rectangular image to obtain a target image with the height consistent with the target image and the width consistent with the target image.
The rectangular image cut downwards and rightwards according to the cutting starting point, the relative height and the relative width is an image which accords with the aspect ratio, and the rectangular image needs to be stretched to obtain a target image which accords with the size of the certificate photo.
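A sketch of the crop-then-stretch step, with OpenCV's resize standing in for the stretching or compressing operation:

```python
import cv2

def crop_and_resize(portrait_bgr, X, Y, H_P, W_P, H_Pi, W_Pi):
    """Cut a W_P x H_P rectangle down and to the right of (X, Y), then resize to the target size."""
    X, Y, H_P, W_P = int(X), int(Y), int(H_P), int(W_P)
    rect = portrait_bgr[Y:Y + H_P, X:X + W_P]   # rectangular image cut from the start point
    target = cv2.resize(rect, (W_Pi, H_Pi))     # stretch or compress to the target width and height
    return target
```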
Optionally, the method further comprises:
if the vertical coordinate of the cutting starting point is less than 0, determining a filling height according to the height of the target image;
filling a transparent blank area above the portrait image according to the filling height to obtain a filled portrait image; filling a black area above the mask image according to the filling height to obtain a filled mask image; correcting the vertical coordinate of the cutting starting point to obtain a corrected target vertical coordinate;
and cutting the filled portrait image according to the corrected target ordinate and the filled mask image to obtain the target image.
If the ordinate of the cutting start point is less than 0, it indicates that the area above the head of the portrait is not large enough for cutting, so that it is difficult to cut out a target image in which the distance between the top of the portrait and the top of the target image meets the requirement, and the head area of the portrait needs to be compensated. The filling height is determined according to the target image height. First, a compensation coefficient g0 may be determined according to the ordinate of the cutting start point: specifically, if the ordinate of the cutting start point is the first ordinate Y1, the compensation coefficient g0 may be determined to be a value greater than g1 (for example, if g1 is 0.15, g0 may be determined to be 0.16); if the ordinate of the cutting start point is the average value Y_m of the first ordinate and the second ordinate, the compensation coefficient g0 may be determined to be a value greater than g2 (for example, if g2 is 0.03, g0 may be determined to be 0.04).
Then, the filling height is determined based on the compensation coefficient g0 and the target image height, and can be calculated according to the following formula:

filling height = g0 * H_Pi
Then, a transparent blank area with a height of g0 * H_Pi and a width of W_P is filled above the head of the portrait, referring to fig. 1G, which is a schematic diagram illustrating compensation of a portrait image; and a black area with a height of g0 * H_Pi and a width of W_P is filled above the mask image, referring to fig. 1H, which is a schematic diagram illustrating compensation of a mask image. The filled portrait image and the filled mask image both have a new image height and image width, and the ordinate of the cutting start point can then be corrected according to the filled portrait image and the filled mask image to obtain the corrected target ordinate.
Therefore, the portrait image and the mask image are compensated, the vertical coordinate of the cutting starting point can be more accurately determined, the target image meeting the requirements is cut, and the intelligence of the cut image is improved.
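A sketch of the head-region compensation; the white fill colour standing in for the "transparent blank area" and the correction Y + filling height are assumptions made for illustration:

```python
import cv2

def compensate_head_region(portrait_bgr, mask, Y, H_Pi, g0=0.16):
    """Pad the top of the portrait image and the mask, and correct the start-point ordinate."""
    pad = int(round(g0 * H_Pi))                      # filling height = g0 * H_Pi
    padded_img = cv2.copyMakeBorder(portrait_bgr, pad, 0, 0, 0,
                                    cv2.BORDER_CONSTANT, value=(255, 255, 255))  # blank area above the portrait
    padded_mask = cv2.copyMakeBorder(mask, pad, 0, 0, 0,
                                     cv2.BORDER_CONSTANT, value=0)               # black area above the mask
    Y_corrected = Y + pad                            # ordinate expressed in the padded image
    return padded_img, padded_mask, Y_corrected
```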
It can be seen that, in the embodiment of the application, a portrait image is acquired; face detection is carried out on the portrait image to obtain a portrait position information frame; a cutting control parameter for cutting the portrait image is determined according to the portrait position information frame; and the portrait image is cut according to the cutting control parameter to obtain the target image. Because the cutting control parameter is determined according to the portrait position information frame, more identification photo sizes and different face shapes can be adapted to, an image meeting the requirements is cut out, and the intelligence of identification photo cutting is improved.
Referring to fig. 2, fig. 2 is a schematic flowchart of an image processing method applied to an electronic device according to an embodiment of the present disclosure, where the method includes:
A portrait image is acquired. Face detection is performed on the portrait image to obtain a portrait position information frame, where the portrait position information frame includes a forehead feature point Y_T, a chin feature point, a face left edge feature point X_l and a face right edge feature point X_r. The portrait width is determined according to the face left edge and the face right edge: W_f = X_r - X_l. The relative height is determined according to the portrait width: H_P = a * W_f. The first ordinate is determined according to the forehead upper edge and the relative height: Y1 = Y_T - g1 * H_P, and the first ordinate is taken as the ordinate of the cutting start point, Y = Y1. It is then judged whether the sum of the ordinate Y of the cutting start point and the relative height H_P, (Y + H_P), is less than the target image height H_Pi. If not, it indicates that the portrait is positioned too low during cutting of the portrait image, and third prompt information can be triggered to prompt the user that the portrait is too low. If yes, it is judged whether the ordinate Y of the cutting start point is less than 0; if not, it indicates that the height of the portrait image is sufficient for cutting; if yes, the filling height is determined according to the target image height, the portrait image and the mask image are filled according to the filling height, and the ordinate Y of the cutting start point is corrected according to the filled portrait image and the filled mask image.
The portrait center point in the horizontal direction is determined: X_C = (X_r + X_l) / 2. The target size information of the target image is determined, where the target size information includes a target image height H_Pi and a target image width W_Pi. The relative width W_P is determined according to the ratio of the target image height H_Pi to the target image width W_Pi and the relative height H_P, and the abscissa of the cutting start point is determined according to the portrait center point and the relative width: X = X_C - 0.5 * W_P. It is judged whether (X_C + 0.5 * W_P) is larger than the target image width W_Pi; if so, it indicates that the portrait is inclined to the right in the cutting process, and second prompt information is triggered to prompt the user that the portrait is inclined to the right. If not, it is determined whether the abscissa X of the cutting start point is larger than 0; if not, it indicates that the portrait is deviated to the left in the cutting process, and first prompt information is triggered to prompt the user that the portrait is deviated to the left; if so, the abscissa of the cutting start point is rounded down to obtain the rounded abscissa.
Taking the cutting start point (X, Y) as the top left vertex, a rectangular image with a width of W_P and a height of H_P is cut downwards and to the right, and the rectangular image is stretched to obtain a target image consistent with the target image height H_Pi and the target image width W_Pi.
The specific implementation process of the above steps may refer to corresponding descriptions in steps 101 to 104, which are not described herein again.
It can be seen that, in the embodiment of the application, a portrait image is acquired; face detection is performed on the portrait image to obtain a portrait position information frame; the relative height, the relative width and the cutting start point for cutting the portrait image are determined according to the portrait position information frame; and the target image is cut out according to the cutting start point, the relative height and the relative width. In this way, more identification photo sizes and different face shapes can be adapted to, and an image meeting the requirements is cut out. In addition, by flexibly judging whether the abscissa and the ordinate of the cutting edge satisfy the corresponding conditions, the user is prompted when the portrait image does not meet the requirements, thereby improving the intelligence of identification photo cutting.
Referring to fig. 3, in accordance with the aforementioned fig. 1B, fig. 3 is a schematic flowchart of another image processing method provided in the present application, applied to an electronic device, the method including:
A portrait image is acquired. Portrait segmentation is performed on the portrait image to obtain a mask image containing the portrait area, and the mask image is traversed to obtain the ordinate Y_D of the highest point of the portrait. Face detection is performed on the portrait image to obtain a portrait position information frame, where the portrait position information frame includes a forehead feature point Y_T, a chin feature point, a face left edge feature point X_l and a face right edge feature point X_r. The portrait width is determined according to the face left edge and the face right edge: W_f = X_r - X_l. The relative height is determined according to the portrait width: H_P = a * W_f, and the first ordinate is determined according to the forehead upper edge and the relative height: Y1 = Y_T - g1 * H_P. The second ordinate is determined according to the ordinate Y_D of the highest point of the portrait and the target image height H_Pi: Y2 = Y_D - g2 * H_Pi. The average value Y_m = (Y1 + Y2) / 2 of the first ordinate and the second ordinate is determined and taken as the ordinate of the cutting start point, Y = Y_m. It is judged whether the sum of the ordinate Y of the cutting start point and the relative height H_P, (Y + H_P), is less than the target image height. If not, it indicates that the portrait is positioned too low during cutting, and third prompt information can be triggered to prompt the user that the portrait is too low. If yes, it is judged whether the ordinate Y of the cutting start point is less than 0; if yes, the filling height is determined according to the target image height, the portrait image and the mask image are filled according to the filling height, and the ordinate Y of the cutting start point is corrected according to the filled portrait image and the filled mask image.
The portrait center point in the horizontal direction is determined: X_C = (X_r + X_l) / 2. The target size information of the target image is determined, where the target size information includes a target image height H_Pi and a target image width W_Pi. The relative width W_P is determined according to the ratio of the target image height H_Pi to the target image width W_Pi and the relative height H_P, and the abscissa of the cutting start point is determined according to the portrait center point and the relative width: X = X_C - 0.5 * W_P. It is judged whether (X_C + 0.5 * W_P) is larger than the target image width; if so, it indicates that the portrait is inclined to the right in the cutting process, and second prompt information can be triggered to prompt the user that the portrait is inclined to the right. If not, it is determined whether the abscissa X of the cutting start point is larger than 0; if not, it indicates that the portrait is deviated to the left in the cutting process, and first prompt information is triggered to prompt the user that the portrait is deviated to the left; if so, the abscissa of the cutting start point is rounded down to obtain the rounded abscissa.
Taking the cutting start point (X, Y) as the top left vertex, a rectangular image with a width of W_P and a height of H_P is cut downwards and to the right, and the rectangular image is stretched to obtain a target image consistent with the target image height H_Pi and the target image width W_Pi.
In this way, the cutting start point, the relative height and the relative width are determined by combining the portrait position information frame obtained by face detection with the mask image obtained by portrait segmentation, which avoids the errors that may be caused by determining them from the portrait position information frame alone or from the mask image alone. If the cutting start point, the relative height and the relative width are determined only from the portrait position information frame obtained by face detection, the face shape of the portrait affects the calculation result; for example, the face position may end up too low for a long face or a large forehead, and too high for a short face or a small forehead. If the relative height is determined from the mask image alone, the hairstyle of the portrait may affect the calculation; for example, tall hair may result in a face position that is too low, while a bald head may result in a face position that is too high. Therefore, determining the cutting start point, the relative height and the relative width by combining the portrait position information frame obtained by face detection with the mask image obtained by portrait segmentation can effectively avoid the defects caused by the above situations.
It can be seen that, in the embodiment of the application, a portrait image is acquired; portrait segmentation is carried out on the portrait image to obtain a mask image containing a portrait area; face detection is carried out on the portrait image to obtain a portrait position information frame; the relative height and the relative width for cutting the portrait image are determined according to the portrait position information frame, and the cutting starting point is determined according to the portrait position information frame and the mask image; and the target image is cut out according to the cutting starting point, the relative height and the relative width. In this way, more ID photo sizes and different face shapes can be accommodated and an image meeting the requirements can be cut out; in addition, by flexibly judging whether the abscissa and the ordinate of the cutting edge satisfy the corresponding conditions, the user can be prompted when the portrait image does not meet the requirements, thereby improving the intelligence of ID photo cropping.
The following is a device for implementing the image processing method, specifically as follows:
In accordance with the above, please refer to fig. 4, where fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure; the electronic device includes: a processor 410, a communication interface 430, and a memory 420; and one or more programs 421, the one or more programs 421 stored in the memory 420 and configured to be executed by the processor, the programs 421 including instructions for:
acquiring a portrait image;
carrying out face detection on the portrait image to obtain a portrait position information frame;
determining a cutting control parameter for cutting the portrait image according to the portrait position information frame;
and cutting the portrait image according to the cutting control parameters to obtain a target image.
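Read together, the four instructions amount to a small pipeline. The sketch below is only an illustration of that flow; detect_face, determine_crop_params and crop_to_target stand in for whatever concrete routines an implementation supplies and are not names defined by this disclosure.

    from PIL import Image

    def process_portrait(path, detect_face, determine_crop_params, crop_to_target):
        """Acquire -> detect -> determine cropping parameters -> crop."""
        portrait = Image.open(path)                               # acquire a portrait image
        face_box = detect_face(portrait)                          # portrait position information frame
        params = determine_crop_params(face_box, portrait.size)   # cropping control parameters
        return crop_to_target(portrait, params)                   # target image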
In one possible example, the cropping control parameters include: the relative height, relative width, and cropping start point for cropping the portrait image, in terms of determining the cropping control parameters for cropping the portrait image according to the portrait position information frame, the program 421 further includes instructions for:
determining target size information of the target image, wherein the target size information comprises a target image height and a target image width;
determining the relative height and the relative width for cutting the portrait image according to the portrait position information frame and the target size information;
and determining a cutting starting point for cutting the portrait image according to the portrait position information frame and the target size information.
In one possible example, the portrait position information box includes a face left edge and a face right edge, and in the determining of the relative height and the relative width for cropping the portrait image according to the portrait position information box and the target size information, the program 421 includes instructions for:
determining the width of a portrait according to the left edge and the right edge of the face;
determining the relative height according to the portrait width;
determining the relative width according to the target image height, the ratio of the target image width, and the relative height.
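The three determinations above reduce to a few lines of arithmetic. In the sketch below the proportionality factor a is an assumed stand-in for the coefficient that relates face width to relative height; it is not specified by the disclosure.

    def cropping_size(x_left, x_right, target_width, target_height, a=2.0):
        """Derive the relative (crop) height and width from the face box
        and the target aspect ratio."""
        face_width = x_right - x_left                           # portrait width W_f
        rel_height = a * face_width                             # relative height H_P
        rel_width = rel_height * target_width / target_height   # keep the target aspect ratio
        return rel_height, rel_width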
In one possible example, the portrait position information box includes a face left edge, a face right edge, and a forehead upper edge, and in the aspect of determining the cropping start point for cropping the portrait image according to the portrait position information box and the target size information, the program 421 includes instructions for performing the following steps:
determining a portrait central point in the horizontal direction according to the left edge and the right edge of the face;
determining the abscissa of the cutting starting point according to the central point of the portrait and the relative width;
and determining a first vertical coordinate according to the forehead upper edge and the relative height, and taking the first vertical coordinate as the vertical coordinate of the cutting starting point.
In one possible example, the program 421 further includes instructions for performing the steps of:
carrying out portrait segmentation on the portrait image to obtain a mask image containing a portrait area;
traversing the mask image to obtain a vertical coordinate of the highest point of the portrait;
determining a second vertical coordinate according to the vertical coordinate of the highest point of the portrait and the height of the target image;
and determining an average value of the first vertical coordinate and the second vertical coordinate, rounding the average value downwards to obtain a rounded vertical coordinate, and taking the rounded vertical coordinate as the vertical coordinate of the cutting starting point.
In one possible example, the program 421 further includes instructions for performing the steps of:
if the abscissa of the cutting starting point is larger than 0, rounding the abscissa of the cutting starting point downwards;
if the abscissa of the cutting starting point is less than or equal to 0, triggering first prompt information, wherein the first prompt information is used for prompting that the portrait is deviated to the left;
determining the abscissa of the cutting edge in the horizontal direction according to the central point of the portrait and the relative width; if the horizontal coordinate of the cutting edge is larger than the width of the target image, triggering second prompt information, wherein the second prompt information is used for prompting that the portrait is inclined to the right;
and if the sum of the vertical coordinate of the cutting starting point and the relative height is greater than or equal to the height of the target image, triggering third prompt information, wherein the third prompt information is used for prompting that the portrait is positioned too low.
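Taken together, the three prompts are bounds checks on the candidate crop rectangle. The sketch below checks the rectangle against the portrait image dimensions, which is one reading of the text; the prompt wording and the return convention are assumptions.

    def framing_prompt(x, y, rel_width, rel_height, image_width, image_height):
        """Return a user-facing hint when the crop rectangle does not fit,
        otherwise None."""
        if x <= 0:
            return "first prompt: portrait leans to the left"
        if x + rel_width > image_width:
            return "second prompt: portrait leans to the right"
        if y + rel_height >= image_height:
            return "third prompt: portrait is positioned too low"
        return None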
In one possible example, the program 421 further includes instructions for performing the steps of:
if the vertical coordinate of the cutting starting point is less than 0, determining a filling height according to the height of the target image;
filling a transparent blank area above the portrait image according to the filling height to obtain a filled portrait image; filling a black area above the mask image according to the filling height to obtain a filled mask image; correcting the vertical coordinate of the cutting starting point to obtain a corrected target vertical coordinate;
and cutting the filled portrait image according to the corrected target ordinate and the filled mask image to obtain the target image.
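A sketch of the padding step with Pillow and NumPy. The rule used below for the filling height (a fixed fraction of the target image height, but at least enough to make the ordinate non-negative) is an assumption, since the text only states that the filling height is determined according to the target image height.

    import numpy as np
    from PIL import Image

    def pad_top(portrait, mask, y, target_height, fill_ratio=0.1):
        """Pad a transparent strip above the portrait image and a black strip
        above the mask when the crop-start ordinate y is negative."""
        if y >= 0:
            return portrait, mask, y
        fill = int(np.ceil(max(fill_ratio * target_height, -y)))             # assumed fill-height rule
        width, height = portrait.size
        padded = Image.new("RGBA", (width, height + fill), (0, 0, 0, 0))      # transparent blank area
        padded.paste(portrait.convert("RGBA"), (0, fill))
        black_rows = np.zeros((fill, mask.shape[1]), dtype=mask.dtype)        # black area above the mask
        padded_mask = np.vstack([black_rows, mask])
        return padded, padded_mask, y + fill                                  # corrected target ordinate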
In one possible example, in the aspect of cropping the portrait image according to the cropping control parameter to obtain the target image, the program 421 includes instructions for:
cutting out a rectangular image downwards and rightwards according to the cutting starting point, the relative height and the relative width;
and stretching the rectangular image to obtain a target image with the height consistent with the target image and the width consistent with the target image.
Referring to fig. 5A, fig. 5A is a schematic structural diagram of an image processing apparatus 500 applied to an electronic device according to the present embodiment, where the apparatus 500 includes an obtaining unit 501, a detecting unit 502, a determining unit 503, and a cropping unit 504, where,
the acquiring unit 501 is configured to acquire a portrait image;
a detection unit 502, configured to perform face detection on the portrait image to obtain a portrait position information frame;
a determining unit 503, configured to determine, according to the portrait position information frame, a cropping control parameter for cropping the portrait image;
and a clipping unit 504, configured to clip the portrait image according to the clipping control parameter, so as to obtain a target image.
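The unit structure maps naturally onto a small composition of callables. The class below is only an organizational sketch; every name in it is an assumption, and it is not the claimed apparatus itself.

    class ImageProcessingApparatus:
        """Mirrors the obtaining / detection / determining / cropping units."""

        def __init__(self, acquire, detect_face, determine_params, crop):
            self.acquire = acquire                      # obtaining unit 501
            self.detect_face = detect_face              # detection unit 502
            self.determine_params = determine_params    # determining unit 503
            self.crop = crop                            # cropping unit 504

        def run(self, source):
            portrait = self.acquire(source)
            face_box = self.detect_face(portrait)
            params = self.determine_params(face_box)
            return self.crop(portrait, params)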
Optionally, the cropping control parameters include: the relative height, the relative width, and the cutting starting point for cutting the portrait image; in the aspect of determining the cutting control parameters for cutting the portrait image according to the portrait position information frame, the determining unit 503 is specifically configured to:
determining target size information of the target image, wherein the target size information comprises a target image height and a target image width;
determining the relative height and the relative width for cutting the portrait image according to the portrait position information frame and the target size information;
and determining a cutting starting point for cutting the portrait image according to the portrait position information frame and the target size information.
Optionally, the portrait position information frame includes a left face edge and a right face edge, and in the aspect of determining the relative height and the relative width for clipping the portrait image according to the portrait position information frame and the target size information, the determining unit 503 is specifically configured to:
determining the width of a portrait according to the left edge and the right edge of the face;
determining the relative height according to the portrait width;
determining the relative width according to the target image height, the ratio of the target image width, and the relative height.
Optionally, the portrait position information frame includes a left face edge, a right face edge, and an upper forehead edge, and in the aspect of determining a cropping start point for cropping the portrait image according to the portrait position information frame and the target size information, the determining unit 503 is specifically configured to:
determining a portrait central point in the horizontal direction according to the left edge and the right edge of the face;
determining the abscissa of the cutting starting point according to the central point of the portrait and the relative width;
and determining a first vertical coordinate according to the forehead upper edge and the relative height, and taking the first vertical coordinate as the vertical coordinate of the cutting starting point.
Alternatively, as shown in fig. 5B, fig. 5B is a modified structure of the image processing apparatus depicted in fig. 5A, which may further include, compared with fig. 5A: a segmentation unit 505 and a processing unit 506, wherein,
the segmentation unit 505 is configured to perform portrait segmentation on the portrait image to obtain a mask image including a portrait region;
the processing unit 506 is configured to traverse the mask image to obtain a vertical coordinate of a highest point of the portrait;
the determining unit 503 is further configured to determine a second ordinate according to the ordinate of the highest point of the portrait and the height of the target image; and determining an average value of the first vertical coordinate and the second vertical coordinate, rounding the average value downwards to obtain a rounded vertical coordinate, and taking the rounded vertical coordinate as the vertical coordinate of the cutting starting point.
Optionally, the processing unit 506 is further configured to:
if the abscissa of the cutting starting point is larger than 0, rounding the abscissa of the cutting starting point downwards;
if the abscissa of the cutting starting point is less than or equal to 0, triggering first prompt information, wherein the first prompt information is used for prompting that the portrait is deviated to the left;
determining the abscissa of the cutting edge in the horizontal direction according to the central point of the portrait and the relative width; if the horizontal coordinate of the cutting edge is larger than the width of the target image, triggering second prompt information, wherein the second prompt information is used for prompting that the portrait is inclined to the right;
and if the sum of the vertical coordinate of the cutting starting point and the relative height is greater than or equal to the height of the target image, triggering third prompt information, wherein the third prompt information is used for prompting that the portrait is positioned too low.
Alternatively, as shown in fig. 5C, fig. 5C is a further modified structure of the image processing apparatus depicted in fig. 5A, which may further include, compared with fig. 5A: a filling unit 507, wherein,
the determining unit 503 is further configured to determine a filling height according to the height of the target image if the vertical coordinate of the cutting start point is less than 0;
the filling unit 507 is configured to fill a transparent blank area above the portrait image according to the filling height, so as to obtain a filled portrait image; filling a black area above the mask image according to the filling height to obtain a filled mask image; correcting the vertical coordinate of the cutting starting point to obtain a corrected target vertical coordinate;
the cropping unit 504 is further configured to crop the filled portrait image according to the modified target ordinate and the filled mask image, so as to obtain the target image.
Optionally, in the aspect that the portrait image is cropped according to the cropping control parameter to obtain a target image, the cropping unit 504 is specifically configured to:
cutting out a rectangular image downwards and rightwards according to the cutting starting point, the relative height and the relative width;
and stretching the rectangular image to obtain a target image with the height consistent with the target image and the width consistent with the target image.
It can be seen that the image processing apparatus described in the embodiment of the present application acquires a portrait image; carries out face detection on the portrait image to obtain a portrait position information frame; determines cutting control parameters for cutting the portrait image according to the portrait position information frame; and cuts the portrait image according to the cutting control parameters to obtain the target image. In this way, the cutting control parameters for cutting the portrait image are determined according to the portrait position information frame, so that more ID photo sizes and different face shapes can be accommodated, an image meeting the requirements is cut out, and the intelligence of ID photo cropping is improved.
It is to be understood that the functions of each program module of the image processing apparatus of this embodiment may be specifically implemented according to the method in the foregoing method embodiment, and the specific implementation process may refer to the relevant description of the foregoing method embodiment, which is not described herein again.
Embodiments of the present application also provide a computer storage medium, where the computer storage medium stores a computer program for electronic data exchange, the computer program enabling a computer to execute part or all of the steps of any one of the methods described in the above method embodiments, and the computer includes an electronic device.
Embodiments of the present application also provide a computer program product comprising a non-transitory computer readable storage medium storing a computer program operable to cause a computer to perform some or all of the steps of any of the methods as described in the above method embodiments. The computer program product may be a software installation package, the computer comprising an electronic device.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present application is not limited by the order of acts described, as some steps may occur in other orders or concurrently depending on the application. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required in this application.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the above-described embodiments of the apparatus are merely illustrative, and for example, the above-described division of the units is only one type of division of logical functions, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of some interfaces, devices or units, and may be an electric or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit may be stored in a computer readable memory if it is implemented in the form of a software functional unit and sold or used as a stand-alone product. Based on such understanding, the technical solution of the present application, in essence, or the part thereof contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a memory, including several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods of the embodiments of the present application. The aforementioned memory includes: a USB flash drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk or an optical disk, and other media capable of storing program codes.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by associated hardware instructed by a program, which may be stored in a computer-readable memory, which may include: flash Memory disks, Read-Only memories (ROMs), Random Access Memories (RAMs), magnetic or optical disks, and the like.
The foregoing detailed description of the embodiments of the present application has been presented to illustrate the principles and implementations of the present application, and the above description of the embodiments is only provided to help understand the method and the core concept of the present application; meanwhile, for a person skilled in the art, according to the idea of the present application, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present application.

Claims (11)

1. An image processing method, characterized in that the method comprises:
acquiring a portrait image;
carrying out face detection on the portrait image to obtain a portrait position information frame;
determining a cutting control parameter for cutting the portrait image according to the portrait position information frame;
and cutting the portrait image according to the cutting control parameters to obtain a target image.
2. The method of claim 1, wherein the clipping control parameters comprise: the relative height, the relative width and the cutting starting point for cutting the portrait image, and the determining of the cutting control parameters for cutting the portrait image according to the portrait position information frame includes:
determining target size information of the target image, wherein the target size information comprises a target image height and a target image width;
determining the relative height and the relative width for cutting the portrait image according to the portrait position information frame and the target size information;
and determining a cutting starting point for cutting the portrait image according to the portrait position information frame and the target size information.
3. The method of claim 2, wherein the portrait position information box includes a left face edge and a right face edge, and wherein determining the relative height and the relative width for cropping the portrait image according to the portrait position information box and the target size information comprises:
determining the width of a portrait according to the left edge and the right edge of the face;
determining the relative height according to the portrait width;
determining the relative width according to the target image height, the ratio of the target image width, and the relative height.
4. The method according to claim 2, wherein the portrait position information frame includes a face left edge, a face right edge, and a forehead upper edge, and the determining a cropping start point for cropping the portrait image according to the portrait position information frame and the target size information includes:
determining a portrait central point in the horizontal direction according to the left edge and the right edge of the face;
determining the abscissa of the cutting starting point according to the central point of the portrait and the relative width;
and determining a first vertical coordinate according to the forehead upper edge and the relative height, and taking the first vertical coordinate as the vertical coordinate of the cutting starting point.
5. The method of claim 4, further comprising:
carrying out portrait segmentation on the portrait image to obtain a mask image containing a portrait area;
traversing the mask image to obtain a vertical coordinate of the highest point of the portrait;
determining a second vertical coordinate according to the vertical coordinate of the highest point of the portrait and the height of the target image;
and determining an average value of the first vertical coordinate and the second vertical coordinate, rounding the average value downwards to obtain a rounded vertical coordinate, and taking the rounded vertical coordinate as the vertical coordinate of the cutting starting point.
6. The method according to claim 4 or 5, characterized in that the method further comprises:
if the abscissa of the cutting starting point is larger than 0, rounding the abscissa of the cutting starting point downwards;
if the abscissa of the cutting starting point is less than or equal to 0, triggering first prompt information, wherein the first prompt information is used for prompting that the portrait is deviated to the left;
determining the abscissa of the cutting edge in the horizontal direction according to the central point of the portrait and the relative width; if the horizontal coordinate of the cutting edge is larger than the width of the target image, triggering second prompt information, wherein the second prompt information is used for prompting that the portrait is inclined to the right;
and if the sum of the vertical coordinate of the cutting starting point and the relative height is greater than or equal to the height of the target image, triggering third prompt information, wherein the third prompt information is used for prompting that the portrait is positioned too low.
7. The method according to claim 4 or 5, characterized in that the method further comprises:
if the vertical coordinate of the cutting starting point is less than 0, determining a filling height according to the height of the target image;
filling a transparent blank area above the portrait image according to the filling height to obtain a filled portrait image; filling a black area above the mask image according to the filling height to obtain a filled mask image; correcting the vertical coordinate of the cutting starting point to obtain a corrected target vertical coordinate;
and cutting the filled portrait image according to the corrected target ordinate and the filled mask image to obtain the target image.
8. The method according to any one of claims 2-7, wherein the cropping the portrait image according to the cropping control parameters to obtain a target image comprises:
cutting out a rectangular image downwards and rightwards according to the cutting starting point, the relative height and the relative width;
and stretching the rectangular image to obtain a target image with the height consistent with the target image and the width consistent with the target image.
9. An image processing apparatus, characterized in that the apparatus comprises:
an acquisition unit for acquiring a portrait image;
the detection unit is used for carrying out face detection on the portrait image to obtain a portrait position information frame;
the determining unit is used for determining a cutting control parameter for cutting the portrait image according to the portrait position information frame;
and the cutting unit is used for cutting the portrait image according to the cutting control parameters to obtain a target image.
10. An electronic device comprising a processor, memory, a communication interface, and one or more programs, the memory for storing the one or more programs and configured for execution by the processor, the programs comprising instructions for performing the steps of the method of any of claims 1-8.
11. A computer-readable storage medium, characterized in that a computer program for electronic data exchange is stored, wherein the computer program causes a computer to perform the method according to any one of claims 1-8.
CN202010426843.3A 2020-05-19 2020-05-19 Image processing method, device, electronic equipment and storage medium Active CN111626166B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010426843.3A CN111626166B (en) 2020-05-19 2020-05-19 Image processing method, device, electronic equipment and storage medium


Publications (2)

Publication Number Publication Date
CN111626166A true CN111626166A (en) 2020-09-04
CN111626166B CN111626166B (en) 2023-06-09

Family

ID=72258957

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010426843.3A Active CN111626166B (en) 2020-05-19 2020-05-19 Image processing method, device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111626166B (en)



Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101216881A (en) * 2007-12-28 2008-07-09 北京中星微电子有限公司 A method and device for automatic image acquisition
CN102592260A (en) * 2011-12-26 2012-07-18 广州商景网络科技有限公司 Certificate image cutting method and system
CN104392202A (en) * 2014-10-11 2015-03-04 北京中搜网络技术股份有限公司 Image-identification-based automatic cutting method
CN104361329A (en) * 2014-11-25 2015-02-18 成都品果科技有限公司 Photo cropping method and system based on face recognition
CN106020745A (en) * 2016-05-16 2016-10-12 北京清软海芯科技有限公司 Human face identification-based pancake printing path generation method and apparatus
CN108921856A (en) * 2018-06-14 2018-11-30 北京微播视界科技有限公司 Image cropping method, apparatus, electronic equipment and computer readable storage medium
CN110853071A (en) * 2018-08-21 2020-02-28 Tcl集团股份有限公司 Image editing method and terminal equipment
CN109413326A (en) * 2018-09-18 2019-03-01 Oppo(重庆)智能科技有限公司 Camera control method and Related product
CN110223301A (en) * 2019-03-01 2019-09-10 华为技术有限公司 A kind of image cropping method and electronic equipment
CN110222573A (en) * 2019-05-07 2019-09-10 平安科技(深圳)有限公司 Face identification method, device, computer equipment and storage medium

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112367521A (en) * 2020-10-27 2021-02-12 广州华多网络科技有限公司 Display screen content sharing method and device, computer equipment and storage medium
CN112348832A (en) * 2020-11-05 2021-02-09 Oppo广东移动通信有限公司 Picture processing method and device, electronic equipment and storage medium
CN112966578A (en) * 2021-02-23 2021-06-15 Oppo广东移动通信有限公司 Image processing method, image processing device, electronic equipment and storage medium
CN113538460A (en) * 2021-07-12 2021-10-22 中国科学院地质与地球物理研究所 Shale CT image cutting method and system
CN113538460B (en) * 2021-07-12 2022-04-08 中国科学院地质与地球物理研究所 Shale CT image cutting method and system
WO2023010661A1 (en) * 2021-08-04 2023-02-09 展讯通信(上海)有限公司 Image cropping method and related product
CN117412158A (en) * 2023-10-09 2024-01-16 广州翼拍联盟网络技术有限公司 Method, device, equipment and medium for processing photographed original image into multiple credentials

Also Published As

Publication number Publication date
CN111626166B (en) 2023-06-09


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant