CN108009470B - Image extraction method and device - Google Patents

Image extraction method and device

Info

Publication number
CN108009470B
CN108009470B (application CN201710998445.7A)
Authority
CN
China
Prior art keywords
contour
coordinate value
hair
point
center point
Prior art date
Legal status
Active
Application number
CN201710998445.7A
Other languages
Chinese (zh)
Other versions
CN108009470A (en)
Inventor
刘岱昕
Current Assignee
Shenzhen Landsky Network Technology Co ltd
Original Assignee
Shenzhen Landsky Network Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Landsky Network Technology Co ltd filed Critical Shenzhen Landsky Network Technology Co ltd
Priority to CN201710998445.7A priority Critical patent/CN108009470B/en
Publication of CN108009470A publication Critical patent/CN108009470A/en
Application granted granted Critical
Publication of CN108009470B publication Critical patent/CN108009470B/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G06V40/169 Holistic features and representations, i.e. based on the facial image taken as a whole
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G06V40/171 Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The embodiment of the application discloses a method and a device for extracting an image. The terminal first identifies the face contour of a target person head image; next, it determines the hair contour of the target person head image according to the face contour; finally, it extracts image information from the target person head image according to the face contour and the hair contour and determines a second image comprising the person head contour. The embodiment of the application helps improve the integrity and accuracy of image extraction.

Description

Image extraction method and device
Technical Field
The invention relates to the field of computers, and in particular to a method and a device for extracting an image.
Background
Nowadays, many people like to beautify pictures, and beautification tools are increasingly well known. For example, the Photoshop software tool can be used to process pictures on a computer, or a photo-beautification application can be used to beautify pictures on a mobile phone.
Disclosure of Invention
The embodiment of the invention discloses a method and a device for extracting an image, which help improve the integrity and accuracy of image extraction.
The first aspect of the embodiment of the invention discloses an image extraction method, which comprises the following steps:
identifying a face contour of a target person head image;
determining a hair contour of the target person head image according to the face contour;
and extracting image information in the target human head image according to the face contour and the hair contour, and determining a second image comprising the human head contour.
In one possible design, the face contour comprises the coordinate value of a first human eye center point, the coordinate value of a second human eye center point, the coordinate value of a mouth center point, the coordinate value of a first zygomatic bone feature point and the coordinate value of a second zygomatic bone feature point. The determining the hair contour of the target person head image according to the face contour comprises: determining the coordinate value of the contour feature point of the hair contour according to the coordinate value of the first human eye center point, the coordinate value of the second human eye center point and the coordinate value of the mouth center point; determining the coordinate value of a first hair feature point and the coordinate value of a second hair feature point of the hair region of the target human head image according to the coordinate value of the contour feature point, the coordinate value of the first zygomatic bone feature point and the coordinate value of the second zygomatic bone feature point; and determining the hair contour according to the first hair feature point and the second hair feature point.
In one possible design, the determining the coordinate values of the contour feature points of the hair contour according to the coordinate values of the first eye center point, the second eye center point and the mouth center point includes: determining a coordinate value of a first reference center point between a first human eye center point and a second human eye center point according to the coordinate value of the first human eye center point and the coordinate value of the second human eye center point; determining a first distance between the first reference center point and the mouth center point according to the coordinate value of the first reference center point and the coordinate value of the mouth center point; and determining the coordinate value of the profile feature point of the hair profile according to the first distance, the coordinate value of the first reference center point and a preset relationship between the first distance and a second distance, wherein the second distance is the distance between the first reference center point and the profile feature point.
In a possible design, the determining, according to the coordinate values of the contour feature point, the coordinate values of the first zygomatic bone feature point, and the coordinate values of the second zygomatic bone feature point, the coordinate values of the first hair feature point and the coordinate values of the second hair feature point of the hair region of the target human head image includes: determining a third distance between the coordinate values of the first zygomatic bone feature point and the coordinate values of the second zygomatic bone feature point; determining coordinate values of a second reference center point between the first reference center point and the contour feature points; and determining the coordinate values of the first hair characteristic point and the second hair characteristic point of the hair area of the target person head image according to the third distance and the coordinate value of the second reference center point.
In one possible design, the extracting image information in the target head image according to the face contour and the hair contour, and determining a second image including a head contour includes: determining an image information extraction area according to the face contour and the hair contour; and extracting image information in the image information extraction area in the target human head image to obtain a second image comprising the human head outline.
In a second aspect, an embodiment of the present application provides an image extraction apparatus, which includes an identification module, a determination module, and an extraction module.
The identification module is used for identifying the face contour of the target person head image;
the determining module is used for determining the hair contour of the target person head image according to the face contour;
and the extracting module is used for extracting image information from the target human head image according to the face contour and the hair contour and determining a second image comprising the human head contour.
In one possible design, the determining module is specifically configured to: determine the coordinate value of the contour feature point of the hair contour according to the coordinate value of the first human eye center point, the coordinate value of the second human eye center point and the coordinate value of the mouth center point; determine the coordinate value of a first hair feature point and the coordinate value of a second hair feature point of the hair region of the target human head image according to the coordinate value of the contour feature point, the coordinate value of the first zygomatic bone feature point and the coordinate value of the second zygomatic bone feature point; and determine the hair contour according to the first hair feature point and the second hair feature point.
In one possible design, the determining module is specifically configured to: determining a coordinate value of a first reference center point between a first human eye center point and a second human eye center point according to the coordinate value of the first human eye center point and the coordinate value of the second human eye center point; determining a first distance between the first reference center point and the mouth center point according to the coordinate value of the first reference center point and the coordinate value of the mouth center point; and determining the coordinate value of the profile feature point of the hair profile according to the first distance, the coordinate value of the first reference center point and a preset relationship between the first distance and a second distance, wherein the second distance is the distance between the first reference center point and the profile feature point.
In one possible design, the determining module is specifically configured to: determining a third distance between the coordinate values of the first zygomatic bone feature point and the coordinate values of the second zygomatic bone feature point; determining coordinate values of a second reference center point between the first reference center point and the contour feature points; and determining the coordinate values of the first hair characteristic point and the second hair characteristic point of the hair area of the target person head image according to the third distance and the coordinate value of the second reference center point.
In one possible design, the extraction module is specifically configured to: determining an image information extraction area according to the face contour and the hair contour; and extracting image information in the image information extraction area in the target human head image to obtain a second image comprising the human head outline.
In a third aspect, an embodiment of the present application provides a terminal, including a processor and a memory; the memory stores a program, and the processor executes the program stored in the memory to perform the steps described in any one of the methods of the first aspect of the embodiments of the present application.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium, where the computer-readable storage medium stores a computer program for electronic data exchange, and the computer program causes a computer to perform part or all of the steps described in any one of the methods of the first aspect of the embodiments of the present application. The computer comprises a mobile terminal.
In a fifth aspect, the present application provides a computer program product, wherein the computer program product includes a non-transitory computer-readable storage medium storing a computer program, and the computer program is operable to cause a computer to perform some or all of the steps as described in any one of the methods of the first aspect of the embodiments of the present application. The computer program product may be a software installation package, the computer comprising a mobile terminal.
It can be seen that, in the embodiment of the application, the terminal first identifies the face contour of the target human head image; then determines the hair contour of the target human head image according to the face contour; and finally extracts image information from the target human head image according to the face contour and the hair contour and determines a second image comprising the human head contour. The embodiment of the application helps improve the integrity and accuracy of image extraction.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained according to these drawings without creative efforts.
FIG. 1 is a schematic flow chart of an image extraction method according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart of an image extraction method according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a human head image structure according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of a terminal disclosed in the embodiment of the present invention;
fig. 5 is a schematic structural diagram of a terminal disclosed in the embodiment of the present invention;
fig. 6 is a schematic structural diagram of a smart phone disclosed in the embodiment of the present invention.
Detailed Description
In order to make the technical solutions of the present application better understood, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first," "second," and the like in the description and claims of the present application and in the above-described drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
In the embodiment of the present application, the terminal may include, but is not limited to, a smart phone, a palmtop computer, a notebook computer, a desktop computer, and the like. The operating system of the terminal may include, but is not limited to, the Android operating system, the iOS operating system, the Symbian operating system, the BlackBerry operating system, the Windows Phone 8 operating system, and the like, which is not limited in the embodiment of the present invention.
Embodiments of the present application will be described below with reference to the accompanying drawings.
Referring to fig. 1, fig. 1 is a schematic flow chart of an image extraction method according to an embodiment of the present invention. As shown in fig. 1, the image extraction method may include:
s101, the terminal identifies the face contour of the target person head image.
The human head image at least comprises image information of eyes, a nose, a mouth, cheeks, a forehead, hair, ears and the like.
In a specific implementation, the terminal may identify the face contour of the head image according to a pre-stored, preset, or real-time-received face recognition policy, where the face recognition policy may be, for example, one based on LGBP (Local Gabor Binary Patterns) or one based on AdaBoost, which is not limited herein.
S102, the terminal determines the hair contour of the target person head image according to the face contour.
The facial contour comprises a coordinate value of a first human eye central point, a coordinate value of a second human eye central point, a coordinate value of a mouth central point, a coordinate value of a first zygomatic bone characteristic point and a coordinate value of a second zygomatic bone characteristic point.
S103, the terminal extracts image information in the target human head image according to the face contour and the hair contour, and determines a second image comprising the human head contour.
Wherein the hair contour comprises a set of feature points of an edge of a hair region.
It can be seen that, in the embodiment of the present application, the terminal first identifies the face contour of the target person head image, then, the terminal determines the hair contour of the target person head image according to the identified face contour, and finally, the terminal extracts image information in the target person head image according to the face contour and the hair contour to determine the second image including the person head contour. Because the hair contour can be accurately determined based on the face contour, the head contour in the second image can accurately comprise the determined hair contour, the situation that the integrity of the head contour is reduced due to the fact that the hair contour cannot be accurately identified is avoided, and the accuracy and the integrity of the head contour identification are improved.
In one possible example, the facial contour includes coordinate values of a first human eye center point, a second human eye center point, a mouth center point, a first zygomatic bone feature point, and a second zygomatic bone feature point; the determining the hair contour of the target person head image according to the face contour comprises: determining the coordinate value of the contour characteristic point of the hair contour according to the coordinate value of the center point of the first human eye, the coordinate value of the center point of the second human eye and the coordinate value of the center point of the mouth; determining the coordinate value of a first hair characteristic point and the coordinate value of a second hair characteristic point of a hair area of the target human head image according to the coordinate value of the contour characteristic point, the coordinate value of the first cheekbone characteristic point and the coordinate value of the second cheekbone characteristic point; and determining the hair contour according to the first hair characteristic point and the second hair characteristic point.
The first human eye and the second human eye are respectively the left eye and the right eye of the human face. The contour feature points of the hair contour are feature points of the lower boundary of the hair contour.
Therefore, in this example, the terminal can accurately determine the coordinate values of the first hair feature point and the second hair feature point in the hair region, so that the terminal can accurately identify the hair region according to the first hair feature point and the second hair feature point, and finally determine the complete hair contour, which is beneficial to improving the accuracy and the integrity of the terminal for determining the hair contour of the target person image.
In one possible example, the determining the coordinate values of the contour feature points of the hair contour according to the coordinate values of the first eye center point, the second eye center point and the mouth center point includes: determining the coordinate value of a first reference center point between the first human eye center point and the second human eye center point according to the coordinate value of the first human eye center point and the coordinate value of the second human eye center point; determining a first distance between the first reference center point and the mouth center point according to the coordinate value of the first reference center point and the coordinate value of the mouth center point; and determining the coordinate value of the contour feature point of the hair contour according to the first distance, the coordinate value of the first reference center point and a preset relationship between the first distance and a second distance, wherein the second distance is the distance between the first reference center point and the contour feature point.
The first reference center point is the midpoint between the first human eye center point and the second human eye center point. The preset relationship between the first distance and the second distance is that the second distance is a preset proportion of the first distance; the preset proportional value may be an empirical value, such as 1/2 or 3/5, which is not limited herein.
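The computation in this example can be sketched in plain Python. The function name, the ratio value of 0.6, and the sample coordinates are illustrative assumptions, not values fixed by the disclosure:

```python
# Sketch: determining the contour feature point (the point above the face,
# on the lower boundary of the hair contour) from the two eye center points
# and the mouth center point. The preset ratio `alpha` (second distance =
# alpha * first distance) is an empirical value; 0.6 is only illustrative.

def contour_feature_point(eye1, eye2, mouth, alpha=0.6):
    # First reference center point H: midpoint between the two eye centers.
    hx = (eye1[0] + eye2[0]) / 2.0
    hy = (eye1[1] + eye2[1]) / 2.0
    # First distance a: distance from H to the mouth center point.
    a = ((mouth[0] - hx) ** 2 + (mouth[1] - hy) ** 2) ** 0.5
    # The contour feature point lies on the vertical line through H, on the
    # opposite side from the mouth, at the preset second distance alpha * a.
    # (Image coordinates: y grows downward, so "above" means smaller y.)
    return (hx, hy - alpha * a)

j = contour_feature_point((80, 100), (120, 100), (100, 150))
# H = (100, 100), a = 50, so j is approximately (100.0, 70.0)
```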
Therefore, in this example, the terminal can accurately calculate the coordinate values of the contour feature points of the hair contour according to the preset proportional value conforming to the conventional human head contour feature, the contour feature points can be further used for determining the first hair feature points and the second hair feature points in the hair region, the first hair feature points and the second hair feature points are used for determining the hair region, and the hair region is used for finally determining the hair contour, so that the accuracy of the terminal in recognizing the hair contour in the target human head image is improved.
In one possible example, the determining the coordinate values of the first hair feature point and the second hair feature point of the hair region of the target human head image according to the coordinate values of the contour feature point, the coordinate values of the first zygomatic bone feature point and the coordinate values of the second zygomatic bone feature point includes: determining a third distance between the coordinate values of the first zygomatic bone feature point and the coordinate values of the second zygomatic bone feature point; determining coordinate values of a second reference center point between the first reference center point and the contour feature points; and determining the coordinate values of the first hair characteristic point and the second hair characteristic point of the hair area of the target person head image according to the third distance and the coordinate value of the second reference center point.
The first hair feature point and the second hair feature point are respectively a feature point in the left hair region and a feature point in the right hair region of the human head contour. The first zygomatic bone feature point and the second zygomatic bone feature point are respectively the left and right zygomatic bone feature points in the face contour. The second reference center point refers to the midpoint of the line segment between the first reference center point and the contour feature point.
It can be seen that, in this example, according to the conventional feature relationship among the feature points of the human head contour, when the first zygomatic bone feature point and the second zygomatic bone feature point are moved up to the horizontal line on which the second reference center point is located, the point corresponding to the first zygomatic bone feature point is the first hair feature point located in the hair region, and the point corresponding to the second zygomatic bone feature point is the second hair feature point located in the hair region, so that the terminal can accurately identify the first hair feature point and the second hair feature point according to this feature relationship.
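This step can likewise be sketched in Python. The helper name and coordinates are illustrative; `h` and `j` stand for the first reference center point and the contour feature point from the previous step:

```python
# Sketch: determining the first and second hair feature points from the two
# zygomatic bone feature points and the second reference center point K.

def hair_feature_points(zyg1, zyg2, h, j):
    # Third distance b: distance between the two zygomatic bone feature points.
    b = ((zyg2[0] - zyg1[0]) ** 2 + (zyg2[1] - zyg1[1]) ** 2) ** 0.5
    # Second reference center point K: midpoint between H and the contour
    # feature point J.
    kx = (h[0] + j[0]) / 2.0
    ky = (h[1] + j[1]) / 2.0
    # Translate the distance b onto the horizontal line through K: the hair
    # feature points sit b/2 to either side of K.
    return (kx - b / 2.0, ky), (kx + b / 2.0, ky)

f, g = hair_feature_points((70, 120), (130, 120), (100, 100), (100, 70))
# b = 60, K = (100, 85), so f = (70.0, 85.0) and g = (130.0, 85.0)
```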
In one possible example, the extracting image information in the target head image according to the face contour and the hair contour, and determining a second image including a head contour, includes: determining an image information extraction area according to the face contour and the hair contour; and extracting image information in the image information extraction area in the target human head image to obtain a second image comprising the human head outline.
In this example, determining the range of the face contour and the range of the hair contour within the human head contour makes it possible to determine the extraction area of the human head image and to perform complete image extraction according to that area, which helps improve the accuracy and integrity of the image extracted by the terminal.
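A minimal sketch of this extraction step follows. The disclosure does not fix the exact shape of the extraction area, so approximating it by the axis-aligned bounding box of all contour points, as well as the helper names, is an assumption made only for illustration:

```python
# Sketch: the face contour and hair contour together bound an extraction
# area; pixels inside it form the second image. Here the area is
# approximated by the bounding box of all contour feature points.

def extraction_bbox(face_points, hair_points):
    pts = list(face_points) + list(hair_points)
    xs = [p[0] for p in pts]
    ys = [p[1] for p in pts]
    return (min(xs), min(ys), max(xs), max(ys))  # (left, top, right, bottom)

def extract(image, bbox):
    # `image` is a list of rows (lists of pixel values); slicing the rows
    # and columns inside the bounding box yields the second image.
    left, top, right, bottom = bbox
    return [row[left:right + 1] for row in image[top:bottom + 1]]

box = extraction_bbox([(3, 4), (6, 8)], [(2, 1), (7, 5)])
# box = (2, 1, 7, 8)
```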
Referring to fig. 2, fig. 2 is a schematic flowchart of an image extraction method provided in the embodiment of the present application and applied to a terminal, consistent with the embodiment shown in fig. 1. As shown in the figure, the image extraction method includes:
s201, the terminal identifies the face contour of the target person head image.
S202, the terminal determines the coordinate value of a first reference center point between a first human eye center point and a second human eye center point according to the coordinate value of the first human eye center point and the coordinate value of the second human eye center point.
S203, the terminal determines a first distance between the first reference center point and the mouth center point according to the coordinate value of the first reference center point and the coordinate value of the mouth center point.
S204, the terminal determines the coordinate value of the contour feature point of the hair contour according to the first distance, the coordinate value of the first reference center point and the relation between the preset first distance and a second distance, wherein the second distance is the distance between the first reference center point and the contour feature point.
S205, the terminal determines a third distance between the coordinate value of the first zygomatic bone feature point and the coordinate value of the second zygomatic bone feature point.
S206, the terminal determines the coordinate value of a second reference center point between the first reference center point and the contour feature point.
And S207, the terminal determines the coordinate values of the first hair characteristic point and the second hair characteristic point of the hair area of the target person head image according to the third distance and the coordinate values of the second reference center point.
S208, the terminal determines the hair contour according to the first hair characteristic point and the second hair characteristic point.
S209, the terminal determines an image information extraction area according to the face contour and the hair contour.
S210, the terminal extracts the image information in the image information extraction area in the target human head image to obtain a second image comprising the human head outline.
It can be seen that, in the embodiment of the present application, the terminal first identifies the face contour of the target person head image, then, the terminal determines the hair contour of the target person head image according to the identified face contour, and finally, the terminal extracts image information in the target person head image according to the face contour and the hair contour to determine the second image including the person head contour. Because the hair contour can be accurately determined based on the face contour, the head contour in the second image can accurately comprise the determined hair contour, the situation that the integrity of the head contour is reduced due to the fact that the hair contour cannot be accurately identified is avoided, and the accuracy and the integrity of the head contour identification are improved.
The embodiments of the present application are further described below with reference to specific application scenarios.
As shown in FIG. 3, FIG. 3 is a contour diagram of a human head image. In the figure, the center point of the mouth is C, the center points of the two eyes are A and B, the feature points of the left and right cheekbones are D and E, and the center point of the distance between the two cheekbones is I. The point H is the midpoint between point A and point B, and the distance between point C and point H is a. From the preset relationship, the distance a is α times the distance between point H and point J, which gives point J; J and H lie on the same vertical line. Point D and point E lie on the same horizontal line, and the distance between them is b. The midpoint between point H and point J is K. Translating the distance b onto the horizontal line through K gives point F and point G. Taking point F, point G and point J as the range, the hair contour is obtained, and finally the required second image is obtained by extraction according to the face contour and the hair contour.
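The FIG. 3 construction can be sketched end to end in Python. The function name, the ratio value, and all coordinates are illustrative assumptions; α is taken here as the ratio of the H-to-J distance to a, matching the earlier "second distance is a preset proportion of the first distance" formulation:

```python
# End-to-end sketch of the FIG. 3 construction: given the eye center points
# A and B, the mouth center point C, and the zygomatic bone feature points
# D and E, derive the points H, J, K, F and G described in the text.

def head_contour_points(a_pt, b_pt, c_pt, d_pt, e_pt, alpha=0.6):
    # H: midpoint of A and B.
    h = ((a_pt[0] + b_pt[0]) / 2.0, (a_pt[1] + b_pt[1]) / 2.0)
    # a: distance from C to H; J lies alpha * a above H on the same
    # vertical line (image y grows downward).
    a = ((c_pt[0] - h[0]) ** 2 + (c_pt[1] - h[1]) ** 2) ** 0.5
    j = (h[0], h[1] - alpha * a)
    # b: distance between D and E; K is the midpoint of H and J.
    b = ((e_pt[0] - d_pt[0]) ** 2 + (e_pt[1] - d_pt[1]) ** 2) ** 0.5
    k = ((h[0] + j[0]) / 2.0, (h[1] + j[1]) / 2.0)
    # F and G: the distance b translated onto the horizontal line through K.
    f = (k[0] - b / 2.0, k[1])
    g = (k[0] + b / 2.0, k[1])
    return {"H": h, "J": j, "K": k, "F": f, "G": g}

pts = head_contour_points((80, 100), (120, 100), (100, 150), (70, 120), (130, 120))
```

With these illustrative coordinates, H is (100, 100), J is roughly (100, 70), K roughly (100, 85), and F and G roughly (70, 85) and (130, 85).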
The following is an embodiment of the apparatus of the present invention, which performs the method described in the method embodiments of the present invention. As shown in fig. 4, the terminal may include an identification module 401, a determination module 402 and an extraction module 403, wherein:
The identification module is used for identifying the face contour of a target person head image;
the determining module is used for determining the hair contour of the target person head image according to the face contour;
the extracting module is used for extracting image information in the target human head image according to the face contour and the hair contour and determining a second image comprising the human head contour.
It can be seen that, in the embodiment of the application, the terminal identifies the face contour of the target person head image; determines the hair contour of the target person head image according to the face contour; and extracts image information from the target person head image according to the face contour and the hair contour, determining a second image including the person head contour. Because the hair contour can be accurately determined based on the face contour, the head contour in the second image accurately includes the determined hair contour, which avoids the loss of head-contour integrity caused by a failure to accurately identify the hair contour, and improves the accuracy and integrity of head contour identification.
In one possible example, the instructions of the determination module are specifically configured to perform the following operations: determining the coordinate value of the contour feature point of the hair contour according to the coordinate value of the first human eye center point, the coordinate value of the second human eye center point and the coordinate value of the mouth center point; determining the coordinate value of the first hair feature point and the coordinate value of the second hair feature point of the hair area of the target person head image according to the coordinate value of the contour feature point, the coordinate value of the first zygomatic bone feature point and the coordinate value of the second zygomatic bone feature point; and determining the hair contour according to the first hair feature point and the second hair feature point.
In one possible example, the instructions of the determination module are specifically configured to perform the following operations: determining the coordinate value of a first reference center point between the first human eye center point and the second human eye center point according to the coordinate value of the first human eye center point and the coordinate value of the second human eye center point; determining a first distance between the first reference center point and the mouth center point according to the coordinate value of the first reference center point and the coordinate value of the mouth center point; and determining the coordinate value of the contour feature point of the hair contour according to the first distance, the coordinate value of the first reference center point and a preset relationship between the first distance and a second distance, wherein the second distance is the distance between the first reference center point and the contour feature point.
In one possible example, the instructions of the determination module are specifically configured to perform the following operations: determining a third distance between the coordinate value of the first zygomatic bone feature point and the coordinate value of the second zygomatic bone feature point; determining the coordinate value of a second reference center point between the first reference center point and the contour feature point; and determining the coordinate value of the first hair feature point and the coordinate value of the second hair feature point of the hair area of the target person head image according to the third distance and the coordinate value of the second reference center point.
In one possible example, the instructions of the extraction module are specifically configured to perform the following operations: determining an image information extraction area according to the face contour and the hair contour; and extracting the image information in the image information extraction area of the target person head image to obtain a second image including the person head contour.
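The extraction operations above (determine an extraction area from the combined contours, then keep only the image information inside it) can be sketched with a plain point-in-polygon mask. The ray-casting helper, the list-of-rows grayscale image format and the toy contour are all illustrative assumptions; a real implementation would typically rasterize the contour with an image library instead.

```python
# Sketch of the extraction step: build a polygon mask from the combined
# face + hair contour points and keep only the pixels inside it.

def point_in_polygon(x, y, polygon):
    """Ray-casting test: is the point (x, y) inside the closed polygon?"""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Edge crosses the horizontal ray through y?
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def extract_region(image, contour):
    """Return a copy of `image` (a list of rows of pixel values) with
    every pixel outside `contour` zeroed out."""
    return [
        [pix if point_in_polygon(x, y, contour) else 0
         for x, pix in enumerate(row)]
        for y, row in enumerate(image)
    ]

# Toy 5x5 grayscale image and a hypothetical square "head contour"
img = [[9] * 5 for _ in range(5)]
head_contour = [(1, 1), (3, 1), (3, 3), (1, 3)]
second_image = extract_region(img, head_contour)
```

In practice the contour would be the closed polyline through the face-contour points and the hair points F, G and J, and the masked result is the second image including the person head contour.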
In accordance with the embodiments shown in fig. 2, fig. 3 and fig. 4, please refer to fig. 5, which is a schematic structural diagram of a terminal provided in an embodiment of the present application. The terminal runs one or more application programs and an operating system and, as shown in the figure, includes a processor, a memory, a communication interface, and one or more programs, where the one or more programs are different from the one or more application programs and are stored in the memory and configured to be executed by the processor, the programs including instructions for performing the following steps:
identifying a face contour of a target person head image;
determining a hair contour of the target person head image according to the face contour;
and extracting image information in the target human head image according to the face contour and the hair contour, and determining a second image comprising the human head contour.
It can be seen that, in the embodiment of the present application, the terminal first identifies the face contour of the target person head image; the terminal then determines the hair contour of the target person head image according to the identified face contour; and finally, the terminal extracts image information from the target person head image according to the face contour and the hair contour to determine the second image including the person head contour. Because the hair contour can be accurately determined based on the face contour, the head contour in the second image accurately includes the determined hair contour, which avoids the loss of head-contour integrity caused by a failure to accurately identify the hair contour, and improves the accuracy and integrity of head contour identification.
The scheme of the embodiments of the application has been introduced mainly from the perspective of the method-side implementation process. It is understood that, in order to realize the above functions, the terminal includes corresponding hardware structures and/or software modules for performing the respective functions. Those of skill in the art will readily appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as hardware or as a combination of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends upon the particular application and the design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
Referring to fig. 6, fig. 6 is a schematic structural diagram of a smart phone 600 according to an embodiment of the present application. The smart phone 600 includes a housing 610, a touch display screen 620, a mainboard 630, a battery 640 and a sub-board 650. The mainboard 630 is provided with a front camera 631, a processor 632, a memory 633, a power management chip 634 and the like; the sub-board 650 is provided with a vibrator 651, an integrated sound cavity 652, a VOOC flash-charging interface 653 and a fingerprint identification module 654.
The terminal first identifies the face contour of a target person head image, determines the hair contour of the target person head image according to the face contour, and then extracts image information from the target person head image according to the face contour and the hair contour to determine a second image including the person head contour.
The processor 632 is the control center of the smart phone: it connects the various parts of the whole smart phone through various interfaces and lines, and executes the various functions of the smart phone and processes its data by running or executing the software programs and/or modules stored in the memory 633 and calling the data stored in the memory 633, thereby monitoring the smart phone as a whole. Optionally, the processor 632 may include one or more processing units; preferably, the processor 632 may integrate an application processor, which mainly handles the operating system, user interfaces and application programs, and a modem processor, which mainly handles wireless communication. It is to be appreciated that the modem processor may not be integrated into the processor 632. The processor 632 may be, for example, a Central Processing Unit (CPU), a general-purpose processor, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof, and may implement or perform the various illustrative logical blocks, modules and circuits described in connection with the disclosure. The processor may also be a combination of computing devices, e.g., a combination of a DSP and a microprocessor, or a plurality of microprocessors.
The memory 633 may be used to store software programs and modules, and the processor 632 executes various functional applications and data processing of the smart phone by running the software programs and modules stored in the memory 633. The memory 633 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function, and the like, and the storage data area may store data created according to the use of the smart phone, and the like. Further, the memory 633 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. The memory 633 may be, for example, a Random Access Memory (RAM), a flash memory, a Read-Only Memory (ROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a register, a hard disk, a removable hard disk, a Compact Disc Read-Only Memory (CD-ROM), or any other form of storage medium known in the art.
Embodiments of the present application also provide a computer storage medium, where the computer storage medium stores a computer program for electronic data exchange, and the computer program enables a computer to execute part or all of the steps of any one of the methods described in the method embodiments, and the computer includes a mobile terminal.
Embodiments of the present application also provide a computer program product comprising a non-transitory computer readable storage medium storing a computer program operable to cause a computer to perform some or all of the steps of any of the methods as recited in the method embodiments. The computer program product may be a software installation package, said computer comprising a mobile terminal.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present application is not limited by the order of acts described, as some steps may occur in other orders or concurrently depending on the application. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required in this application.
In the embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative; for instance, the division of the units is only a division of logical functions, and there may be other divisions in actual implementation: a plurality of units or components may be combined or integrated into another system, or some features may be omitted or not implemented. In addition, the shown or discussed mutual coupling, direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable memory. Based on such understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product which is stored in a memory and includes several instructions for causing a computer device (which may be a personal computer, a server or a network device) to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned memory includes various media capable of storing program code, such as a USB flash drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk or an optical disc.
Those of ordinary skill in the art will appreciate that all or part of the steps of the various methods of the embodiments described herein may be performed by associated hardware as instructed by a program, and the program may be stored in a computer-readable memory, which may include a flash memory disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disc, and the like.
The foregoing detailed description of the embodiments of the present application has been presented to illustrate the principles and implementations of the present application, and the above description of the embodiments is only provided to help understand the method and the core concept of the present application; meanwhile, for a person skilled in the art, according to the idea of the present application, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present application.

Claims (6)

1. A method of image extraction, the method comprising:
identifying a face contour of a target person head image;
determining a hair contour of the target person head image according to the face contour;
extracting image information in the target human head image according to the face contour and the hair contour, and determining a second image comprising a human head contour;
the facial contour comprises a coordinate value of a first human eye central point, a coordinate value of a second human eye central point, a coordinate value of a mouth central point, a coordinate value of a first zygomatic bone characteristic point and a coordinate value of a second zygomatic bone characteristic point; the determining the hair contour of the target person head image according to the face contour comprises:
determining the coordinate value of the contour characteristic point of the hair contour according to the coordinate value of the center point of the first human eye, the coordinate value of the center point of the second human eye and the coordinate value of the center point of the mouth;
determining the coordinate value of a first hair characteristic point and the coordinate value of a second hair characteristic point of a hair area of the target human head image according to the coordinate value of the contour characteristic point, the coordinate value of the first cheekbone characteristic point and the coordinate value of the second cheekbone characteristic point;
determining the hair contour according to the first hair feature point and the second hair feature point;
the determining the coordinate value of the first hair characteristic point and the coordinate value of the second hair characteristic point of the hair area of the target human head image according to the coordinate value of the contour characteristic point, the coordinate value of the first zygomatic bone characteristic point and the coordinate value of the second zygomatic bone characteristic point comprises the following steps:
determining a third distance between the coordinate values of the first zygomatic bone feature point and the coordinate values of the second zygomatic bone feature point;
determining a coordinate value of a first reference center point between a first human eye center point and a second human eye center point according to the coordinate value of the first human eye center point and the coordinate value of the second human eye center point;
determining coordinate values of a second reference center point between the first reference center point and the contour feature points;
and determining the coordinate values of the first hair characteristic point and the second hair characteristic point of the hair area of the target person head image according to the third distance and the coordinate value of the second reference center point.
2. The method according to claim 1, wherein determining the coordinate values of the contour feature points of the hair contour based on the coordinate values of the first eye center point, the second eye center point and the mouth center point comprises:
determining a first distance between the first reference center point and the mouth center point according to the coordinate value of the first reference center point and the coordinate value of the mouth center point;
and determining the coordinate value of the profile feature point of the hair profile according to the first distance, the coordinate value of the first reference center point and a preset relationship between the first distance and a second distance, wherein the second distance is the distance between the first reference center point and the profile feature point.
3. The method according to claim 1 or 2, wherein the extracting image information in the target head image according to the face contour and the hair contour, and determining a second image including a head contour, comprises:
determining an image information extraction area according to the face contour and the hair contour;
and extracting image information in the image information extraction area in the target human head image to obtain a second image comprising the human head outline.
4. An image extraction device characterized by comprising:
the recognition module is used for recognizing the facial contour of the target human head image;
the determining module is used for determining the hair contour of the target person head image according to the face contour;
the extracting module is used for extracting image information in the target human head image according to the face contour and the hair contour and determining a second image comprising a human head contour;
the determining module is specifically configured to:
determining the coordinate value of the contour characteristic point of the hair contour according to the coordinate value of the center point of the first human eye, the coordinate value of the center point of the second human eye and the coordinate value of the center point of the mouth;
determining the coordinate value of a first hair characteristic point and the coordinate value of a second hair characteristic point of a hair area of the target human head image according to the coordinate value of the contour characteristic point, the coordinate value of the first cheekbone characteristic point and the coordinate value of the second cheekbone characteristic point;
determining the hair contour according to the first hair feature point and the second hair feature point;
the determining module is specifically configured to:
determining a third distance between the coordinate values of the first zygomatic bone feature point and the coordinate values of the second zygomatic bone feature point;
determining a coordinate value of a first reference center point between a first human eye center point and a second human eye center point according to the coordinate value of the first human eye center point and the coordinate value of the second human eye center point;
determining coordinate values of a second reference center point between the first reference center point and the contour feature points;
and determining the coordinate values of the first hair characteristic point and the second hair characteristic point of the hair area of the target person head image according to the third distance and the coordinate value of the second reference center point.
5. The apparatus of claim 4, wherein the determining module is specifically configured to:
determining a first distance between the first reference center point and the mouth center point according to the coordinate value of the first reference center point and the coordinate value of the mouth center point;
and determining the coordinate value of the profile feature point of the hair profile according to the first distance, the coordinate value of the first reference center point and a preset relationship between the first distance and a second distance, wherein the second distance is the distance between the first reference center point and the profile feature point.
6. The apparatus according to claim 4 or 5, wherein the extraction module is specifically configured to:
determining an image information extraction area according to the face contour and the hair contour;
and extracting image information in the image information extraction area in the target human head image to obtain a second image comprising the human head outline.
CN201710998445.7A 2017-10-20 2017-10-20 Image extraction method and device Active CN108009470B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710998445.7A CN108009470B (en) 2017-10-20 2017-10-20 Image extraction method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710998445.7A CN108009470B (en) 2017-10-20 2017-10-20 Image extraction method and device

Publications (2)

Publication Number Publication Date
CN108009470A CN108009470A (en) 2018-05-08
CN108009470B true CN108009470B (en) 2020-06-16

Family

ID=62051789

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710998445.7A Active CN108009470B (en) 2017-10-20 2017-10-20 Image extraction method and device

Country Status (1)

Country Link
CN (1) CN108009470B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110363107A (en) * 2019-06-26 2019-10-22 成都品果科技有限公司 Face forehead point Quick Extended method, apparatus, storage medium and processor
CN110458855B (en) * 2019-07-08 2022-04-05 安徽淘云科技股份有限公司 Image extraction method and related product
CN111080754B (en) * 2019-12-12 2023-08-11 广东智媒云图科技股份有限公司 Character animation production method and device for connecting characteristic points of head and limbs
CN111563898B (en) * 2020-04-29 2023-05-16 万翼科技有限公司 Image segmentation method, electronic equipment and related products
CN113255561B (en) * 2021-06-10 2021-11-02 平安科技(深圳)有限公司 Hair information identification method, device, equipment and storage medium

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1453002A2 (en) * 2003-02-28 2004-09-01 Eastman Kodak Company Enhancing portrait images that are processed in a batch mode
JP4076777B2 (en) * 2002-03-06 2008-04-16 三菱電機株式会社 Face area extraction device
CN101404910A (en) * 2006-03-23 2009-04-08 花王株式会社 Hair style simulation image creating method
CN102214361A (en) * 2010-04-09 2011-10-12 索尼公司 Information processing device, method, and program
CN102663820A (en) * 2012-04-28 2012-09-12 清华大学 Three-dimensional head model reconstruction method
CN103679767A (en) * 2012-08-30 2014-03-26 卡西欧计算机株式会社 Image generation apparatus and image generation method
CN106446781A (en) * 2016-08-29 2017-02-22 厦门美图之家科技有限公司 Face image processing method and face image processing device
CN107316333A (en) * 2017-07-07 2017-11-03 华南理工大学 It is a kind of to automatically generate the method for day overflowing portrait


Also Published As

Publication number Publication date
CN108009470A (en) 2018-05-08

Similar Documents

Publication Publication Date Title
CN108009470B (en) Image extraction method and device
US10990803B2 (en) Key point positioning method, terminal, and computer storage medium
US10650259B2 (en) Human face recognition method and recognition system based on lip movement information and voice information
US10599914B2 (en) Method and apparatus for human face image processing
KR102077198B1 (en) Facial verification method and electronic device
EP3136274B1 (en) Method and device for distributing user authorities
CN107451453B (en) Unlocking control method and related product
CN107749062B (en) Image processing method and device
EP3273388A1 (en) Image information recognition processing method and device, and computer storage medium
CN108596079B (en) Gesture recognition method and device and electronic equipment
JP2017527894A (en) Managing user identification registration using handwriting
JP2019527868A (en) Biological feature identification apparatus and method, and biological feature template registration method
CN107291238B (en) Data processing method and device
WO2021012513A1 (en) Gesture operation method and apparatus, and computer device
CN109740511B (en) Facial expression matching method, device, equipment and storage medium
CN109711287B (en) Face acquisition method and related product
US20200204365A1 (en) Apparatus, system and method for application-specific biometric processing in a computer system
CN107563338B (en) Face detection method and related product
CN104346547A (en) Intelligent identity identification system
CN109992681B (en) Data fusion method and related product
EP3748980A1 (en) Interactive method and apparatus based on user action information, and electronic device
CN112561457A (en) Talent recruitment method based on face recognition, terminal server and storage medium
CN107562199B (en) Page object setting method and device, electronic equipment and storage medium
CN108288023B (en) Face recognition method and device
CN115588225A (en) Safety protection method, device and medium for identifying user based on intelligent camera

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant