CN108376255A - Image processing method, apparatus and storage medium - Google Patents
Image processing method, apparatus and storage medium
- Publication number
- CN108376255A CN108376255A CN201810275707.1A CN201810275707A CN108376255A CN 108376255 A CN108376255 A CN 108376255A CN 201810275707 A CN201810275707 A CN 201810275707A CN 108376255 A CN108376255 A CN 108376255A
- Authority
- CN
- China
- Prior art keywords
- image
- portrait
- eyes
- vertex
- distance
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
- G06V40/171—Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Abstract
The embodiments of the present invention disclose an image processing method, apparatus and storage medium. An embodiment obtains an image to be processed, the image including a portrait; obtains the position information of the eyes in the portrait; determines, according to the position information of the eyes, a target region that includes the upper body of the portrait; and extracts the target region from the image to obtain a target image. This scheme automatically obtains a target image containing the upper body of the portrait, without the user cropping manually or with a cropping tool, thereby avoiding the influence of subjective human factors and improving the convenience and reliability of image acquisition.
Description
Technical field
The present invention relates to the technical field of image processing, and in particular to an image processing method, apparatus and storage medium.
Background art
With the continuous popularization of terminals and the rapid development of terminal technology, the functions that a terminal can realize are increasingly rich. For example, a user can use an image processing function on the terminal to process an image according to his or her own needs.
In the prior art, when an image needs to be processed, the user opens a related image processing application on the terminal, loads the image to be processed in the application, and then manually clicks the tools provided in the application interface to process the image according to his or her own needs. For example, when an image containing only the user's upper body needs to be cut out of an image containing the user's whole body, i.e., a half-body photo is to be cropped from a full-body photo, the user loads the full-body photo in the image processing application, selects the cropping tool in the menu bar, moves the cropping tool onto the full-body photo, and, relying on his or her own visual judgment, manually shrinks, enlarges or moves the cropping region until the upper body falls within it. After the cropping region is determined, the user manually extracts and saves the image in the cropping region to obtain the half-body photo. Here, the half-body photo is an image containing the user's upper body, which may be the region from the user's head to the chest.
In the course of research on and practice of the prior art, the inventors of the present invention found that, because obtaining a half-body photo requires the user to manually shrink, enlarge or move the cropping region, either by hand or with a cropping tool, based on his or her own visual judgment, the procedure is tedious and inconvenient, and the influence of subjective human factors may cause deviations in the resulting half-body photo. For example, the size of the photo may be unsatisfactory, or the face may not be centered, which reduces the reliability of image processing.
Summary of the invention
Embodiments of the present invention provide an image processing method, apparatus and storage medium, intended to improve the convenience and reliability of image acquisition.
In order to solve the above technical problems, the embodiments of the present invention provide the following technical solutions:
An image processing method, including:
obtaining an image to be processed, the image including a portrait;
obtaining position information of the eyes in the portrait;
determining, according to the position information of the eyes, a target region that includes the upper body of the portrait; and
extracting the target region from the image to obtain a target image.
An image processing apparatus, including:
an image acquisition unit, configured to obtain an image to be processed, the image including a portrait;
a position acquisition unit, configured to obtain position information of the eyes in the portrait;
a determination unit, configured to determine, according to the position information of the eyes, a target region that includes the upper body of the portrait; and
an extraction unit, configured to extract the target region from the image to obtain a target image.
A storage medium, storing a plurality of instructions suitable to be loaded by a processor to execute the steps of the above image processing method.
An embodiment of the present invention can obtain an image to be processed that includes a portrait and obtain the position information of the eyes in the portrait; it can then determine, according to the position information of the eyes, a target region of the image that includes the upper body of the portrait, and extract the target region from the image to obtain a target image. This scheme automatically obtains a target image containing the upper body of the portrait, without the user cropping manually or with a cropping tool, thereby avoiding the influence of subjective human factors and improving the convenience and reliability of image acquisition.
Description of the drawings
To describe the technical solutions in the embodiments of the present invention more clearly, the accompanying drawings required in the description of the embodiments are briefly introduced below. Apparently, the drawings described below are only some embodiments of the present invention; those skilled in the art can obtain other drawings from these drawings without creative effort.
Fig. 1 is a scene schematic diagram of an image processing system provided by an embodiment of the present invention;
Fig. 2 is a schematic flowchart of an image processing method provided by an embodiment of the present invention;
Fig. 3 is a schematic diagram of determining a preset region according to the position information of the eyes, provided by an embodiment of the present invention;
Fig. 4 is a schematic diagram of moving and expanding the preset region, provided by an embodiment of the present invention;
Fig. 5 is a schematic diagram of obtaining a target image using the image processing method, provided by an embodiment of the present invention;
Fig. 6 is a schematic diagram of obtaining target images when the image to be processed includes multiple portraits, provided by an embodiment of the present invention;
Fig. 7 is another schematic flowchart of the image processing method provided by an embodiment of the present invention;
Fig. 8 is another schematic diagram of obtaining a target image using the image processing method, provided by an embodiment of the present invention;
Fig. 9 is a schematic structural diagram of an image processing apparatus provided by an embodiment of the present invention;
Fig. 10 is another schematic structural diagram of the image processing apparatus provided by an embodiment of the present invention;
Fig. 11 is another schematic structural diagram of the image processing apparatus provided by an embodiment of the present invention;
Fig. 12 is a schematic structural diagram of a terminal provided by an embodiment of the present invention.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention will be described clearly and completely below in conjunction with the accompanying drawings. Apparently, the described embodiments are only a part of the embodiments of the present invention, not all of them. Based on the embodiments of the present invention, all other embodiments obtained by those skilled in the art without creative effort shall fall within the protection scope of the present invention.
Embodiments of the present invention provide an image processing method, apparatus and storage medium.
Referring to Fig. 1, Fig. 1 is a scene schematic diagram of the image processing system provided by an embodiment of the present invention. The image processing system may include an image processing apparatus, which may be integrated in a terminal that has a storage unit, is equipped with a microprocessor and has computing capability, such as a tablet computer, a mobile phone, a notebook computer or a camera. The apparatus is mainly used to obtain an image to be processed, where the image to be processed may include a portrait. The image may be obtained from a server or from the terminal's local storage, or collected by a camera preset on the terminal. After obtaining the image containing the portrait, the apparatus can obtain the position information of the eyes of the portrait in the image; the position information may be the pixel coordinate positions of the centers of the eyes, the two-dimensional coordinate positions of the centers of the eyes, or the like. Then, a target region that includes the upper body of the portrait can be determined according to the position information of the eyes. For example, the distance between the eyes and the position information of the midpoint between the eyes can be obtained from the position information of the eyes, and the target region can be determined from the obtained distance and midpoint position information. After the target region is obtained, it can be extracted from the image to obtain a target image that includes the upper body of the portrait; and so on.
In addition, the image processing system may also include a server, which is mainly used to receive an image acquisition request sent by the terminal, obtain an image from a database of stored images based on the received request, and return the image to the terminal as the image to be processed. After the terminal obtains the target image, the server can also receive the target image uploaded by the terminal, store it, and send the target image back to the terminal when the terminal needs it.
It should be noted that the scene schematic diagram of the image processing system shown in Fig. 1 is only an example. The image processing system and scene described in the embodiments of the present invention are intended to explain the technical solutions of the embodiments more clearly and do not constitute a limitation on them. Those of ordinary skill in the art will appreciate that, with the evolution of image processing systems and the emergence of new business scenarios, the technical solutions provided by the embodiments of the present invention are equally applicable to similar technical problems.
Detailed descriptions are given separately below.
In this embodiment, the description is given from the perspective of the image processing apparatus, which may be integrated in a terminal that has a storage unit, is equipped with a microprocessor and has computing capability, such as a tablet computer, a mobile phone, a notebook computer or a camera.
An image processing method includes: obtaining an image to be processed, the image including a portrait; obtaining position information of the eyes in the portrait; determining, according to the position information of the eyes, a target region that includes the upper body of the portrait; and extracting the target region from the image to obtain a target image.
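As a minimal sketch of how the four steps fit together (not the patented implementation — the eye detector below is a stand-in stub, since the patent leaves the detection algorithm open, and the square region uses the diagonal-vertex construction described later):

```python
from typing import List, Tuple

Point = Tuple[float, float]

def detect_eyes(image) -> List[Tuple[Point, Point]]:
    """Stub: return (left_eye, right_eye) center coordinates per portrait.
    A real system would run a face-detection algorithm here."""
    return [((40.0, 50.0), (60.0, 50.0))]

def target_region(left: Point, right: Point) -> Tuple[float, float, float, float]:
    # Inter-eye distance and midpoint (formulas (1) and (2) below).
    d = ((left[0] - right[0]) ** 2 + (left[1] - right[1]) ** 2) ** 0.5
    px, py = (left[0] + right[0]) / 2, (left[1] + right[1]) / 2
    # Square spanned by the diagonal vertices (px - d, py - d) and (px + d, py + d).
    return (px - d, py - d, px + d, py + d)

def process(image):
    """One candidate region per detected pair of eyes."""
    return [target_region(l, r) for l, r in detect_eyes(image)]
```

With the stub above, `process` yields one square region centered between the stub's eye coordinates.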
Referring to Fig. 2, Fig. 2 is a schematic flowchart of the image processing method provided by an embodiment of the present invention. The image processing method may include the following steps.
In step S101, an image to be processed is obtained, the image including a portrait.
The image to be processed may include a portrait and may also include other objects; the specific content is not limited here. The portrait may be a complete human figure (i.e., a person's whole body) or a partially occluded figure, and the image may include one or more portraits.
The way of obtaining the image to be processed may include the following. In a first mode, the image processing apparatus can query a local memory that stores images in advance and obtain the image to be processed from the local memory. In a second mode, the image processing apparatus can send an image acquisition request to a server, so that the server, based on the received request, returns an image obtained from a database of stored images; the image processing apparatus receives the returned image as the image to be processed. In a third mode, the image processing apparatus can open a preset camera and collect the image through the camera. It can be understood that the image to be processed may also be obtained in other ways; the specific content is not limited here.
In step S102, the position information of the eyes in the portrait is obtained.
After obtaining the image to be processed, the image processing apparatus can obtain the position information of the eyes of the portrait in the image. Here, the eyes include the left eye and the right eye, and the position information of the eyes may include the pixel coordinate positions of the left eye and the right eye, or the two-dimensional coordinate positions of the left eye and the right eye.
When the image includes only one portrait, the position information of the eyes of that portrait can be obtained; when the image includes multiple portraits, the position information of the eyes of each portrait can be obtained separately.
In some embodiments, the step of obtaining the position information of the eyes in the portrait may include: establishing a two-dimensional rectangular coordinate system in the image; obtaining the pixels at the centers of the eyes in the portrait; and determining the two-dimensional coordinate positions of those pixels in the two-dimensional rectangular coordinate system to obtain the position information of the eyes.
Specifically, the image processing apparatus can establish a two-dimensional rectangular coordinate system in the image with any position as the coordinate origin. For example, a two-dimensional rectangular coordinate system xoy can be established with the center o of the image as the origin, and the maximum unit divisions of the abscissa x and ordinate y can be flexibly set according to actual needs. After the coordinate system is established, the image processing apparatus can obtain the pixels at the centers of the eyes in the portrait, i.e., the pixel at the center of the left eye and the pixel at the center of the right eye, and then determine the two-dimensional coordinate positions of these pixels in the coordinate system to obtain the position information of the eyes.
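As an illustrative sketch of the image-center coordinate system (the patent fixes neither the axis orientation nor the units, so both are assumptions here), a pixel index can be mapped to xoy coordinates like so:

```python
def to_centered(col: int, row: int, width: int, height: int) -> tuple:
    """Map a pixel (column, row) to coordinates in the xoy system whose
    origin is the image center; x assumed to grow rightward and y upward."""
    x = col - width / 2
    y = height / 2 - row   # pixel rows count downward, y counts upward
    return (x, y)
```

For a 100x100 image, the pixel at column 50, row 50 maps to the origin.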
In some embodiments, the step of obtaining the position information of the eyes in the portrait may include: establishing a pixel coordinate system in the image; obtaining the pixels at the centers of the eyes in the portrait; and determining the coordinates of those pixels in the pixel coordinate system to obtain the position information of the eyes.
Specifically, the image processing apparatus can establish a pixel coordinate system with any position in the image as the coordinate origin. For example, a pixel coordinate system u-v, in units of pixels, can be established with the upper left corner of the image as the origin, where the abscissa u and ordinate v are, respectively, the column number and row number of a pixel in the pixel array of the image. As another example, the region of the image where the face of the portrait is located can be identified, and a pixel coordinate system can be established within that region.
After the pixel coordinate system is established, the image processing apparatus can obtain the pixels at the centers of the eyes in the portrait, i.e., the pixel at the center of the left eye and the pixel at the center of the right eye, and then determine the coordinates (i.e., the pixel coordinate positions) of the left-eye pixel and the right-eye pixel in the pixel coordinate system; the coordinate corresponding to the left eye and the coordinate corresponding to the right eye together constitute the position information of the eyes.
In some embodiments, the step of obtaining the position information of the eyes in the portrait may include: obtaining the position information of the eyes in the portrait through a face detection algorithm (named FaceDetector in English). After the FaceDetector algorithm detects the image to be processed, the position information of the eyes in the portrait can be obtained. The position information of the eyes can be stored in the form of an array, which can be a two-dimensional array storing the number of eye pairs, the position information of the eyes, and so on.
It should be noted that the position information of the eyes may also be obtained in other ways; the specific content is not limited here.
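A sketch of the two-dimensional-array storage scheme described above (the exact field layout is my assumption; the patent does not specify it):

```python
# Each row describes one detected pair of eyes:
# [left_x, left_y, right_x, right_y]; the row count is the number of
# detected pairs. The values below are made-up illustration coordinates.
eye_table = [
    [40.0, 50.0, 60.0, 50.0],    # portrait 1
    [140.0, 52.0, 158.0, 52.0],  # portrait 2
]

def pair_count(table):
    """Number of detected eye pairs stored in the array."""
    return len(table)

def eyes_of(table, i):
    """Unpack row i into ((left_x, left_y), (right_x, right_y))."""
    lx, ly, rx, ry = table[i]
    return (lx, ly), (rx, ry)
```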
In step S103, a target region that includes the upper body of the portrait is determined according to the position information of the eyes.
After obtaining the position information of the eyes, the image processing apparatus can determine a target region according to it; the target region includes the upper body of the portrait. The upper body of the portrait can be the region from the head to the chest of the portrait, and can of course also be flexibly set according to actual needs; the specific content is not limited here. The shape and size of the target region can likewise be flexibly set according to actual needs; for example, the target region can be a square, rectangle, heart, circle, diamond, ellipse, hexagon, or the like. When the position information of only one pair of eyes is obtained, a target region that includes the upper body of the portrait corresponding to that pair of eyes can be determined; when the position information of multiple pairs of eyes is obtained, a target region including the upper body of the portrait corresponding to each pair of eyes can be determined, so as to obtain multiple target regions.
In some embodiments, the step of determining, according to the position information of the eyes, a target region that includes the upper body of the portrait may include: obtaining the distance between the eyes and the position information of the midpoint between the eyes according to the position information of the eyes; and determining the target region that includes the upper body of the portrait according to the distance and the midpoint position information.
Specifically, the image processing apparatus can calculate the distance between the left eye and the right eye according to the position information of the left eye and the position information of the right eye, and calculate the position information of the midpoint between the left eye and the right eye from the same position information.
For example, assuming the position information of the left eye is A (a1, a2) and the position information of the right eye is B (b1, b2), the distance d between the left eye and the right eye can be calculated with the following formula (1):
d = √((a1 − b1)² + (a2 − b2)²)  (1)
The midpoint position information p between the left eye and the right eye can be calculated with the following formula (2):
p = ((a1 + b1) / 2, (a2 + b2) / 2)  (2)
After obtaining the distance d between the left eye and the right eye and the midpoint position information p between them, the image processing apparatus can determine the target region that includes the upper body of the portrait according to d and p.
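Formulas (1) and (2) are the standard Euclidean distance and midpoint; a direct sketch:

```python
import math

def eye_distance_and_midpoint(a, b):
    """a = (a1, a2) is the left-eye position, b = (b1, b2) the right-eye
    position; returns (d, p) per formulas (1) and (2)."""
    d = math.hypot(a[0] - b[0], a[1] - b[1])
    p = ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)
    return d, p
```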
In some embodiments, the step of determining the target region that includes the upper body according to the distance and the midpoint position information may include: determining a preset region according to the distance and the midpoint position information; moving the preset region a preset distance toward the legs of the portrait; and, with the center of the moved preset region as a fixed point, expanding the side length of the moved preset region to a preset multiple of the original, so as to obtain the target region that includes the upper body of the portrait.
Specifically, the image processing apparatus can determine the preset region according to the obtained distance between the left eye and the right eye and the midpoint position information between them. The shape and size of the preset region can be flexibly set according to actual needs; for example, the preset region can be a square, rectangle, heart, circle, diamond, ellipse, hexagon, or the like.
The way of determining the preset region may include the following. In a first mode, two vertices are determined according to the distance between the left eye and the right eye and the midpoint position information between them, and a region such as a quadrilateral or circle is then drawn from the two obtained vertices to obtain the preset region. In a second mode, a circle is drawn with the midpoint between the left eye and the right eye as the center and the distance between the left eye and the right eye as the diameter, and a quadrilateral region is then drawn with the obtained circle as its inscribed circle to obtain the preset region. It should be noted that the preset region may also be determined in other ways; the specific content is not limited here.
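For the second mode, the square whose inscribed circle has center p and diameter d extends d/2 on each side of p; a sketch (the (left, top, right, bottom) tuple convention is an assumption):

```python
def square_around_inscribed_circle(p, d):
    """Square (left, top, right, bottom) whose inscribed circle is
    centered at midpoint p with diameter d (the inter-eye distance)."""
    px, py = p
    r = d / 2
    return (px - r, py - r, px + r, py + r)
```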
After determining the preset region, the image processing apparatus can move the preset region a preset distance toward the legs of the portrait, so that the preset region moves to the center of the face. The preset distance can be flexibly set according to actual needs; for example, as shown in Fig. 4, the preset region can be moved toward the legs of the portrait by 10% of the length of the entire portrait.
Then, with the center of the moved preset region as a fixed point — for example, the midpoint (CX, CY) of the preset region can be calculated — the side length of the moved preset region is expanded to a preset multiple of the original, obtaining a target region, whose midpoint is also (CX, CY), that includes the upper body of the portrait. The preset multiple can be flexibly set according to actual needs. For example, as shown in Fig. 4, from Leonardo da Vinci's ideal human proportions the ratio of the upper body to the face is roughly 3.6:1, so the side length of the preset region can be expanded about the midpoint (CX, CY) to 3.6 times the original to obtain the target region.
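The move-then-expand step can be sketched as follows, assuming a square (left, top, right, bottom) region, pixel coordinates with y growing downward, and the example values above (10% of the portrait height, expansion factor 3.6):

```python
def move_and_expand(region, portrait_height, shift_ratio=0.10, factor=3.6):
    left, top, right, bottom = region
    # Move toward the legs, i.e. downward (increasing y in pixel coords).
    dy = portrait_height * shift_ratio
    top, bottom = top + dy, bottom + dy
    # Expand the side length about the fixed center (CX, CY).
    cx, cy = (left + right) / 2, (top + bottom) / 2
    half = (right - left) / 2 * factor
    return (cx - half, cy - half, cx + half, cy + half)
```

For a 20-pixel square at (40, 40, 60, 60) in a 100-pixel-tall portrait, the region first shifts down by 10 pixels, then grows to a 72-pixel square about its new center (50, 60).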
In some embodiments, the step of determining the preset region according to the distance and the midpoint position information includes: obtaining the coordinate position of a first vertex according to the distance and the midpoint position information; obtaining the coordinate position of a second vertex according to the distance and the midpoint position information; and determining the preset region according to the coordinate positions of the first vertex and the second vertex, the second vertex and the first vertex lying on the same diagonal of the preset region.
Specifically, the image processing apparatus can obtain the coordinate position L (left, top) of the first vertex according to the distance between the left eye and the right eye and the midpoint position information between them.
Optionally, the step of obtaining the coordinate position of the first vertex according to the distance and the midpoint position information may include: obtaining the difference between the X coordinate value in the midpoint position information and the distance to get the X coordinate value of the first vertex; obtaining the difference between the Y coordinate value in the midpoint position information and the distance to get the Y coordinate value of the first vertex; and determining the coordinate position of the first vertex from its X and Y coordinate values.
For example, as shown in Fig. 3, the X coordinate value of the first vertex is left = (X coordinate value in the midpoint position information between the left eye and the right eye) − (distance between the left eye and the right eye), and the Y coordinate value of the first vertex is top = (Y coordinate value in the midpoint position information) − (distance between the left eye and the right eye), which gives the coordinate position L (left, top) of the first vertex.
Likewise, the coordinate position R (right, bottom) of the second vertex can be obtained according to the distance between the left eye and the right eye and the midpoint position information between them.
Optionally, the step of obtaining the coordinate position of the second vertex according to the distance and the midpoint position information may include: obtaining the sum of the X coordinate value in the midpoint position information and the distance to get the X coordinate value of the second vertex; obtaining the sum of the Y coordinate value in the midpoint position information and the distance to get the Y coordinate value of the second vertex; and determining the coordinate position of the second vertex from its X and Y coordinate values.
For example, as shown in Fig. 3, the X coordinate value of the second vertex is right = (X coordinate value in the midpoint position information between the left eye and the right eye) + (distance between the left eye and the right eye), and the Y coordinate value of the second vertex is bottom = (Y coordinate value in the midpoint position information) + (distance between the left eye and the right eye), which gives the coordinate position R (right, bottom) of the second vertex.
After the coordinate positions of the first vertex and the second vertex are obtained, the second vertex and the first vertex lie on the same diagonal, i.e., on the same diagonal of the preset region. The preset region can then be determined according to the coordinate positions of the first vertex and the second vertex; for example, as shown in Fig. 3, a quadrilateral is drawn from the second vertex and the first vertex.
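The two diagonal vertices above follow directly from d and p (coordinates as in Fig. 3, with y growing downward in the pixel convention):

```python
def preset_region_from_vertices(p, d):
    """First vertex L(left, top) = (px - d, py - d); second vertex
    R(right, bottom) = (px + d, py + d); together they span a square."""
    px, py = p
    left, top = px - d, py - d
    right, bottom = px + d, py + d
    return (left, top, right, bottom)
```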
In step S104, the target region is extracted from the image to obtain a target image.
After obtaining the target region, the image processing apparatus can extract the target region from the image to obtain a target image; since the target region includes the upper body of the portrait, the obtained target image includes the upper body of the portrait. For example, as shown in Fig. 5, the image processing apparatus can receive an image input instruction and select an image to be processed based on it. When the image to be processed is the first image, as in Fig. 5(a), the image processing apparatus can determine the target region as in Fig. 5(b) and extract it from the image to be processed, thereby obtaining the target image that includes the upper body of the portrait. It should be noted that, after obtaining the target image, the image processing apparatus can save or share it, add special effects to it such as shake, mirror, repeat and zoom, or add text or stickers to it.
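The extraction itself is an ordinary crop; a minimal sketch over a row-major nested-list image (in practice a library call such as Pillow's `Image.crop((left, top, right, bottom))` would do the same job):

```python
def crop(image, region):
    """image: list of rows of pixel values; region: (left, top, right,
    bottom) with exclusive right/bottom edges (an assumed convention)."""
    left, top, right, bottom = (int(v) for v in region)
    return [row[left:right] for row in image[top:bottom]]
```

Cropping a 4x4 test image at (1, 1, 3, 3) yields its central 2x2 block.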
In some embodiments, the step of obtaining the position information of the eyes in the portrait may include: when the image to be processed includes multiple portraits, obtaining the position information of the multiple pairs of eyes corresponding to the multiple portraits.
The step of extracting the target region from the image to obtain the target image may include: when there are multiple target regions, extracting each target region separately to obtain multiple target images; or extracting one target region at random to obtain one target image.
Specifically, the image to be processed may include one or more portraits. When it includes multiple portraits, the image processing apparatus can obtain the position information of the multiple pairs of eyes corresponding to the multiple portraits, and can determine, from the position information of each pair of eyes, a target region that includes the upper body of the corresponding portrait, obtaining multiple target regions. At this point, when there are multiple target regions, the image processing apparatus can extract each of them separately to obtain multiple target images; or extract one target region at random to obtain one target image; or extract one or more target regions according to a preset rule, obtaining one or more target images, where the preset rule can be set according to actual needs and is not limited here.
As shown in Fig. 6, when the image to be processed includes 2 portraits, 2 target regions can be determined from the position information of the two pairs of eyes corresponding to the 2 portraits. The image processing apparatus can extract the 2 target regions separately to obtain 2 target images; or extract one target region at random to obtain one target image; or merge the 2 target regions into one target region to obtain one target image, which may include the upper bodies of both portraits.
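A sketch of the multi-portrait case, including the merged-region variant (the union bounding box is my assumption for how "merging" two regions could work; the patent does not define the merge operation):

```python
def regions_for(eye_pairs, region_fn):
    """One target region per pair of eyes."""
    return [region_fn(left, right) for left, right in eye_pairs]

def merge(regions):
    """Union bounding box of several (left, top, right, bottom) regions."""
    lefts, tops, rights, bottoms = zip(*regions)
    return (min(lefts), min(tops), max(rights), max(bottoms))
```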
In some embodiments, the step of obtaining the position information of the eyes in the portrait may include: when the image to be processed includes multiple portraits, obtaining the position information of the multiple pairs of eyes corresponding to the multiple portraits.
The step of extracting the target region from the image to obtain the target image may include: when there are multiple target regions, outputting a prompt message and receiving a selection instruction input by the user based on the prompt message; and extracting one target region according to the selection instruction to obtain one target image, or extracting multiple target regions according to the selection instruction to obtain multiple target images.
Specifically, the image to be processed may contain one or more portraits. When it contains multiple portraits, the image processing apparatus can obtain the position information of the corresponding pairs of eyes and determine, from the position information of each pair, a target area containing that portrait's upper body, yielding multiple target areas. When there are multiple target areas, the image processing apparatus can output prompt information so that the user can respond to it. The prompt may be a dialog box displaying a message such as "Please select the half-body photo to make", or the same message output by voice broadcast, and so on. Based on the prompt, the image processing apparatus receives the user's selection instruction and can then extract one target area according to it, obtaining one target image; or extract multiple target areas according to it, obtaining multiple target images.
As can be seen from the above, the embodiment of the present invention obtains an image to be processed that contains a portrait and obtains the position information of the eyes in the portrait; it then determines, from that position information, a target area containing the portrait's upper body, and extracts the target area from the image to obtain a target image. This scheme obtains a target image containing the portrait's upper body automatically, without the user having to do so manually or with a cropping tool, avoiding the influence of subjective human factors and improving the convenience and reliability of image acquisition.
The method described in the above embodiment is further illustrated below by example.

This embodiment describes the image processing method in detail taking the image processing apparatus to be a terminal and the image to be processed to contain one portrait. It should be understood that this example is chosen only for ease of description and should not be read as limiting the number of portraits the image to be processed may contain: however many portraits the image contains, the processing flow is similar and can be understood from this example.
Referring to Fig. 7, which is a schematic flowchart of the image processing method provided by an embodiment of the present invention, the method flow may include:
S201: the terminal obtains an image to be processed that contains a portrait.

The image to be processed contains a portrait and may also contain other objects; its specific content is not limited here. The terminal may query a local memory in which images are pre-stored and obtain the image from it. Alternatively, the terminal may send an image acquisition request to a server, so that the server, based on the received request, returns an image retrieved from its image database, which the terminal receives as the image to be processed. Alternatively, the terminal may open a preset camera and capture the image with it. It should be understood that the image to be processed can also be obtained in other ways; the specific manner is not limited here.
S202: the terminal establishes a pixel coordinate system in the image and obtains the pixels at the centres of the eyes in the portrait.

After obtaining the image to be processed, the terminal can establish a pixel coordinate system with any position in the image as the origin. For example, a pixel coordinate system in units of pixels can be established with the upper-left corner of the image as the origin; its abscissa is the column index in the image's pixel array and its ordinate is the row index. The terminal also obtains the pixels at the centres of the eyes in the portrait, i.e. the pixel at the centre of the left eye and the pixel at the centre of the right eye.
S203: the terminal determines the coordinates of these pixels in the pixel coordinate system, obtaining the position information of the eyes.

Having established the pixel coordinate system and obtained the centre pixel of the left eye and the centre pixel of the right eye, the terminal determines the coordinate of each in the pixel coordinate system; the coordinate corresponding to the left eye and the coordinate corresponding to the right eye together form the position information of the eyes.
S204: the terminal obtains the distance between the eyes and the position of their midpoint from the position information of the eyes.

After obtaining the position information of the eyes, the terminal can calculate the distance between the left eye and the right eye from their position information according to formula (1) above, and calculate the midpoint position between the left eye and the right eye according to formula (2) above.
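The calculations in step S204 can be sketched as follows, assuming formulas (1) and (2) are the usual Euclidean distance and coordinate midpoint between the two eye positions (their bodies are not reproduced in this text); the coordinate-tuple representation is an illustrative choice:

```python
import math

def eye_distance_and_midpoint(a, b):
    """Given the coordinates of the left eye a = (a1, a2) and the right
    eye b = (b1, b2), return the distance between the eyes (formula (1))
    and the midpoint between them (formula (2))."""
    d = math.hypot(a[0] - b[0], a[1] - b[1])        # formula (1)
    p = ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)      # formula (2)
    return d, p
```

For eyes at `(100, 200)` and `(160, 200)` this gives a distance of 60 pixels and a midpoint of `(130, 200)`.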
S205: the terminal determines a preset area from the distance and the midpoint position, and moves the preset area a preset distance toward the portrait's legs.

After obtaining the distance between the left and right eyes and the midpoint between them, the terminal can determine a preset area from that distance and midpoint position; the shape and size of the preset area can be set flexibly as needed.

For example, the terminal can obtain the coordinate position of a first vertex from the distance between the eyes and their midpoint position, and likewise obtain the coordinate position of a second vertex. The second vertex lies on the same diagonal as the first vertex, i.e. on a diagonal of the preset area, so once the coordinate positions of both vertices are obtained the preset area can be determined from them.

After determining the preset area, the terminal can move it a preset distance toward the portrait's legs, so that the preset area is centred on the face; the preset distance can be set flexibly as needed.
S206: with the centre of the moved preset area as a fixed point, the terminal expands the side length of the moved preset area by a preset multiple, obtaining a target area containing the portrait's upper body.

The preset multiple can be set flexibly as needed. For example, with the centre of the moved preset area as the fixed point, the terminal expands its side length to 3.6 times the original, obtaining a target area that contains the portrait's upper body.
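Steps S205 and S206 together can be sketched as below; the `(left, top, right, bottom)` rectangle form and the concrete `shift` value are assumptions, since the patent leaves the preset distance to be set as needed (downward movement here means increasing y in pixel coordinates):

```python
def move_and_expand(region, shift, scale=3.6):
    """Move the preset area toward the legs by `shift` pixels (S205),
    then, keeping the moved centre fixed, expand the side length by
    `scale` (S206; 3.6 is the patent's example multiple)."""
    left, top, right, bottom = region
    top, bottom = top + shift, bottom + shift          # move toward legs
    cx, cy = (left + right) / 2, (top + bottom) / 2    # fixed centre
    half_w = (right - left) / 2 * scale
    half_h = (bottom - top) / 2 * scale
    return (cx - half_w, cy - half_h, cx + half_w, cy + half_h)
```

Starting from a 20-pixel preset square `(90, 90, 110, 110)` and an illustrative shift of 5, the target area is `(64.0, 69.0, 136.0, 141.0)`, a 72-pixel square centred on the moved centre.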
S207: the terminal extracts the target area from the image, obtaining the target image.

After determining the target area, the terminal can extract it from the image to obtain the target image; since the target area contains the portrait's upper body, so does the resulting target image. For example, as shown in Fig. 8, even when the portrait in the image is turned sideways, a target image containing the portrait's upper body can still be extracted accurately by the method described above.
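The extraction in step S207 amounts to cropping the target area out of the pixel array; a minimal sketch on a row-major grid of pixels, with coordinates clamped so a target area that extends past the image edge is still extracted:

```python
def crop(image, region):
    """Extract the target area from the image (step S207).  `image` is
    a row-major grid of pixels and `region` an assumed
    (left, top, right, bottom) rectangle in pixel coordinates."""
    left, top, right, bottom = region
    h, w = len(image), len(image[0])
    left, top = max(0, int(left)), max(0, int(top))        # clamp to bounds
    right, bottom = min(w, int(right)), min(h, int(bottom))
    return [row[left:right] for row in image[top:bottom]]
```

Cropping a 5x4 test grid with region `(1, 1, 4, 3)` keeps rows 1-2 and columns 1-3 of the original.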
In this embodiment of the present invention, the terminal obtains the pixels at the centres of the eyes of the portrait in the image to be processed and determines their coordinates in a pixel coordinate system, obtaining the position information of the eyes; from that position information it obtains the distance between the eyes and the midpoint between them, determines a preset area from the distance and midpoint position, moves the preset area a preset distance toward the portrait's legs and, with the centre of the moved preset area as a fixed point, expands its side length by a preset multiple to obtain a target area containing the portrait's upper body; it can then extract the target area from the image to obtain the target image. A target image containing the portrait's upper body is thus obtained automatically, without the user having to do so manually or with a cropping tool, avoiding the influence of subjective human factors and improving the convenience and reliability of image acquisition.
To better implement the image processing method provided by the embodiment of the present invention, the embodiment of the present invention also provides an apparatus based on the above image processing method. The terms used have the same meanings as in the image processing method above, and implementation details may refer to the description in the method embodiment.
Referring to Fig. 9, which is a schematic structural diagram of the image processing apparatus provided by an embodiment of the present invention, the apparatus may include an image acquisition unit 301, a position acquisition unit 302, a determination unit 303, an extraction unit 304, and so on.
The image acquisition unit 301 is configured to obtain the image to be processed, the image containing a portrait.

The image to be processed contains a portrait and may also contain other objects; its specific content is not limited here. The portrait may be a complete figure (the whole body) or a partial figure with some limbs occluded, and the image may contain one or more portraits.

The image to be processed may be obtained in several ways. In a first way, the image acquisition unit 301 queries a local memory in which images are pre-stored and obtains the image from it. In a second way, the image acquisition unit 301 sends an image acquisition request to a server, so that the server, based on the received request, returns an image retrieved from its image database, which the image acquisition unit 301 receives as the image to be processed. In a third way, the image acquisition unit 301 opens a preset camera and captures the image with it. It should be understood that the image to be processed can also be obtained in other ways; the specific manner is not limited here.
The position acquisition unit 302 is configured to obtain the position information of the eyes in the portrait.

After the image to be processed is obtained, the position acquisition unit 302 can obtain the position information of the eyes of the portrait in the image, the eyes comprising the left eye and the right eye. The position information of the eyes may include the pixel coordinate positions of the left eye and the right eye, or alternatively their two-dimensional coordinate positions.

When the image contains only one portrait, the position acquisition unit 302 obtains the position information of the eyes in that portrait; when the image contains multiple portraits, the position acquisition unit 302 obtains the position information of the eyes in each portrait separately.
In some embodiments, the position acquisition unit 302 may specifically be configured to: establish a two-dimensional rectangular coordinate system in the image; obtain the pixels at the centres of the eyes in the portrait; and determine the two-dimensional coordinate positions of those pixels in the coordinate system, obtaining the position information of the eyes.

Specifically, the position acquisition unit 302 can establish a two-dimensional rectangular coordinate system with any position in the image as the origin, for example a system xoy with the centre o of the image as the origin; the unit divisions of the abscissa x and ordinate y can be set flexibly as needed. After establishing the system, the position acquisition unit 302 obtains the pixels at the centres of the eyes in the portrait, i.e. the centre pixel of the left eye and the centre pixel of the right eye, and then determines the two-dimensional coordinate position of each in the coordinate system, obtaining the position information of the eyes.
In some embodiments, the position acquisition unit 302 may specifically be configured to: establish a pixel coordinate system in the image; obtain the pixels at the centres of the eyes in the portrait; and determine the coordinates of those pixels in the pixel coordinate system, obtaining the position information of the eyes.

Specifically, the position acquisition unit 302 can establish a pixel coordinate system with any position in the image as the origin. For example, a pixel coordinate system u-v in units of pixels can be established with the upper-left corner of the image as the origin, the abscissa u and ordinate v being respectively the column index and row index in the image's pixel array. As another example, the region of the image containing the portrait's face can be identified and a pixel coordinate system established within that region.

After establishing the pixel coordinate system, the position acquisition unit 302 obtains the pixels at the centres of the eyes in the portrait, i.e. the centre pixel of the left eye and the centre pixel of the right eye, and then determines the coordinate (pixel coordinate position) of each in the pixel coordinate system; the coordinate corresponding to the left eye and the coordinate corresponding to the right eye together form the position information of the eyes.
In some embodiments, the position acquisition unit 302 may specifically be configured to obtain the position information of the eyes in the portrait by means of a face detection algorithm (FaceDetector). After the FaceDetector algorithm has run on the image to be processed, the position information of the eyes in the portrait can be obtained and stored in the form of an array; the array may be two-dimensional and may store the number of eye pairs, their position information, and so on.

It should be noted that the position information of the eyes can also be obtained in other ways; the specific manner is not limited here.
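The text says only that the detector's output is a two-dimensional array holding the number of eye pairs and their positions; the exact layout is not specified, so the sketch below assumes, purely for illustration, one row per portrait of the form `[lx, ly, rx, ry]` and converts it into `(left_eye, right_eye)` coordinate pairs for the later steps:

```python
def eye_pairs(detections):
    """Convert an assumed detector output -- one row [lx, ly, rx, ry]
    per portrait -- into a list of ((lx, ly), (rx, ry)) eye pairs.
    The row layout is hypothetical; the patent does not fix it."""
    return [((row[0], row[1]), (row[2], row[3])) for row in detections]
```

Two detected portraits thus yield two eye pairs, one per target area to be determined.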
The determination unit 303 is configured to determine, from the position information of the eyes, a target area containing the portrait's upper body.

After the position information of the eyes is obtained, the determination unit 303 can determine the target area from it. The target area contains the portrait's upper body, which may be the region from the head to the chest, though the upper body can also be defined flexibly as needed; the specific content is not limited here. The shape and size of the target area can likewise be set flexibly as needed: the target area may be, for example, a square, rectangle, heart, circle, diamond, ellipse, or hexagon. When the position information of only one pair of eyes is obtained, the determination unit 303 can determine a target area containing the upper body of the portrait corresponding to that pair of eyes; when the position information of multiple pairs of eyes is obtained, the determination unit 303 can determine a target area containing the upper body of the portrait corresponding to each pair, yielding multiple target areas.
In some embodiments, as shown in Figure 10, the determination unit 303 may include an obtaining subunit 3031, a determination subunit 3032, and so on, which may specifically be as follows:

The obtaining subunit 3031 is configured to obtain, from the position information of the eyes, the distance between the eyes and the position of their midpoint.

The determination subunit 3032 is configured to determine, from the distance and the midpoint position, a target area containing the portrait's upper body.
Specifically, the obtaining subunit 3031 can calculate the distance between the left eye and the right eye from their position information, and likewise calculate the position of the midpoint between them.

For example, suppose the position information of the left eye is A(a1, a2) and that of the right eye is B(b1, b2). The distance d between the left eye and the right eye can be calculated by formula (1):

d = √((a1 − b1)² + (a2 − b2)²)    (1)

The midpoint position p between the left eye and the right eye can be calculated by formula (2):

p = ((a1 + b1)/2, (a2 + b2)/2)    (2)

After obtaining the distance d between the eyes and the midpoint position p between them, the determination subunit 3032 can determine, from d and p, a target area containing the portrait's upper body.
In some embodiments, as shown in Figure 11, the determination subunit 3032 may include a determining module 30321, a moving module 30322, an expansion module 30323, and so on, which may specifically be as follows:

The determining module 30321 is configured to determine a preset area from the distance and the midpoint position.

The moving module 30322 is configured to move the preset area a preset distance toward the portrait's legs.

The expansion module 30323 is configured to expand, with the centre of the moved preset area as a fixed point, the side length of the moved preset area by a preset multiple, obtaining a target area containing the portrait's upper body.

Specifically, the determining module 30321 can determine a preset area from the obtained distance between the left and right eyes and the midpoint between them; the shape and size of the preset area can be set flexibly as needed, and the preset area may be, for example, a square, rectangle, heart, circle, diamond, ellipse, or hexagon.
The determining module 30321 may determine the preset area in several ways. In a first way, two vertices are determined from the distance between the left and right eyes and the midpoint between them, and a quadrilateral, circle, or similar region is then drawn from the two vertices, giving the preset area. In a second way, a circle is drawn with the midpoint between the left and right eyes as its centre and the distance between them as its diameter, and a quadrilateral area is then drawn with that circle as its inscribed circle, giving the preset area. It should be noted that the preset area can also be determined in other ways; the specific manner is not limited here.
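For the second way, if the quadrilateral is taken to be an axis-aligned square (one possible reading; the patent allows other quadrilaterals), the square circumscribing a circle of diameter d centred at the midpoint p has side d and centre p:

```python
def preset_square_mode_two(d, p):
    """Second way of determining the preset area: a circle of diameter
    d (the eye distance) centred at the eye midpoint p, inscribed in an
    axis-aligned square.  The square has side d and centre p; returned
    as an assumed (left, top, right, bottom) rectangle."""
    half = d / 2
    return (p[0] - half, p[1] - half, p[0] + half, p[1] + half)
```

With an eye distance of 10 and midpoint (50, 50), the preset square is `(45, 45, 55, 55)`.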
After the preset area is determined, the moving module 30322 can move it a preset distance toward the portrait's legs, so that the preset area is centred on the face; the preset distance can be set flexibly as needed. For example, as shown in Figure 4, the preset area can be moved toward the legs by 10% of the total length of the portrait.

Then, with the centre of the moved preset area as a fixed point, the expansion module 30323 expands the side length of the moved preset area by a preset multiple; for example, the midpoint (CX, CY) of the preset area can be calculated and the side length expanded about it, obtaining a target area containing the portrait's upper body whose midpoint is (CX, CY). The preset multiple can be set flexibly as needed: for example, as shown in Figure 4, the ratio of the upper body to the face in Leonardo's ideal human proportions is roughly 3.6:1, so the side length of the preset area can be expanded about the midpoint (CX, CY) to 3.6 times the original, giving the target area.
In some embodiments, the determining module 30321 may include a first acquisition submodule, a second acquisition submodule, a determination submodule, and so on, which may specifically be as follows:

The first acquisition submodule is configured to obtain the coordinate position of a first vertex from the distance and the midpoint position. The second acquisition submodule is configured to obtain the coordinate position of a second vertex from the distance and the midpoint position. The determination submodule is configured to determine the preset area from the coordinate positions of the first and second vertices, the second vertex lying on the same diagonal of the preset area as the first vertex.
Specifically, the first acquisition submodule can obtain the coordinate position L(left, top) of the first vertex from the distance between the left and right eyes and the midpoint between them.

Optionally, the first acquisition submodule may specifically be configured to: obtain the difference between the X coordinate of the midpoint position and the distance, giving the X coordinate of the first vertex; obtain the difference between the Y coordinate of the midpoint position and the distance, giving the Y coordinate of the first vertex; and determine the coordinate position of the first vertex from its X and Y coordinates.

For example, as shown in Figure 3, the X coordinate of the first vertex is left = (X coordinate of the midpoint between the eyes) − (distance between the eyes), and its Y coordinate is top = (Y coordinate of the midpoint between the eyes) − (distance between the eyes), giving the coordinate position L(left, top) of the first vertex.
Similarly, the second acquisition submodule can obtain the coordinate position R(right, bottom) of the second vertex from the distance between the left and right eyes and the midpoint between them.

Optionally, the second acquisition submodule may specifically be configured to: obtain the sum of the X coordinate of the midpoint position and the distance, giving the X coordinate of the second vertex; obtain the sum of the Y coordinate of the midpoint position and the distance, giving the Y coordinate of the second vertex; and determine the coordinate position of the second vertex from its X and Y coordinates.

For example, as shown in Figure 3, the X coordinate of the second vertex is right = (X coordinate of the midpoint between the eyes) + (distance between the eyes), and its Y coordinate is bottom = (Y coordinate of the midpoint between the eyes) + (distance between the eyes), giving the coordinate position R(right, bottom) of the second vertex.
Once the coordinate positions of the first and second vertices are obtained, the second vertex lies on the same diagonal as the first vertex, i.e. on a diagonal of the preset area, so the determination submodule can determine the preset area from the two coordinate positions; for example, as shown in Figure 3, a quadrilateral is drawn with the first and second vertices.
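The Fig. 3 vertex computation described above can be sketched directly: the first vertex L(left, top) subtracts the eye distance d from each coordinate of the midpoint p, and the second vertex R(right, bottom) adds it, giving the two diagonal corners of the preset area:

```python
def preset_vertices(d, p):
    """Compute the two diagonal vertices of the preset area as in the
    Fig. 3 example, from the eye distance d and eye midpoint p."""
    left, top = p[0] - d, p[1] - d       # first vertex L(left, top)
    right, bottom = p[0] + d, p[1] + d   # second vertex R(right, bottom)
    return (left, top), (right, bottom)
```

For d = 10 and p = (100, 200), the vertices are L(90, 190) and R(110, 210), a 20-pixel square around the eye midpoint.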
The extraction unit 304 is configured to extract the target area from the image, obtaining the target image.

After the target area is determined, the extraction unit 304 can extract it from the image to obtain the target image; since the target area contains the portrait's upper body, so does the resulting target image. For example, as shown in Figure 5, the image acquisition unit 301 can receive an image input instruction and select an image to be processed based on it; when the image to be processed is the first image as in Fig. 5(a), the determination unit 303 can determine the target area as in Fig. 5(b), and the extraction unit 304 extracts the target area from the image to be processed, obtaining a target image containing the portrait's upper body. It should be noted that after the target image is obtained, the image processing apparatus can save or share it, add special effects to it such as shake, mirroring, repeat (ghosting), and zoom, or add text, stickers, and the like to it.
In some embodiments, the position acquisition unit 302 may specifically be configured to: when the image to be processed contains multiple portraits, obtain the position information of the multiple pairs of eyes corresponding to those portraits.

The extraction unit 304 may specifically be configured to: when there are multiple target areas, extract each target area separately, obtaining multiple target images; or extract one target area at random, obtaining one target image.

Specifically, the image to be processed may contain one or more portraits. When it contains multiple portraits, the position acquisition unit 302 can obtain the position information of the corresponding pairs of eyes, and the determination unit 303 can determine, from the position information of each pair of eyes, a target area containing that portrait's upper body, yielding multiple target areas. When there are multiple target areas, the extraction unit 304 can extract each one separately, obtaining multiple target images; or extract one target area at random, obtaining one target image; or extract target areas according to a preset rule, obtaining one or more target images, the preset rule being configurable as needed and its specific content not limited here.

As shown in Fig. 6, when the image to be processed contains two portraits, two target areas can be determined from the position information of the two corresponding pairs of eyes. The image processing apparatus can extract the two target areas separately, obtaining two target images; or extract one target area at random, obtaining one target image; or merge the two target areas into a single target area, obtaining one target image that contains the upper bodies of both portraits.
In some embodiments, the position acquisition unit 302 may specifically be configured to: when the image to be processed contains multiple portraits, obtain the position information of the multiple pairs of eyes corresponding to those portraits.

The extraction unit 304 may specifically be configured to: when there are multiple target areas, output prompt information and receive a selection instruction input by the user in response to the prompt; then extract one target area according to the selection instruction, obtaining one target image, or extract multiple target areas according to the selection instruction, obtaining multiple target images.

Specifically, the image to be processed may contain one or more portraits. When it contains multiple portraits, the position acquisition unit 302 can obtain the position information of the corresponding pairs of eyes, and the determination unit 303 can determine, from the position information of each pair of eyes, a target area containing that portrait's upper body, yielding multiple target areas. When there are multiple target areas, the extraction unit 304 can output prompt information so that the user can respond to it. The prompt may be a dialog box displaying a message such as "Please select the half-body photo to make", or the same message output by voice broadcast, and so on. Based on the prompt, the extraction unit 304 receives the user's selection instruction and can then extract one target area according to it, obtaining one target image; or extract multiple target areas according to it, obtaining multiple target images.
As can be seen from the above, in this embodiment of the present invention the image acquisition unit 301 obtains an image to be processed that contains a portrait, and the position acquisition unit 302 obtains the position information of the eyes in the portrait; the determination unit 303 then determines, from that position information, a target area containing the portrait's upper body, and the extraction unit 304 extracts the target area from the image, obtaining the target image. This scheme obtains a target image containing the portrait's upper body automatically, without the user having to do so manually or with a cropping tool, avoiding the influence of subjective human factors and improving the convenience and reliability of image acquisition.
Correspondingly, an embodiment of the present invention further provides a terminal. As shown in FIG. 12, the terminal may include a radio frequency (RF) circuit 601, a memory 602 including one or more computer-readable storage media, an input unit 603, a display unit 604, a sensor 605, an audio circuit 606, a Wireless Fidelity (WiFi) module 607, a processor 608 including one or more processing cores, a power supply 609, and other components. Those skilled in the art will appreciate that the terminal structure shown in FIG. 12 does not limit the terminal, which may include more or fewer components than illustrated, combine certain components, or arrange the components differently. In particular:
The RF circuit 601 may be used to send and receive signals during message transmission or a call. In particular, after receiving downlink information from a base station, it passes the information to the one or more processors 608 for processing; it also sends uplink data to the base station. In general, the RF circuit 601 includes, but is not limited to, an antenna, at least one amplifier, a tuner, one or more oscillators, a subscriber identity module (SIM) card, a transceiver, a coupler, a low-noise amplifier (LNA), a duplexer, and the like. In addition, the RF circuit 601 may also communicate with networks and other devices by wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to the Global System for Mobile Communications (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), e-mail, Short Messaging Service (SMS), and the like.
The memory 602 may be used to store software programs and modules; the processor 608 executes various functional applications and performs data processing by running the software programs and modules stored in the memory 602. The memory 602 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system, an application program required by at least one function (such as a sound-playing function or an image-playing function), and the like, and the data storage area may store data created according to the use of the terminal (such as audio data, a phone book, and the like). In addition, the memory 602 may include a high-speed random access memory, and may also include a non-volatile memory such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage component. Correspondingly, the memory 602 may also include a memory controller to provide the processor 608 and the input unit 603 with access to the memory 602.
The input unit 603 may be used to receive input numeric or character information and to generate keyboard, mouse, joystick, optical, or trackball signal inputs related to user settings and function control. Specifically, in one embodiment, the input unit 603 may include a touch-sensitive surface and other input devices. The touch-sensitive surface, also called a touch display screen or touch pad, collects touch operations by the user on or near it (such as operations performed on or near the touch-sensitive surface with a finger, a stylus, or any other suitable object or accessory), and drives the corresponding connection apparatus according to a preset program. Optionally, the touch-sensitive surface may include two parts: a touch detection apparatus and a touch controller. The touch detection apparatus detects the user's touch position, detects the signal produced by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection apparatus, converts it into contact coordinates, sends them to the processor 608, and can receive and execute commands sent by the processor 608. The touch-sensitive surface may be implemented in a variety of types, such as resistive, capacitive, infrared, and surface acoustic wave. Besides the touch-sensitive surface, the input unit 603 may also include other input devices. Specifically, the other input devices may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys and a power switch key), a trackball, a mouse, a joystick, and the like.
The display unit 604 may be used to display information input by the user or provided to the user, as well as the various graphical user interfaces of the terminal; these graphical user interfaces may be composed of graphics, text, icons, video, and any combination thereof. The display unit 604 may include a display panel, which may optionally be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, or the like. Further, the touch-sensitive surface may cover the display panel; when the touch-sensitive surface detects a touch operation on or near it, it transmits the operation to the processor 608 to determine the type of the touch event, and the processor 608 then provides a corresponding visual output on the display panel according to the type of the touch event. Although in FIG. 12 the touch-sensitive surface and the display panel realize the input and output functions as two independent components, in some embodiments the touch-sensitive surface and the display panel may be integrated to realize the input and output functions.
The terminal may also include at least one sensor 605, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor may include an ambient light sensor and a proximity sensor: the ambient light sensor may adjust the brightness of the display panel according to the ambient light level, and the proximity sensor may turn off the display panel and/or the backlight when the terminal is moved close to the ear. As one kind of motion sensor, a gravity acceleration sensor can detect the magnitude of acceleration in each direction (generally three axes), can detect the magnitude and direction of gravity when stationary, and can be used in applications that recognize the attitude of the mobile phone (such as landscape/portrait switching, related games, and magnetometer attitude calibration) and in vibration-recognition functions (such as a pedometer or tapping). The terminal may also be configured with other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, which will not be described in detail here.
The audio circuit 606, a loudspeaker, and a microphone may provide an audio interface between the user and the terminal. The audio circuit 606 may convert received audio data into an electrical signal and transmit it to the loudspeaker, which converts it into a sound signal for output; conversely, the microphone converts a collected sound signal into an electrical signal, which the audio circuit 606 receives and converts into audio data. After the audio data is processed by the processor 608, it is sent through the RF circuit 601 to, for example, another terminal, or output to the memory 602 for further processing. The audio circuit 606 may also include an earphone jack to provide communication between a peripheral earphone and the terminal.
WiFi is a short-range wireless transmission technology. Through the WiFi module 607, the terminal can help the user send and receive e-mail, browse web pages, access streaming media, and the like; it provides the user with wireless broadband Internet access. Although FIG. 12 shows the WiFi module 607, it can be understood that it is not an essential component of the terminal and may be omitted as needed without changing the essence of the invention.
The processor 608 is the control center of the terminal. It connects the various parts of the whole mobile phone through various interfaces and lines, and executes the various functions of the terminal and processes data by running or executing the software programs and/or modules stored in the memory 602 and invoking the data stored in the memory 602, thereby monitoring the mobile phone as a whole. Optionally, the processor 608 may include one or more processing cores; preferably, the processor 608 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, the user interface, application programs, and the like, and the modem processor mainly handles wireless communication. It can be understood that the modem processor may also not be integrated into the processor 608.
The terminal also includes a power supply 609 (such as a battery) that supplies power to the various components. Preferably, the power supply may be logically connected to the processor 608 through a power management system, so that functions such as charging management, discharging management, and power consumption management are realized through the power management system. The power supply 609 may also include one or more DC or AC power sources, a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator, and any other such components.
Although not shown, the terminal may also include a camera, a Bluetooth module, and the like, which will not be described in detail here. Specifically, in this embodiment, the processor 608 in the terminal loads the executable files corresponding to the processes of one or more application programs into the memory 602 according to the following instructions, and the processor 608 runs the application programs stored in the memory 602 to realize the following functions:

Obtaining an image to be processed, the image containing a portrait; obtaining the position information of the eyes in the portrait; determining, according to the position information of the eyes, a target area containing the upper body of the portrait; and extracting the target area from the image to obtain a target image.
Optionally, the step of determining, according to the position information of the eyes, a target area containing the upper body of the portrait may include: obtaining, according to the position information of the eyes, the distance between the eyes and the position information of the midpoint between the eyes; and determining, according to the distance and the midpoint position information, a target area containing the upper body of the portrait.
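As a brief illustration, the inter-eye distance and midpoint described above can be computed directly from two eye-centre pixel coordinates; the function and parameter names below are illustrative, not taken from the patent:

```python
from math import hypot

def eye_metrics(left_eye, right_eye):
    """Return (distance, midpoint) for two eye-centre coordinates.

    left_eye and right_eye are (x, y) tuples in the image's pixel
    coordinate system.
    """
    (x1, y1), (x2, y2) = left_eye, right_eye
    distance = hypot(x2 - x1, y2 - y1)          # Euclidean inter-eye distance
    midpoint = ((x1 + x2) / 2.0, (y1 + y2) / 2.0)  # midpoint between the eyes
    return distance, midpoint
```

For example, eyes at (100, 200) and (160, 200) give a distance of 60 pixels and a midpoint of (130, 200).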
Optionally, the step of determining, according to the distance and the midpoint position information, a target area containing the upper body of the portrait may include: determining a preset area according to the distance and the midpoint position information; moving the preset area a preset distance toward the legs of the portrait; and, with the center of the moved preset area as a fixed point, expanding the side length of the moved preset area to a preset multiple of its original value to obtain the target area containing the upper body of the portrait.
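The geometry of this step can be sketched as follows, using the diagonal-vertex construction the specification describes (midpoint minus distance for the first vertex, midpoint plus distance for the second). The preset shift distance and expansion multiple are assumed defaults, since the specification leaves the concrete preset values open:

```python
def upper_body_region(distance, midpoint, shift=None, scale=3.0):
    """Sketch of the patent's region construction; preset values are
    illustrative assumptions.

    1. Preset area: vertices at (mx - d, my - d) and (mx + d, my + d).
    2. Shift the area toward the legs (+y in pixel coordinates).
    3. Expand the side length about the shifted centre by a multiple.
    Returns (x1, y1, x2, y2) of the final target region.
    """
    mx, my = midpoint
    d = distance
    if shift is None:
        shift = d  # assumed default: shift by one inter-eye distance
    # Step 1: preset area, a 2d x 2d square centred on the midpoint.
    x1, y1, x2, y2 = mx - d, my - d, mx + d, my + d
    # Step 2: move toward the legs (downwards in image coordinates).
    y1, y2 = y1 + shift, y2 + shift
    # Step 3: expand about the centre of the moved area.
    cx, cy = (x1 + x2) / 2.0, (y1 + y2) / 2.0
    half = (x2 - x1) * scale / 2.0
    return (cx - half, cy - half, cx + half, cy + half)
```

With distance 50, midpoint (200, 300), a shift of 25, and a doubling of the side length, the resulting region is (100, 225, 300, 425).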
Optionally, the step of obtaining the position information of the eyes in the portrait may include: establishing a pixel coordinate system in the image; obtaining the pixels at the centers of the eyes in the portrait; and determining the coordinates of those pixels in the pixel coordinate system to obtain the position information of the eyes.
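In practice, the eye-centre pixels would come from a face or eye detector; converting a detector's bounding box to the centre pixel in the image's pixel coordinate system can be sketched as below. The (x, y, w, h) box format and the top-left-origin convention (x increasing rightwards, y downwards) are common detector conventions, not fixed by the patent:

```python
def eye_centre(box):
    """Return the centre pixel (x, y) of an eye bounding box.

    box is (x, y, w, h): top-left corner plus width and height, in the
    image's pixel coordinate system with origin at the top-left corner.
    """
    x, y, w, h = box
    return (x + w // 2, y + h // 2)
```

For instance, a detected box (100, 80, 40, 30) yields the eye-centre coordinate (120, 95).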
Optionally, the step of obtaining the position information of the eyes in the portrait may include: when the image to be processed contains multiple portraits, obtaining the position information of the multiple pairs of eyes corresponding to the multiple portraits; and the step of extracting the target area from the image to obtain a target image may include: when there are multiple target areas, extracting each target area separately to obtain multiple target images, or extracting one target area at random to obtain one target image.
Optionally, the step of obtaining the position information of the eyes in the portrait may include: when the image to be processed contains multiple portraits, obtaining the position information of the multiple pairs of eyes corresponding to the multiple portraits; and the step of extracting the target area from the image to obtain a target image may include: when there are multiple target areas, outputting a prompt message and receiving, based on the prompt message, a selection instruction input by the user; and extracting one of the target areas according to the selection instruction to obtain one target image, or extracting multiple target areas according to the selection instruction to obtain multiple target images.
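A minimal sketch of the selection-driven extraction: given one target region per portrait and the indices the user selected in response to the prompt, crop the corresponding sub-images. The list-of-lists image representation and the function name are illustrative stand-ins for a real image array:

```python
def extract_targets(image, regions, selected):
    """Crop the user-selected target regions from the image.

    image    : 2-D list of pixel rows (a minimal stand-in for an array).
    regions  : list of (x1, y1, x2, y2) target regions, one per portrait.
    selected : indices chosen by the user in response to the prompt.
    Returns one cropped image per selected region.
    """
    crops = []
    for i in selected:
        x1, y1, x2, y2 = regions[i]
        # Slice the selected rows, then the selected columns of each row.
        crops.append([row[x1:x2] for row in image[y1:y2]])
    return crops
```

Selecting a single index yields one target image; selecting several indices yields one cropped image per selection.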
As can be seen from the above, this embodiment of the present invention can obtain an image to be processed that contains a portrait and obtain the position information of the eyes in the portrait; it can then determine, according to the position information of the eyes, a target area of the image that contains the upper body of the portrait, so that the target area can be extracted from the image to obtain a target image. This scheme automatically obtains a target image containing the upper body of a portrait, without requiring the user to obtain it manually or with a cropping tool; it thereby avoids the influence of subjective human factors and improves the convenience and reliability of image acquisition.
In the above embodiments, the description of each embodiment has its own emphasis. For any part not described in detail in a particular embodiment, reference may be made to the detailed description of the image processing method above, which will not be repeated here.
Those of ordinary skill in the art will understand that all or part of the steps in the various methods of the above embodiments can be completed by instructions, or by related hardware controlled by instructions, and that the instructions can be stored in a computer-readable storage medium and loaded and executed by a processor.
To this end, an embodiment of the present invention provides a storage medium storing a plurality of instructions; the instructions can be loaded by a processor to execute the steps of any image processing method provided by the embodiments of the present invention. For example, the instructions may execute the following steps:
Obtaining an image to be processed, the image containing a portrait; obtaining the position information of the eyes in the portrait; determining, according to the position information of the eyes, a target area containing the upper body of the portrait; and extracting the target area from the image to obtain a target image.
Optionally, the step of determining, according to the position information of the eyes, a target area containing the upper body of the portrait may include: obtaining, according to the position information of the eyes, the distance between the eyes and the position information of the midpoint between the eyes; and determining, according to the distance and the midpoint position information, a target area containing the upper body of the portrait.
Optionally, the step of determining, according to the distance and the midpoint position information, a target area containing the upper body of the portrait may include: determining a preset area according to the distance and the midpoint position information; moving the preset area a preset distance toward the legs of the portrait; and, with the center of the moved preset area as a fixed point, expanding the side length of the moved preset area to a preset multiple of its original value to obtain the target area containing the upper body of the portrait.
For the specific implementation of each of the above operations, reference may be made to the foregoing embodiments, which will not be repeated here.
The storage medium may include a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, and the like.
Because the instructions stored in the storage medium can execute the steps of any image processing method provided by the embodiments of the present invention, they can realize the beneficial effects achievable by any such image processing method; for details, refer to the foregoing embodiments, which will not be repeated here.
An image processing method, apparatus, and storage medium provided by the embodiments of the present invention have been introduced in detail above. Specific examples have been used herein to explain the principles and implementation of the present invention, and the description of the above embodiments is only intended to help understand the method of the present invention and its core idea. Meanwhile, those skilled in the art may make changes to the specific implementation and scope of application according to the idea of the present invention. In summary, the content of this specification should not be construed as limiting the present invention.
Claims (14)
1. An image processing method, characterized by comprising:
obtaining an image to be processed, said image containing a portrait;
obtaining position information of the eyes in the portrait;
determining, according to the position information of the eyes, a target area containing the upper body of the portrait; and
extracting the target area from said image to obtain a target image.
2. The image processing method according to claim 1, characterized in that the step of determining, according to the position information of the eyes, a target area containing the upper body of the portrait comprises:
obtaining, according to the position information of the eyes, the distance between the eyes and position information of the midpoint between the eyes; and
determining, according to said distance and said midpoint position information, a target area containing the upper body of the portrait.
3. The image processing method according to claim 2, characterized in that the step of determining, according to said distance and said midpoint position information, a target area containing the upper body of the portrait comprises:
determining a preset area according to said distance and said midpoint position information;
moving the preset area a preset distance toward the legs of the portrait; and
with the center of the moved preset area as a fixed point, expanding the side length of the moved preset area to a preset multiple of its original value to obtain the target area containing the upper body of the portrait.
4. The image processing method according to claim 3, characterized in that the step of determining a preset area according to said distance and said midpoint position information comprises:
obtaining the coordinate position of a first vertex according to said distance and said midpoint position information;
obtaining the coordinate position of a second vertex according to said distance and said midpoint position information; and
determining the preset area according to the coordinate positions of the first vertex and the second vertex, the second vertex and the first vertex lying on the same diagonal of the preset area.
5. The image processing method according to claim 4, characterized in that the step of obtaining the coordinate position of the first vertex according to said distance and said midpoint position information comprises:
obtaining the difference between the X coordinate value in said midpoint position information and said distance to obtain the X coordinate value of the first vertex;
obtaining the difference between the Y coordinate value in said midpoint position information and said distance to obtain the Y coordinate value of the first vertex; and
determining the coordinate position of the first vertex according to the X coordinate value and the Y coordinate value of the first vertex.
6. The image processing method according to claim 4, characterized in that the step of obtaining the coordinate position of the second vertex according to said distance and said midpoint position information comprises:
obtaining the sum of the X coordinate value in said midpoint position information and said distance to obtain the X coordinate value of the second vertex;
obtaining the sum of the Y coordinate value in said midpoint position information and said distance to obtain the Y coordinate value of the second vertex; and
determining the coordinate position of the second vertex according to the X coordinate value and the Y coordinate value of the second vertex.
7. The image processing method according to claim 1, characterized in that the step of obtaining the position information of the eyes in the portrait comprises:
establishing a pixel coordinate system in said image;
obtaining the pixels at the centers of the eyes in the portrait; and
determining the coordinates of said pixels in the pixel coordinate system to obtain the position information of the eyes.
8. The image processing method according to any one of claims 1 to 7, characterized in that the step of obtaining the position information of the eyes in the portrait comprises:
when the image to be processed contains multiple portraits, obtaining the position information of the multiple pairs of eyes corresponding to the multiple portraits;
and the step of extracting the target area from said image to obtain a target image comprises:
when there are multiple target areas, extracting each target area separately to obtain multiple target images, or extracting one target area at random to obtain one target image.
9. The image processing method according to any one of claims 1 to 7, characterized in that the step of obtaining the position information of the eyes in the portrait comprises:
when the image to be processed contains multiple portraits, obtaining the position information of the multiple pairs of eyes corresponding to the multiple portraits;
and the step of extracting the target area from said image to obtain a target image comprises:
when there are multiple target areas, outputting a prompt message and receiving, based on the prompt message, a selection instruction input by the user; and
extracting one of the target areas according to the selection instruction to obtain one target image, or extracting multiple target areas according to the selection instruction to obtain multiple target images.
10. An image processing apparatus, characterized by comprising:
an image acquisition unit for obtaining an image to be processed, said image containing a portrait;
a position acquisition unit for obtaining position information of the eyes in the portrait;
a determination unit for determining, according to the position information of the eyes, a target area containing the upper body of the portrait; and
an extraction unit for extracting the target area from said image to obtain a target image.
11. The image processing apparatus according to claim 10, characterized in that the determination unit comprises:
an acquisition subunit for obtaining, according to the position information of the eyes, the distance between the eyes and position information of the midpoint between the eyes; and
a determination subunit for determining, according to said distance and said midpoint position information, a target area containing the upper body of the portrait.
12. The image processing apparatus according to claim 11, characterized in that the determination subunit comprises:
a determination module for determining a preset area according to said distance and said midpoint position information;
a moving module for moving the preset area a preset distance toward the legs of the portrait; and
an expansion module for, with the center of the moved preset area as a fixed point, expanding the side length of the moved preset area to a preset multiple of its original value to obtain the target area containing the upper body of the portrait.
13. The image processing apparatus according to claim 12, characterized in that the determination module comprises:
a first acquisition submodule for obtaining the coordinate position of a first vertex according to said distance and said midpoint position information;
a second acquisition submodule for obtaining the coordinate position of a second vertex according to said distance and said midpoint position information; and
a determination submodule for determining the preset area according to the coordinate positions of the first vertex and the second vertex, the second vertex and the first vertex lying on the same diagonal of the preset area.
14. A storage medium, characterized in that the storage medium stores a plurality of instructions, and the instructions are adapted to be loaded by a processor to execute the steps of the image processing method according to any one of claims 1 to 9.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810275707.1A CN108376255B (en) | 2018-03-30 | 2018-03-30 | Image processing method, device and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108376255A true CN108376255A (en) | 2018-08-07 |
CN108376255B CN108376255B (en) | 2023-06-30 |
Family
ID=63019330
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810275707.1A Active CN108376255B (en) | 2018-03-30 | 2018-03-30 | Image processing method, device and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108376255B (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2020177583A1 (en) * | 2019-03-01 | 2020-09-10 | Huawei Technologies Co., Ltd. | Image cropping method and electronic device |
CN112966578A (en) * | 2021-02-23 | 2021-06-15 | Guangdong OPPO Mobile Telecommunications Corp., Ltd. | Image processing method, image processing device, electronic equipment and storage medium |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20090059029A1 (en) * | 2007-08-30 | 2009-03-05 | Seiko Epson Corporation | Image Processing Device, Image Processing Program, Image Processing System, and Image Processing Method |
CN102609684A (en) * | 2012-01-16 | 2012-07-25 | Ningbo Konfoong Bioinformation Tech Co., Ltd. | Human body posture detection method and device |
Non-Patent Citations (1)
Title |
---|
Sheng Xian, "Research on an automatic video foreground extraction algorithm based on human-eye detection", China Excellent Master's Theses Electronic Journals Database * |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN104135609B (en) | Auxiliary photo-taking method, apparatus and terminal | |
CN107589963B (en) | A kind of image processing method, mobile terminal and computer readable storage medium | |
CN104519485B (en) | Communication means, device and system between a kind of terminal | |
CN108255304A (en) | Video data handling procedure, device and storage medium based on augmented reality | |
CN106296617B (en) | The processing method and processing device of facial image | |
US20170147187A1 (en) | To-be-shared interface processing method, and terminal | |
CN107590463A (en) | Face identification method and Related product | |
CN106547844B (en) | A kind for the treatment of method and apparatus of user interface | |
CN109445894A (en) | A kind of screenshot method and electronic equipment | |
US9940448B2 (en) | Unlock processing method and device | |
CN108037871A (en) | Screenshotss method and mobile terminal | |
CN107977132A (en) | A kind of method for information display and mobile terminal | |
EP2890105A1 (en) | Method, apparatus and terminal for generating thumbnail of image | |
CN104123276B (en) | The hold-up interception method of pop-up, device and system in a kind of browser | |
CN108073343A (en) | A kind of display interface method of adjustment and mobile terminal | |
CN108573189B (en) | Method and device for acquiring queuing information | |
CN106959761A (en) | A kind of terminal photographic method, device and terminal | |
CN107864336B (en) | A kind of image processing method, mobile terminal | |
CN108459788B (en) | Picture display method and terminal | |
CN106203254A (en) | A kind of adjustment is taken pictures the method and device in direction | |
CN109857297A (en) | Information processing method and terminal device | |
CN104820546B (en) | Function information methods of exhibiting and device | |
CN108898552A (en) | Picture joining method, double screen terminal and computer readable storage medium | |
CN108197934A (en) | A kind of method of payment and terminal device | |
CN112488914A (en) | Image splicing method, device, terminal and computer readable storage medium |
Legal Events

Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |