CN107945126A - Method, device and medium for removing a spectacle frame from an image - Google Patents
Method, device and medium for removing a spectacle frame from an image
- Publication number: CN107945126A (application CN201711158849A)
- Authority
- CN
- China
- Prior art keywords: spectacle frame, edge, pixel, frame region
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G06T5/77
- G06V10/25—Determination of region of interest [ROI] or a volume of interest [VOI]
- G06V10/267—Segmentation of patterns in the image field by performing operations on regions, e.g. growing, shrinking or watersheds
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; connectivity analysis, e.g. of connected components
- G06V40/18—Eye characteristics, e.g. of the iris
Abstract
The invention discloses a method, device and medium for removing a spectacle frame from an image, so as to eliminate the spectacle frame contained in the image, reduce the difficulty of spectacle-frame removal, and improve the accuracy of face recognition. The method includes: detecting the eye region in the image using a preset eye detection algorithm; enlarging the detected eye region to obtain a glasses region; identifying the spectacle-frame region within the glasses region; and filling the identified spectacle-frame region with the mean value of the pixels contained in the region surrounding the spectacle-frame region.
Description
Technical field
The present invention relates to the technical field of image processing, and in particular to a method, device and medium for removing a spectacle frame from an image.
Background technology
This section is intended to provide background or context for the embodiments of the invention set forth in the claims. The description here is not admitted to be prior art merely by virtue of its inclusion in this section.
With the development of computer technology and advances in face recognition algorithms, face recognition technology has been applied in a number of settings, such as public security systems and identity verification at railway-station entrances. Its application not only reduces manual workload but also greatly improves efficiency. However, the quality of face recognition is affected by many factors, among which glasses, and in particular glasses with wide frames, can sharply degrade recognition performance. How to eliminate the spectacle frame from a captured image has therefore become one of the technical problems urgently awaiting a solution in the prior art.
At present, the existing method of eliminating spectacle frames is based on PCA (Principal Component Analysis). This kind of algorithm first has to collect images of the same person with and without glasses, then crop and align the images, and finally obtain the principal components of the images with PCA. A newly input face image with glasses is then reconstructed from the principal components to obtain a glasses-free image. However, this kind of method imposes strict alignment requirements on the images, which are difficult to meet in practical applications.
Summary of the invention
Embodiments of the present invention provide a method, device and medium for removing a spectacle frame from an image, so as to eliminate the spectacle frame contained in an image, reduce the difficulty of spectacle-frame removal, and improve the accuracy of face recognition.
In a first aspect, a method for removing a spectacle frame from an image is provided, including:
detecting the eye region in the image using a preset eye detection algorithm;
enlarging the detected eye region to obtain a glasses region;
identifying the spectacle-frame region within the glasses region;
filling the identified spectacle-frame region with the mean value of the pixels contained in the region surrounding the spectacle-frame region.
Optionally, identifying the spectacle-frame region within the glasses region specifically includes:
detecting the edges of the glasses region using the Canny edge detection operator;
calculating the length of each detected edge;
selecting 4 edges in order from longest to shortest;
applying dilation and then erosion to the selected edges to obtain the spectacle-frame region.
Optionally, calculating the length of each detected edge specifically includes:
counting the number of contiguous pixels contained in each edge;
for each detected edge, taking the number of contiguous pixels it contains as the length of that edge.
Optionally, filling the identified spectacle-frame region with the mean value of the pixels contained in the region surrounding the spectacle-frame region specifically includes:
for each pixel contained in the spectacle-frame region, determining the neighborhood corresponding to that pixel; and
filling the pixel with the mean value of the pixels contained in the neighborhood.
In a second aspect, a device for removing a spectacle frame from an image is provided, including:
a first detection unit, configured to detect the eye region in the image using a preset eye detection algorithm;
an enlarging unit, configured to enlarge the detected eye region to obtain a glasses region;
a recognition unit, configured to identify the spectacle-frame region within the glasses region;
a filling unit, configured to fill the identified spectacle-frame region with the mean value of the pixels contained in the region surrounding the spectacle-frame region.
Optionally, the recognition unit includes:
a detection subunit, configured to detect the edges of the glasses region using the Canny edge detection operator;
a computation subunit, configured to calculate the length of each detected edge;
a selection subunit, configured to select 4 edges in order from longest to shortest;
a processing subunit, configured to apply dilation and then erosion to the selected edges to obtain the spectacle-frame region.
Optionally, the computation subunit is configured to count the number of contiguous pixels contained in each edge, and, for each detected edge, to take the number of contiguous pixels it contains as the length of that edge.
Optionally, the filling unit is specifically configured, for each pixel contained in the spectacle-frame region, to determine the neighborhood corresponding to that pixel, and to fill the pixel with the mean value of the pixels contained in the neighborhood.
In a third aspect, a computing device is provided, including at least one processing unit and at least one storage unit, wherein the storage unit stores a computer program which, when executed by the processing unit, causes the processing unit to perform the steps of any of the methods described above.
In a fourth aspect, a computer-readable medium is provided, which stores a computer program executable by a computing device; when the program runs on the computing device, it causes the computing device to perform the steps of any of the methods described above.
In the method, device and medium for removing a spectacle frame from an image provided by the embodiments of the present invention, the eye region is identified first, the eye region is then enlarged to obtain the glasses region, the spectacle-frame region is identified within the glasses region, and finally the spectacle-frame region is filled with the mean value of the pixels in the region surrounding it, thereby eliminating the spectacle frame from the image. In the above process, no special preprocessing of the image is required, so the difficulty of spectacle-frame removal is reduced and the accuracy of face recognition is improved.
Further features and advantages of the present invention will be set forth in the following description and will in part become apparent from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention may be realized and attained by the structure particularly pointed out in the written description, claims and accompanying drawings.
Brief description of the drawings
The accompanying drawings described here are provided for a further understanding of the invention and form a part of it; the schematic embodiments of the invention and their description serve to explain the invention and do not unduly limit it. In the drawings:
Fig. 1 is a flow diagram of the method for removing a spectacle frame from an image according to an embodiment of the present invention;
Fig. 2 is a flow diagram of identifying the spectacle-frame region according to an embodiment of the present invention;
Fig. 3 is a schematic diagram of the identified spectacle-frame edges according to an embodiment of the present invention;
Fig. 4 is a schematic diagram of the spectacle-frame region after dilation according to an embodiment of the present invention;
Fig. 5 is a schematic diagram of the spectacle-frame region after erosion according to an embodiment of the present invention;
Fig. 6a is a schematic diagram of determining the neighborhood of an arbitrary pixel according to an embodiment of the present invention;
Fig. 6b is a schematic diagram of the pixel-filling principle for the spectacle-frame region according to an embodiment of the present invention;
Fig. 7 is a structural diagram of the device for removing a spectacle frame from an image according to an embodiment of the present invention;
Fig. 8 is a structural diagram of the computing device according to an embodiment of the present invention.
Detailed description of the embodiments
In order to reduce the difficulty of spectacle-frame removal and improve the accuracy of face recognition, embodiments of the present invention provide a method, device and medium for removing a spectacle frame from an image.
Preferred embodiments of the invention are described below with reference to the accompanying drawings. It should be understood that the preferred embodiments described here serve only to illustrate and explain the invention and are not intended to limit it, and that, where no conflict arises, the embodiments of the invention and the features in the embodiments may be combined with one another.
As shown in Fig. 1, the flow of the method for removing a spectacle frame from an image provided by an embodiment of the present invention may include the following steps:
S11: detect the eye region in the image using a preset eye detection algorithm.
In a specific implementation, for a given image, the eye region contained in the image is first detected using the eye detection method in OpenCV (Open Source Computer Vision Library).
S12: enlarge the detected eye region to obtain the glasses region.
In this step, the eye region detected in step S11 can be enlarged; for example, the detected eye region may be enlarged by a factor of 1.1 or 1.2 to obtain the glasses region contained in the image.
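The patent gives no code for steps S11 and S12; the sketch below is only an illustration under our own assumptions. It takes a bounding box in the (x, y, w, h) form that OpenCV's `CascadeClassifier.detectMultiScale` would return for an eye, and enlarges it about its centre by the 1.1–1.2 factor mentioned above. Clamping the enlarged box to the image bounds is our addition; the patent does not discuss it.

```python
def enlarge_box(x, y, w, h, factor, img_w, img_h):
    """Scale an (x, y, w, h) box about its centre by `factor`, clamped to the image."""
    cx, cy = x + w / 2.0, y + h / 2.0   # box centre
    nw, nh = w * factor, h * factor     # enlarged width and height
    nx = max(0, int(round(cx - nw / 2.0)))
    ny = max(0, int(round(cy - nh / 2.0)))
    x2 = min(img_w, int(round(cx + nw / 2.0)))
    y2 = min(img_h, int(round(cy + nh / 2.0)))
    return nx, ny, x2 - nx, y2 - ny

# A 100x40 eye box enlarged 1.2x inside a 640x480 image:
print(enlarge_box(100, 100, 100, 40, 1.2, 640, 480))  # -> (90, 96, 120, 48)
```

In a real pipeline the input box would come from something like `cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml").detectMultiScale(gray)`; the enlarged box then defines the glasses region passed to step S13.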
S13: identify the spectacle-frame region within the glasses region.
S14: fill the identified spectacle-frame region with the mean value of the pixels in the region surrounding the spectacle-frame region.
In a specific implementation, the spectacle-frame region within the glasses region can be identified in step S13 according to the flow shown in Fig. 2:
S131: detect the edges of the glasses region using the Canny edge detection operator.
In this step, since the color of the spectacle-frame region is usually darker than that of the other regions, the pixel values of its pixels differ markedly from those of the other regions; on this basis, an edge detection algorithm can be used to locate the spectacle-frame region contained in the eye region. Specifically, the glasses region obtained in step S12 is taken as input and its edges are detected with the Canny operator.
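In practice `cv2.Canny` would be applied to the glasses region. As a self-contained illustration of why the dark frame produces strong edges, the numpy sketch below implements only the gradient-magnitude stage of edge detection; the real Canny operator adds Gaussian smoothing, non-maximum suppression and hysteresis thresholding on top. All values here are illustrative.

```python
import numpy as np

def gradient_edges(gray, thresh):
    """Central-difference gradient magnitude, thresholded to a binary edge map.
    Only the gradient stage of Canny; smoothing, non-maximum suppression and
    hysteresis are omitted in this sketch."""
    g = gray.astype(np.float64)
    gx = np.zeros_like(g)
    gy = np.zeros_like(g)
    gx[:, 1:-1] = g[:, 2:] - g[:, :-2]   # horizontal gradient
    gy[1:-1, :] = g[2:, :] - g[:-2, :]   # vertical gradient
    return (np.hypot(gx, gy) >= thresh).astype(np.uint8)

# A dark vertical "frame" bar on a light background yields edges on both sides:
img = np.full((5, 7), 200, np.uint8)
img[:, 3] = 20                       # dark frame column
print(gradient_edges(img, 100)[2])   # -> [0 0 1 0 1 0 0]
```

Note that the interior of the dark bar itself is not an edge: the strong responses sit on either side of it, which is exactly the pattern the later edge-length and morphology steps rely on.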
S132: calculate the length of each detected edge.
The edges in the glasses region are mainly the spectacle-frame edges and the eye edges, so detection typically yields six main edges: four spectacle-frame edges and two eye edges. Based on this, and on the fact that the spectacle-frame edges are longer than the eye edges, the length of each detected edge is calculated in step S132. Specifically, for each detected edge, the number of contiguous pixels contained in the edge can be counted, and this number is taken as the length of the edge.
S133: select 4 edges in order from longest to shortest.
Since the spectacle-frame edges are longer than the eye edges, the 4 longest edges make up the spectacle frame. Therefore, based on the edge lengths obtained in step S132, step S133 selects 4 edges in order from longest to shortest. As shown in Fig. 3, the 4 selected longer edges form the spectacle-frame outline, namely edge 1, edge 2, edge 3 and edge 4.
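Steps S132–S133 amount to labelling the connected components of the binary edge map and ranking them by pixel count. In OpenCV one would typically use `cv2.findContours` or `cv2.connectedComponentsWithStats`; the pure-Python sketch below (our simplification) shows the same idea on a toy edge map, counting contiguous pixels per 8-connected edge and keeping the longest ones.

```python
from collections import deque

def edge_components(edge_map):
    """Group edge pixels into 8-connected components and return their sizes,
    i.e. the 'number of contiguous pixels' used as edge length in the patent."""
    h, w = len(edge_map), len(edge_map[0])
    seen = [[False] * w for _ in range(h)]
    sizes = []
    for r in range(h):
        for c in range(w):
            if edge_map[r][c] and not seen[r][c]:
                n, q = 0, deque([(r, c)])
                seen[r][c] = True
                while q:                      # breadth-first flood fill
                    y, x = q.popleft()
                    n += 1
                    for dy in (-1, 0, 1):
                        for dx in (-1, 0, 1):
                            ny, nx = y + dy, x + dx
                            if 0 <= ny < h and 0 <= nx < w \
                                    and edge_map[ny][nx] and not seen[ny][nx]:
                                seen[ny][nx] = True
                                q.append((ny, nx))
                sizes.append(n)
    return sizes

edge_map = [
    [1, 1, 1, 1, 1, 0],   # one long edge: 5 pixels
    [0, 0, 0, 0, 0, 0],
    [0, 1, 1, 0, 0, 1],   # a 2-pixel edge and an isolated 1-pixel edge
]
sizes = edge_components(edge_map)
longest = sorted(sizes, reverse=True)[:2]   # the patent keeps the 4 longest; 2 here
print(sorted(sizes), longest)               # -> [1, 2, 5] [5, 2]
```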
S134: apply dilation and then erosion to the selected edges to obtain the spectacle-frame region.
Since step S133 only yields the four edges of the spectacle frame, in this step morphological processing is applied to the selected edges: dilation is used first to thicken the edges and fill the ring-shaped region along the frame edges, so that adjacent edges join together; erosion is then used to thin the outside of the edges, eliminating the region growth introduced by the dilation.
Specifically, a binary map 1 is generated from the four selected edges, and n dilation steps are applied to binary map 1 to ensure that the ring-shaped spectacle-frame region is completely filled, producing binary map 2. The spectacle-frame region after dilation is shown in Fig. 4; the dilated region includes filled region 1 (the grey filled region in the figure) and filled region 2 (the white filled region in the figure), and the lines in Fig. 4 are the edges of the spectacle-frame region.
Finally, n erosion steps are applied to binary map 2 to eliminate the pixels outside the ring-shaped region, yielding the precise position of the frame: as shown in Fig. 5, filled region 1 (the grey filled region) of Fig. 4 is removed and only the spectacle-frame region remains. Here n is a positive integer greater than 1.
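The dilate-then-erode sequence is a morphological closing (`cv2.dilate` followed by `cv2.erode` with `iterations=n` in OpenCV). The numpy sketch below, a toy version under our own assumptions, uses two parallel 1-pixel edges to stand in for a pair of frame edges and shows how n = 1 closing fills the gap between them while the erosion removes the outward expansion; the patent uses n > 1 on real frame edges.

```python
import numpy as np

def dilate(b):
    """One binary dilation step with a 3x3 square structuring element."""
    p = np.pad(b, 1)
    out = np.zeros_like(b)
    for dy in range(3):
        for dx in range(3):
            out |= p[dy:dy + b.shape[0], dx:dx + b.shape[1]]
    return out

def erode(b):
    """One binary erosion step with a 3x3 square structuring element."""
    p = np.pad(b, 1)
    out = np.ones_like(b)
    for dy in range(3):
        for dx in range(3):
            out &= p[dy:dy + b.shape[0], dx:dx + b.shape[1]]
    return out

# Binary map 1: two parallel frame edges one row apart.
b = np.zeros((5, 7), np.uint8)
b[1, 1:6] = 1
b[3, 1:6] = 1

closed = erode(dilate(b))   # dilation fills the gap, erosion trims the expansion
print(closed[2])            # -> [0 1 1 1 1 1 0]  (gap between the edges is filled)
```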
After the spectacle-frame region has been determined, it is filled using the mean of neighboring pixels. In a specific implementation, the spectacle-frame region has an inner edge and an outer edge, and the identified spectacle-frame region is filled using the mean value of the pixels outside the outer edge. Specifically, for each pixel contained in the spectacle-frame region, a corresponding neighborhood can be obtained according to the scale of the spectacle frame; the mean of the pixel values in the neighborhood is calculated and written into the corresponding pixel of the spectacle-frame region, so that the texture of the spectacle-frame region remains consistent with that of the surrounding regions.
In a specific implementation, for any pixel, its neighborhood can be determined as follows:
Step 1: for any pixel, determine the target pixels corresponding to that pixel.
Here, a target pixel has the same abscissa (x coordinate) as the pixel, and there can be several target pixels; for example, two target pixels may be used.
In a specific implementation, from the outer region with the same abscissa as the pixel (the region beyond the outer edge of the spectacle-frame region), pixels separated by r pixels from the pixel on the outer edge can be selected as the target pixels corresponding, in turn, to each pixel of the spectacle-frame region with that abscissa. Different values of r give different target pixels; preferably, two target pixels are selected, for example with r equal to 2 and 3 respectively.
As shown in Fig. 6b, pixels G1, G2, ..., Gn are all the pixels of the spectacle-frame region with the same abscissa, where G1 lies on the inner edge of the spectacle-frame region and Gn on its outer edge. In the outer region, the pixel with the same abscissa as Gn and separated from it by two pixels is S1,2, and the pixel separated from it by three pixels is S2,2. Then the two target pixels corresponding to G1 are S1,2 and S2,2; the two target pixels corresponding to G2 are S2,2 and S3,2; the two target pixels corresponding to G3 are S3,2 and S4,2; and so on, with the two target pixels corresponding to Gn being Sn,2 and Sn+1,2.
Step 2: with each target pixel as a center, take the region formed by the target pixel and its left and right adjacent pixels as the neighborhood of the pixel.
As shown in Fig. 6a, G1, ..., Gn denote all the pixels of the spectacle-frame region at a given position in the y direction, and S1,1, ..., Sn+1,3 denote the outer pixel region beyond the outer edge of the spectacle-frame region, where S1,2, S2,2, S3,2, ..., Sn+1,2 have the same x coordinate as G1, ..., Gn, and S1,2 is separated from Gn by r pixels.
Taking G1 as an example, its neighborhood can be determined as follows: choose the two uppermost rows of the outer pixel region beyond the outer edge, corresponding to the position of G1 in the spectacle-frame region; centering on the two pixels S1,2 and S2,2 with the same x coordinate as G1, take their respective left and right adjacent pixels S1,1, S1,3 and S2,1, S2,3. G1 is then filled with the mean of these six pixels. Similarly, G2 is filled with the mean of S2,1, S2,2, S2,3, S3,1, S3,2 and S3,3, and so on.
Fig. 6 illustrates the principle of pixel filling for the spectacle-frame region: region 61 denotes the spectacle-frame region, region 62 denotes the skin region adjacent to it, and G1, G2 and G3 are pixels at the edges of the spectacle-frame region, with G1 on the inner edge and G3 on the outer edge. In a specific implementation, pixel G1 can be filled with the mean of pixels S1, S2, S3, S4, S5 and S6; pixel G2 with the mean of pixels S4, S5, S6, S7, S8 and S9; and pixel G3 with the mean of pixels S7, S8, S9, S10, S11 and S12; and so on, until every pixel contained in the spectacle-frame region has been filled.
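The per-pixel scheme of Fig. 6 pairs each frame pixel with its own pair of target pixels. The sketch below simplifies this (our assumption, not the patent's exact scheme) by filling every frame pixel of a column with a single mean taken over six pixels just outside the outer edge: the two rows at offsets r and r + 1 below the outer edge, each with its left and right neighbours.

```python
import numpy as np

def fill_frame(img, mask, r=2):
    """Replace each masked (frame) pixel with the mean of six outside pixels:
    the pixels r and r+1 rows below the frame's outer edge in the same column,
    plus each one's left/right neighbours. Border columns are skipped for
    simplicity in this sketch."""
    out = img.astype(np.float64).copy()
    h, w = img.shape
    for x in range(1, w - 1):
        rows = np.nonzero(mask[:, x])[0]
        if rows.size == 0:
            continue
        outer = rows.max()               # outer-edge row in this column
        y1, y2 = outer + r, outer + r + 1
        if y2 >= h:
            continue
        patch = img[[y1, y1, y1, y2, y2, y2],
                    [x - 1, x, x + 1, x - 1, x, x + 1]]
        out[rows, x] = patch.mean()      # one mean per column (simplification)
    return out.astype(img.dtype)

img = np.full((6, 5), 180, np.uint8)     # light "skin"
img[1:3, :] = 30                         # dark horizontal "frame" band
mask = np.zeros((6, 5), bool)
mask[1:3, :] = True

filled = fill_frame(img, mask)
print(filled[1, 2], filled[2, 2])        # -> 180 180  (frame replaced by skin tone)
```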
In the method, device and medium for removing a spectacle frame from an image provided by the embodiments of the present invention, the eye region is identified first, the eye region is then enlarged to obtain the glasses region, the spectacle-frame region is identified within the glasses region, and finally the spectacle-frame region is filled with the mean value of the pixels in the region surrounding it, thereby eliminating the spectacle frame from the image. In the above process, no special preprocessing of the image is required, so the difficulty of spectacle-frame removal is reduced and the accuracy of face recognition is improved.
Based on the same inventive concept, an embodiment of the present invention further provides a device for removing a spectacle frame from an image. Since the principle by which this device solves the problem is similar to that of the method for removing a spectacle frame from an image, the implementation of the device may refer to the implementation of the method, and repeated details are not described again.
As shown in Fig. 7, the device for removing a spectacle frame from an image provided by an embodiment of the present invention may include:
a first detection unit 71, configured to detect the eye region in the image using a preset eye detection algorithm;
an enlarging unit 72, configured to enlarge the detected eye region to obtain the glasses region;
a recognition unit 73, configured to identify the spectacle-frame region within the glasses region;
a filling unit 74, configured to fill the identified spectacle-frame region with the mean value of the pixels contained in the region surrounding the spectacle-frame region.
Optionally, the recognition unit includes:
a detection subunit, configured to detect the edges of the glasses region using the Canny edge detection operator;
a computation subunit, configured to calculate the length of each detected edge;
a selection subunit, configured to select 4 edges in order from longest to shortest;
a processing subunit, configured to apply dilation and then erosion to the selected edges to obtain the spectacle-frame region.
Optionally, the computation subunit is configured to count the number of contiguous pixels contained in each edge, and, for each detected edge, to take the number of contiguous pixels it contains as the length of that edge.
Optionally, the filling unit is specifically configured, for each pixel contained in the spectacle-frame region, to determine the neighborhood corresponding to that pixel, and to fill the pixel with the mean value of the pixels contained in the neighborhood.
For convenience of description, the above parts are described as modules (or units) divided by function. Of course, when implementing the present invention, the functions of the modules (or units) may be realized in one or more pieces of software or hardware.
Having described the method and device for removing a spectacle frame from an image according to exemplary embodiments of the present invention, a computing device according to another exemplary embodiment of the present invention is introduced next.
Those skilled in the art will appreciate that the various aspects of the present invention may be implemented as a system, a method or a program product. Accordingly, aspects of the invention may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, microcode, etc.), or an embodiment combining hardware and software, which may here be referred to collectively as a "circuit", "module" or "system".
In some possible embodiments, a computing device according to the present invention may include at least one processing unit and at least one storage unit, the storage unit storing program code which, when executed by the processing unit, causes the processing unit to perform the steps of the method for removing a spectacle frame from an image according to the various exemplary embodiments of the present invention described above. For example, the processing unit may perform step S11 shown in Fig. 1, detecting the eye region in the image using a preset eye detection algorithm; step S12, enlarging the detected eye region to obtain the glasses region; step S13, identifying the spectacle-frame region within the glasses region; and step S14, filling the identified spectacle-frame region with the mean value of the pixels in the region surrounding the spectacle-frame region.
A computing device 80 according to this embodiment of the present invention is described below with reference to Fig. 8. The computing device 80 shown in Fig. 8 is only an example and should not impose any limitation on the functions or scope of use of the embodiments of the present invention.
As shown in Fig. 8, the computing device 80 takes the form of a general-purpose computing device. Its components may include, but are not limited to: the at least one processing unit 81, the at least one storage unit 82, and a bus 83 connecting the different system components (including the storage unit 82 and the processing unit 81).
The bus 83 represents one or more of several classes of bus structures, including a memory bus or memory controller, a peripheral bus, and a processor or local bus using any of a variety of bus architectures.
The storage unit 82 may include computer-readable storage media in the form of volatile memory, such as random access memory (RAM) 821 and/or cache memory 822, and may further include read-only memory (ROM) 823.
The storage unit 82 may also include a program/utility 825 having a set of (at least one) program modules 824; such program modules 824 include, but are not limited to, an operating system, one or more application programs, other program modules and program data, and each or some combination of these examples may include an implementation of a network environment.
The computing device 80 may also communicate with one or more external devices 84 (such as a keyboard, a pointing device, etc.), with one or more devices that enable a user to interact with the computing device 80, and/or with any device (such as a router, a modem, etc.) that enables the computing device 80 to communicate with one or more other computing devices. Such communication may take place through an input/output (I/O) interface 85. Furthermore, the computing device 80 may also communicate with one or more networks (such as a local area network (LAN), a wide area network (WAN) and/or a public network such as the Internet) through a network adapter 86. As shown in the figure, the network adapter 86 communicates with the other modules of the computing device 80 through the bus 83. It should be understood that, although not shown in the figure, other hardware and/or software modules may be used in combination with the computing device 80, including but not limited to microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives and data backup storage systems.
In some possible embodiments, the various aspects of spectacle-frame removing method may be used also in image provided by the invention
In the form of being embodied as a kind of program product, it includes program code, when described program product is run on a computing device,
Said program code is used for the exemplary realities various according to the present invention for making the computer equipment perform this specification foregoing description
The step in spectacle-frame removing method in the image of mode is applied, for example, the computer equipment can perform as shown in Figure 1
Step S11, the eye areas in default eye detection algorithm detection image, and step S12, the eyes area to detecting are utilized
Amplify to obtain the spectacle-frame region in lens area, and step S13, the identification lens area in domain;With step S14, utilization
The spectacle-frame region that the average filling that the perimeter in the spectacle-frame region includes pixel identifies.
Described program product can use any combination of one or more computer-readable recording mediums.Computer-readable recording medium can be readable letter
Number medium or readable storage medium storing program for executing.Readable storage medium storing program for executing for example may be-but not limited to-electricity, magnetic, optical, electromagnetic, red
The system of outside line or semiconductor, device or device, or any combination above.The more specifically example of readable storage medium storing program for executing
(non exhaustive list) includes:Electrical connection, portable disc with one or more conducting wires, hard disk, random access memory
(RAM), read-only storage (ROM), erasable programmable read only memory (EPROM or flash memory), optical fiber, portable compact disc
Read memory (CD-ROM), light storage device, magnetic memory device or above-mentioned any appropriate combination.
The program product for spectacle-frame removal according to embodiments of the present invention may employ a portable compact disc read-only memory (CD-ROM), include program code, and run on a computing device. However, the program product of the present invention is not limited to this; in this document, a readable storage medium may be any tangible medium that contains or stores a program, where the program may be used by, or in combination with, an instruction execution system, apparatus, or device.
A readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, which carries readable program code. Such a propagated data signal may take many forms, including, but not limited to, an electromagnetic signal, an optical signal, or any suitable combination of the above. A readable signal medium may also be any readable medium other than a readable storage medium that can send, propagate, or transmit a program for use by, or in combination with, an instruction execution system, apparatus, or device.
The program code contained on the readable medium may be transmitted over any suitable medium, including, but not limited to, wireless, wireline, optical cable, RF, and the like, or any suitable combination of the above.
Program code for carrying out the operations of the present invention may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server. In the case involving a remote computing device, the remote computing device may be connected to the user's computing device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computing device (for example, through the Internet using an Internet service provider).
It should be noted that although several units or subunits of the apparatus are mentioned in the detailed description above, this division is merely exemplary and not mandatory. In fact, according to embodiments of the present invention, the features and functions of two or more units described above may be embodied in a single unit. Conversely, the features and functions of one unit described above may be further divided and embodied by multiple units.
In addition, although the operations of the method of the present invention are described in the accompanying drawings in a particular order, this does not require or imply that these operations must be performed in that particular order, or that all of the operations shown must be performed, to achieve the desired result. Additionally or alternatively, some steps may be omitted, multiple steps may be combined into one step, and/or one step may be decomposed into multiple steps.
Those skilled in the art should understand that the embodiments of the present invention may be provided as a method, a system, or a computer program product. Therefore, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware. Moreover, the present invention may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, and the like) containing computer-usable program code.
The present invention is described with reference to flowcharts and/or block diagrams of methods, devices (systems), and computer program products according to embodiments of the present invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing device to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing device produce an apparatus for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or another programmable data processing device to work in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus that implements the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or another programmable data processing device, such that a series of operational steps are performed on the computer or other programmable device to produce computer-implemented processing, whereby the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
Although preferred embodiments of the present invention have been described, those skilled in the art, once aware of the basic inventive concept, can make other changes and modifications to these embodiments. Therefore, the appended claims are intended to be construed to include the preferred embodiments and all changes and modifications that fall within the scope of the present invention.
Obviously, those skilled in the art can make various changes and modifications to the present invention without departing from the spirit and scope of the present invention. Thus, if these modifications and variations of the present invention fall within the scope of the claims of the present invention and their technical equivalents, the present invention is also intended to include them.
Claims (10)
- 1. A method for removing a spectacle frame from an image, characterized by comprising: detecting an eye region in the image using a preset eye detection algorithm; enlarging the detected eye region to obtain a lens region; identifying a spectacle-frame region in the lens region; and filling the identified spectacle-frame region with the mean of the pixels contained in the region surrounding the spectacle-frame region.
- 2. The method according to claim 1, characterized in that identifying the spectacle-frame region in the lens region specifically comprises: detecting the edges of the lens region using the Canny edge detection operator; calculating the length of each detected edge; selecting 4 edges in order from longest to shortest; and performing dilation and erosion on the selected edges to obtain the spectacle-frame region.
- 3. The method according to claim 2, characterized in that calculating the length of each detected edge specifically comprises: counting the number of contiguous pixels contained in each edge; and, for each detected edge, taking the number of contiguous pixels the edge contains as the length of that edge.
- 4. The method according to claim 2, characterized in that filling the identified spectacle-frame region with the mean of the pixels contained in the region surrounding the spectacle-frame region specifically comprises: for each pixel contained in the spectacle-frame region, determining the neighborhood corresponding to that pixel; and filling the pixel with the mean of the pixels contained in the neighborhood.
- 5. An apparatus for removing a spectacle frame from an image, characterized by comprising: a first detection unit configured to detect an eye region in the image using a preset eye detection algorithm; an enlarging unit configured to enlarge the detected eye region to obtain a lens region; a recognition unit configured to identify a spectacle-frame region in the lens region; and a filling unit configured to fill the identified spectacle-frame region with the mean of the pixels contained in the region surrounding the spectacle-frame region.
- 6. The apparatus according to claim 5, characterized in that the recognition unit comprises: a detection subunit configured to detect the edges of the lens region using the Canny edge detection operator; a computation subunit configured to calculate the length of each detected edge; a selection subunit configured to select 4 edges in order from longest to shortest; and a processing subunit configured to perform dilation and erosion on the selected edges to obtain the spectacle-frame region.
- 7. The apparatus according to claim 6, characterized in that the computation subunit is configured to count the number of contiguous pixels contained in each edge and, for each detected edge, take the number of contiguous pixels the edge contains as the length of that edge.
- 8. The apparatus according to claim 7, characterized in that the filling unit is specifically configured to, for each pixel contained in the spectacle-frame region, determine the neighborhood corresponding to that pixel, and fill the pixel with the mean of the pixels contained in the neighborhood.
- 9. A computing device, characterized by comprising at least one processing unit and at least one storage unit, wherein the storage unit stores a computer program which, when executed by the processing unit, causes the processing unit to perform the steps of the method according to any one of claims 1 to 4.
- 10. A computer-readable medium, characterized in that it stores a computer program executable by a computing device, and when the program runs on the computing device, the computer program causes the computing device to perform the steps of the method according to any one of claims 1 to 4.
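The frame-identification steps of claims 2 and 3 (grouping edge pixels into edges, keeping the 4 longest, then dilating and eroding to get a solid frame mask) can be sketched as follows. This is an illustrative NumPy sketch, not the claimed implementation: 8-connectivity, the 3x3 structuring element, and the pure-Python labeling loop are assumptions, and in practice the edge map itself would come from an image library's Canny operator.

```python
import numpy as np

def connected_edges(edge_map):
    """Claim 3 (sketch): group a binary edge map into 8-connected edges;
    an edge's length is the number of contiguous pixels it contains."""
    h, w = edge_map.shape
    seen = np.zeros((h, w), dtype=bool)
    edges = []
    for sy in range(h):
        for sx in range(w):
            if edge_map[sy, sx] and not seen[sy, sx]:
                seen[sy, sx] = True
                stack, pixels = [(sy, sx)], []
                while stack:  # flood fill one connected component
                    y, x = stack.pop()
                    pixels.append((y, x))
                    for ny in range(max(0, y - 1), min(h, y + 2)):
                        for nx in range(max(0, x - 1), min(w, x + 2)):
                            if edge_map[ny, nx] and not seen[ny, nx]:
                                seen[ny, nx] = True
                                stack.append((ny, nx))
                edges.append(pixels)
    return edges

def dilate(mask):
    """3x3 binary dilation: a pixel is set if any neighbor is set."""
    p = np.pad(mask, 1)
    out = np.zeros_like(mask)
    for dy in range(3):
        for dx in range(3):
            out |= p[dy:dy + mask.shape[0], dx:dx + mask.shape[1]]
    return out

def erode(mask):
    """3x3 binary erosion: a pixel survives only if all neighbors are set."""
    p = np.pad(mask, 1)
    out = np.ones_like(mask)
    for dy in range(3):
        for dx in range(3):
            out &= p[dy:dy + mask.shape[0], dx:dx + mask.shape[1]]
    return out

def frame_region(edge_map, n_edges=4):
    """Claim 2 (sketch): keep the n longest edges, then dilate and erode
    (a morphological closing) to obtain the spectacle-frame mask."""
    mask = np.zeros(edge_map.shape, dtype=bool)
    for edge in sorted(connected_edges(edge_map), key=len, reverse=True)[:n_edges]:
        for y, x in edge:
            mask[y, x] = True
    return erode(dilate(mask))
```

Selecting only the longest edges discards short noise edges inside the lens region, and the dilate-then-erode pass bridges small gaps in the retained frame contours without thickening them permanently.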
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711158849.1A CN107945126B (en) | 2017-11-20 | 2017-11-20 | Method, device and medium for eliminating spectacle frame in image |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711158849.1A CN107945126B (en) | 2017-11-20 | 2017-11-20 | Method, device and medium for eliminating spectacle frame in image |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107945126A true CN107945126A (en) | 2018-04-20 |
CN107945126B CN107945126B (en) | 2022-02-18 |
Family
ID=61930300
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201711158849.1A Active CN107945126B (en) | 2017-11-20 | 2017-11-20 | Method, device and medium for eliminating spectacle frame in image |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107945126B (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109741406A (en) * | 2019-01-03 | 2019-05-10 | 广州广电银通金融电子科技有限公司 | A kind of body color recognition methods under monitoring scene |
Citations (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102163289A (en) * | 2011-04-06 | 2011-08-24 | 北京中星微电子有限公司 | Method and device for removing glasses from human face image, and method and device for wearing glasses in human face image |
CN102778211A (en) * | 2012-07-13 | 2012-11-14 | 东华大学 | Irregular-shaped spectacle frame superficial area detection device and method |
CN103020579A (en) * | 2011-09-22 | 2013-04-03 | 上海银晨智能识别科技有限公司 | Face recognition method and system, and removing method and device for glasses frame in face image |
CN104268523A (en) * | 2014-09-24 | 2015-01-07 | 上海洪剑智能科技有限公司 | Small-sample-based method for removing glasses frame in face image |
CN104408426A (en) * | 2014-11-27 | 2015-03-11 | 小米科技有限责任公司 | Method and device for removing glasses in face image |
CN105046250A (en) * | 2015-09-06 | 2015-11-11 | 广州广电运通金融电子股份有限公司 | Glasses elimination method for face recognition |
CN105513045A (en) * | 2015-11-20 | 2016-04-20 | 小米科技有限责任公司 | Image processing method, device and terminal |
CN105898322A (en) * | 2015-07-24 | 2016-08-24 | 乐视云计算有限公司 | Video watermark removing method and device |
CN106067016A (en) * | 2016-07-20 | 2016-11-02 | 深圳市飘飘宝贝有限公司 | A kind of facial image eyeglass detection method and device |
CN106503611A (en) * | 2016-09-09 | 2017-03-15 | 西安理工大学 | Facial image eyeglass detection method based on marginal information projective iteration mirror holder crossbeam |
CN106503644A (en) * | 2016-10-19 | 2017-03-15 | 西安理工大学 | Glasses attribute detection method based on edge projection and color characteristic |
CN107025628A (en) * | 2017-04-26 | 2017-08-08 | 广州帕克西软件开发有限公司 | A kind of virtual try-in method of 2.5D glasses and device |
2017
- 2017-11-20 CN CN201711158849.1A patent/CN107945126B/en active Active
Patent Citations (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102163289A (en) * | 2011-04-06 | 2011-08-24 | 北京中星微电子有限公司 | Method and device for removing glasses from human face image, and method and device for wearing glasses in human face image |
CN103020579A (en) * | 2011-09-22 | 2013-04-03 | 上海银晨智能识别科技有限公司 | Face recognition method and system, and removing method and device for glasses frame in face image |
CN102778211A (en) * | 2012-07-13 | 2012-11-14 | 东华大学 | Irregular-shaped spectacle frame superficial area detection device and method |
CN104268523A (en) * | 2014-09-24 | 2015-01-07 | 上海洪剑智能科技有限公司 | Small-sample-based method for removing glasses frame in face image |
CN104408426A (en) * | 2014-11-27 | 2015-03-11 | 小米科技有限责任公司 | Method and device for removing glasses in face image |
CN105898322A (en) * | 2015-07-24 | 2016-08-24 | 乐视云计算有限公司 | Video watermark removing method and device |
CN105046250A (en) * | 2015-09-06 | 2015-11-11 | 广州广电运通金融电子股份有限公司 | Glasses elimination method for face recognition |
CN105513045A (en) * | 2015-11-20 | 2016-04-20 | 小米科技有限责任公司 | Image processing method, device and terminal |
CN106067016A (en) * | 2016-07-20 | 2016-11-02 | 深圳市飘飘宝贝有限公司 | A kind of facial image eyeglass detection method and device |
CN106503611A (en) * | 2016-09-09 | 2017-03-15 | 西安理工大学 | Facial image eyeglass detection method based on marginal information projective iteration mirror holder crossbeam |
CN106503644A (en) * | 2016-10-19 | 2017-03-15 | 西安理工大学 | Glasses attribute detection method based on edge projection and color characteristic |
CN107025628A (en) * | 2017-04-26 | 2017-08-08 | 广州帕克西软件开发有限公司 | A kind of virtual try-in method of 2.5D glasses and device |
Non-Patent Citations (3)
Title |
---|
GUO, Pei: "Glasses Removal and Region Restoration in Face Images", China Master's Theses Full-text Database, Information Science and Technology * |
CHEN, Wenqing et al.: "Glasses Detection and Frame Removal in Face Images", HTTPS://NXGP.CNKI.NET/KCMS/DETAIL/11.2127.TP.20160120.1503.016.HTML * |
CHEN, Yue et al.: "Research on Steel Plate Surface Crack Detection Based on Image Pixel Tracking", Coal Mine Machinery and Electricity * |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109741406A (en) * | 2019-01-03 | 2019-05-10 | 广州广电银通金融电子科技有限公司 | A kind of body color recognition methods under monitoring scene |
Also Published As
Publication number | Publication date |
---|---|
CN107945126B (en) | 2022-02-18 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110874594B (en) | Human body appearance damage detection method and related equipment based on semantic segmentation network | |
EP2908287B1 (en) | Image segmentation device, image segmentation method, and depth map generating method | |
KR20210082234A (en) | Image processing method and apparatus, electronic device and storage medium | |
JP5671533B2 (en) | Perspective and parallax adjustment in stereoscopic image pairs | |
US9129435B2 (en) | Method for creating 3-D models by stitching multiple partial 3-D models | |
US20220383661A1 (en) | Method and device for retinal image recognition, electronic equipment, and storage medium | |
CN112348765A (en) | Data enhancement method and device, computer readable storage medium and terminal equipment | |
CN108734052A (en) | character detecting method, device and system | |
CN107622504B (en) | Method and device for processing pictures | |
CN110136153A (en) | A kind of image processing method, equipment and storage medium | |
EP3699808A1 (en) | Facial image detection method and terminal device | |
EP3089107B1 (en) | Computer program product and method for determining lesion similarity of medical image | |
CN111738080A (en) | Face detection and alignment method and device | |
CN114998320A (en) | Method, system, electronic device and storage medium for visual saliency detection | |
US9679219B2 (en) | Image feature classification | |
CN112966687B (en) | Image segmentation model training method and device and communication equipment | |
CN107622241A (en) | Display methods and device for mobile device | |
CN107945126A (en) | Spectacle-frame removing method, device and medium in a kind of image | |
CN110136140A (en) | Eye fundus image blood vessel image dividing method and equipment | |
CN116563172B (en) | VR globalization online education interaction optimization enhancement method and device | |
JP6819445B2 (en) | Information processing equipment, control methods, and programs | |
CN110390344A (en) | Alternative frame update method and device | |
CN109816791B (en) | Method and apparatus for generating information | |
EP2608152A1 (en) | Medical imaging diagnosis apparatus and medical imaging diagnosis method for providing diagnostic basis | |
JP2016045837A (en) | Information processing apparatus, image determination method, and program |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |