CN108257082B - Method and device for removing image fingers based on fixed area - Google Patents


Info

Publication number
CN108257082B
CN108257082B
Authority
CN
China
Prior art keywords: image, rect, area, sub, line
Prior art date
Legal status
Active
Application number
CN201810100143.8A
Other languages: Chinese (zh)
Other versions: CN108257082A (en)
Inventor
范国强
张龙彬
何佳文
Current Assignee
Beijing Viisan Technology Co ltd
Original Assignee
Beijing Viisan Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Beijing Viisan Technology Co ltd
Priority to CN201810100143.8A
Publication of CN108257082A
Application granted
Publication of CN108257082B
Legal status: Active
Anticipated expiration

Classifications

    • G06T3/04

Abstract

The invention provides a method and a device for removing fingers from an image based on a fixed area, the method comprising: acquiring a Rect area image of a predetermined area in an image; splitting the Rect area image into an upper Rect area sub-image and a lower Rect area sub-image; filling the upper Rect area sub-image using the image above its upper edge as the original image; filling the lower Rect area sub-image using the image below its lower edge as the original image; and performing color-penetration processing on the junction of the upper Rect area sub-image and the lower Rect area sub-image. The position of the finger is first limited by a selection frame, mirror-projection filling is then performed within the framed range, and color penetration is applied where the upper and lower projections meet, smoothing the splice. With this scheme, whatever the capture environment and whatever the skin color of the finger pressing the document, the framed area can be mapped and filled, so finger removal is both general and accurate.

Description

Method and device for removing image fingers based on fixed area
Technical Field
The invention relates to removing foreign objects from computer images, and in particular to a method and a device for removing fingers from a fixed area of a document image captured by a high-speed document camera.
Background
During image acquisition the photographed object often has to be pressed down by fingers to ensure complete imaging, particularly when scanning documents and books with high-speed document cameras and other non-contact scanning equipment. The fingers must then be removed from the image to keep the result clean.
The method currently used removes fingers with a skin-color detection algorithm, judging colors in the image against preset thresholds in the RGB and HSV color spaces. This approach has clear limitations: skin color differs between individuals, and colors in the image close to skin tones cannot be reliably distinguished. Imaging devices are also sensitive to the light source, so the same scene images differently under different lighting; existing high-speed cameras image poorly under some conditions, and skin-color detection of the finger then misjudges. Because the method assumes a single default skin color, it works only under the ideal conditions that the image contains no near-skin colors and imaging is stable; in other situations detection fails and the finger in the image cannot be removed.
Disclosure of Invention
The invention aims to provide a method and a device for removing fingers from an image based on a fixed area, to solve the problem that fingers in an image are easily misdetected and therefore cannot be removed reliably.
To solve this technical problem, as one aspect of the present invention, there is provided a method for removing a finger from an image based on a fixed area, comprising:
acquiring a Rect area image of a preset area in an image;
splitting the Rect region image into an upper Rect region sub-image and a lower Rect region sub-image; taking an image of a predetermined area at the upper edge of the upper Rect area sub-image as an upper original image, and mapping the upper original image to fill the upper Rect area sub-image; taking the image of the preset area at the lower edge of the lower Rect area sub-image as a lower original image, and mapping and filling the lower original image into the lower Rect area sub-image;
and performing color penetration processing on the junction of the upper Rect area sub-image and the lower Rect area sub-image.
Further, the step of splitting the Rect region image into an upper Rect region sub-image and a lower Rect region sub-image comprises:
dividing the Rect area image along a middle line L into an upper Rect area sub-image and a lower Rect area sub-image of equal size;
Further, the step of taking the image of the predetermined area at the upper edge of the upper Rect area sub-image as the upper original image and mapping the upper original image to fill the upper Rect area sub-image comprises:
determining the upper edge line L1 of the upper Rect area sub-image, selecting the image of the area of height N1 above the upper edge line L1 as the upper original image, and mapping the upper original image into the upper Rect area sub-image with the upper edge line L1 as the mapping boundary line;
if the distance between the upper edge line L1 and the middle line L is less than or equal to N1, mapping the area of the upper original image whose size corresponds to the upper Rect area sub-image into the upper Rect area sub-image;
and if the distance between the upper edge line L1 and the middle line L is greater than N1, mapping the upper original image into the upper Rect area sub-image in continuous segments of the size corresponding to the upper Rect area sub-image.
Further, the step of taking the image of the predetermined area at the lower edge of the lower Rect area sub-image as the lower original image and mapping the lower original image to fill the lower Rect area sub-image comprises:
determining the lower edge line L2 of the lower Rect area sub-image, selecting the image of the area of height N1 below the lower edge line L2 as the lower original image, and mapping the lower original image into the lower Rect area sub-image with the lower edge line L2 as the mapping boundary line;
if the distance between the lower edge line L2 and the middle line L is less than or equal to N1, mapping the area of the lower original image whose size corresponds to the lower Rect area sub-image into the lower Rect area sub-image;
and if the distance between the lower edge line L2 and the middle line L is greater than N1, mapping the lower original image into the lower Rect area sub-image in continuous segments of the size corresponding to the lower Rect area sub-image.
Further, the color-penetration processing of the junction between the upper Rect area sub-image and the lower Rect area sub-image comprises:
setting the weight of the original filling color value of Rect1 as Q1, and the weight of the color value of the pixels extended upward from Rect2 as Q2; the sum of Q1 and Q2 is always 1, both Q1 and Q2 equal 0.5 at the middle line L, and Q1 and Q2 vary according to the following formulas:
Q1=0.5+F*i,
Q2=0.5-F*i,
where i is the distance from the middle L-line, in pixels, and F is the magnitude of the weight change, which can be calculated by the following formula:
F=0.5/N,
wherein N is the number of extension pixels;
the new color value of the transition region at the image boundary is calculated by the following formula:
Value(i,j)=Q1*Value1(i,j)+Q2*Value2(i,j);
where the sum of the weights Q1 and Q2 equals 1; Value(i, j) is the new color value at pixel position (i, j); Value1 is the value filled by the projection of the upper Rect area sub-image, with Q1 its corresponding weight; and Value2 is the value filled by the projection of the lower Rect area sub-image, with Q2 its corresponding weight.
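As a worked illustration of these formulas, taking i as positive on the Rect1 side of the middle line L (an assumption about the sign convention, since the text gives only the distance): with N = 20 extension pixels,
F = 0.5 / 20 = 0.025,
and at a pixel 10 rows above the middle line L (i = 10),
Q1 = 0.5 + 0.025 × 10 = 0.75, Q2 = 0.5 - 0.025 × 10 = 0.25,
so Value(i, j) = 0.75 × Value1(i, j) + 0.25 × Value2(i, j); the blend leans toward the upper fill as the row moves away from L into Rect1.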
Further, the predetermined area is an image containing the finger area.
As another aspect of the present invention, there is also provided a device for removing a finger from an image based on a fixed area, comprising:
an acquisition unit configured to acquire a Rect region image of a predetermined region in an image;
the splitting and mapping unit is configured to split the Rect region image into an upper Rect region sub-image and a lower Rect region sub-image; taking an image of a predetermined area at the upper edge of the upper Rect area sub-image as an upper original image, and mapping the upper original image to fill the upper Rect area sub-image; taking the image of the preset area at the lower edge of the lower Rect area sub-image as a lower original image, and mapping and filling the lower original image into the lower Rect area sub-image;
and a penetration unit configured to perform color-penetration processing on the junction of the upper Rect area sub-image and the lower Rect area sub-image.
Further, the split mapping unit includes:
dividing the Rect area image into an upper Rect area sub-image and a lower Rect area sub-image of upper and lower equal-size areas according to a middle L line;
determining an upper edge L1 line of the sub-image of the upper Rect region, selecting an image of an area with a height of N1 on the upper edge L1 line as an upper original image, and mapping and filling the upper original image into the sub-image of the upper Rect region by taking the upper edge L1 line as a mapping boundary line;
if the distance between the upper edge L1 line and the middle L line is smaller than or equal to N1, filling an area in the upper original image, which corresponds to the size of the upper Rect area sub-image, into the upper Rect area sub-image in a mapping mode;
and if the distance between the upper edge L1 line and the middle L line is greater than N1, filling the upper original image into the upper Rect area sub-image according to the continuous segmentation mapping of the area with the size corresponding to the upper Rect area sub-image.
Further, the split mapping unit further includes:
determining the lower edge line L2 of the lower Rect area sub-image, selecting the image of the area of height N1 below the lower edge line L2 as the lower original image, and mapping the lower original image into the lower Rect area sub-image with the lower edge line L2 as the mapping boundary line;
if the distance between the lower edge line L2 and the middle line L is less than or equal to N1, mapping the area of the lower original image whose size corresponds to the lower Rect area sub-image into the lower Rect area sub-image;
and if the distance between the lower edge line L2 and the middle line L is greater than N1, mapping the lower original image into the lower Rect area sub-image in continuous segments of the size corresponding to the lower Rect area sub-image.
Further, the penetration unit comprises:
setting the weight of the original filling color value of Rect1 as Q1, and the weight of the color value of the pixels extended upward from Rect2 as Q2; the sum of Q1 and Q2 is always 1, both Q1 and Q2 equal 0.5 at the middle line L, and Q1 and Q2 vary according to the following formulas:
Q1=0.5+F*i,
Q2=0.5-F*i,
where i is the distance from the middle L-line, in pixels, and F is the magnitude of the weight change, which can be calculated by the following formula:
F=0.5/N,
wherein N is the number of extension pixels;
the new color value of the transition region at the image boundary is calculated by the following formula:
Value(i,j)=Q1*Value1(i,j)+Q2*Value2(i,j);
where the sum of the weights Q1 and Q2 equals 1; Value(i, j) is the new color value at pixel position (i, j); Value1 is the value filled by the projection of the upper Rect area sub-image, with Q1 its corresponding weight; and Value2 is the value filled by the projection of the lower Rect area sub-image, with Q2 its corresponding weight.
Further, the predetermined area is an image containing the finger area.
Compared with the prior art, the invention has the beneficial effects that:
the invention provides a method for removing image fingers based on a fixed area, which comprises the following steps: acquiring a Rect area image of a preset area in an image; splitting the Rect region image into an upper Rect region sub-image and a lower Rect region sub-image; filling the upper Rect area sub-image by taking the image of the upper edge area of the upper Rect area sub-image as an original image; filling the lower Rect area sub-image by taking the image of the lower edge area of the lower Rect area sub-image as an original image; and performing color penetration processing on the junction of the upper Rect area sub-image and the lower Rect area sub-image. The method comprises the steps of firstly carrying out frame selection limitation on the position of a finger, then carrying out projection filling in the frame selection range, and carrying out color penetration on the spliced positions of the upper projection and the lower projection, thereby realizing the smooth processing of the spliced positions. By adopting the scheme, no matter what environment the user collects the image and is not pressed by the finger with fixed skin color, the mapping and filling in the frame selection area can be realized, and the universality and the accuracy of finger removal are realized.
Drawings
FIG. 1 is a flow chart of a method for removing an image finger based on a fixed area according to an embodiment of the invention;
FIG. 2 is a schematic diagram illustrating edge division of an image according to an embodiment of the present invention;
FIG. 3 is a schematic diagram illustrating a map-fill flow of an embodiment of the invention;
FIG. 4 is a schematic diagram of a device for removing a finger from an image based on a fixed area according to an embodiment of the present invention.
Detailed Description
The following is a detailed description of embodiments of the invention; the invention can, however, be practiced in many different ways, as defined and covered by the claims.
As shown in fig. 1, an embodiment of the present invention provides a method for removing a finger from an image based on a fixed area; the specific process is as follows:
step 10, collecting a Rect area image of a preset area in an image;
step 20, splitting the Rect area image into an upper Rect area sub-image and a lower Rect area sub-image; taking the image of the predetermined area above the upper edge of the upper Rect area sub-image as the upper original image, and mapping the upper original image to fill the upper Rect area sub-image; taking the image of the predetermined area below the lower edge of the lower Rect area sub-image as the lower original image, and mapping the lower original image to fill the lower Rect area sub-image;
and step 30, performing color penetration processing on the junction of the upper Rect area sub-image and the lower Rect area sub-image.
In step 20, the Rect area image is divided along its horizontal middle line into an upper Rect area sub-image and a lower Rect area sub-image. The part above the upper edge line L1 of the upper Rect area sub-image is mapped to fill the upper Rect area sub-image, and the part below the lower edge line L2 of the lower Rect area sub-image is mapped to fill the lower Rect area sub-image; the two filled sub-images are then, in effect, spliced back into the Rect area image.
The position of the finger is first limited by a selection frame, mirror-projection filling is then performed within the framed range, and color penetration is applied where the upper and lower projections meet, smoothing the splice. With this scheme, whatever the capture environment and whatever the skin color of the finger pressing the document, the framed area can be mapped and filled, so finger removal is both general and accurate.
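To make the flow concrete, the sketch below outlines the whole pipeline in Python/NumPy. It is a minimal illustration under stated assumptions, not the patent's reference implementation: the image is assumed to be an H×W×C NumPy array, rect = (x, y, w, h) is the user-framed finger region, and fill_mirrored and blend_band are hypothetical helpers sketched after the detailed steps below.

```python
import numpy as np

def remove_finger(image: np.ndarray, rect: tuple,
                  n1: int = 100, n: int = 20) -> np.ndarray:
    """Remove the finger inside rect = (x, y, w, h).
    n1 is the mappable source-band height N1 (preferably 100 pixels);
    n is the seam half-width N (preferably 20 pixels)."""
    x, y, w, h = rect
    mid = y + h // 2                     # middle line L
    out = image.copy()
    # Step 20: mirror-fill each half; each fill extends n rows past L
    # so the two fills overlap in the blending band.
    upper = fill_mirrored(image, x, w, y, mid + n, edge=y, below=False, n1=n1)
    lower = fill_mirrored(image, x, w, mid - n, y + h, edge=y + h, below=True, n1=n1)
    out[y:mid, x:x + w] = upper[:mid - y]
    out[mid:y + h, x:x + w] = lower[n:]
    # Step 30: color-penetrate the 2n-row band around the middle line L.
    out[mid - n:mid + n, x:x + w] = blend_band(upper[mid - n - y:], lower[:2 * n], n)
    return out
```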
In step 10, the predetermined area is an image including a finger area.
As shown in fig. 2 and 3, step 20 specifically includes:
step 21, dividing the Rect area image along the middle line L into an upper Rect area sub-image and a lower Rect area sub-image of equal size;
The middle line L at the vertical midpoint of the Rect area image divides it into two parts, the upper Rect area sub-image and the lower Rect area sub-image.
Step 22, determining an upper edge L1 line of the sub-image of the upper Rect area, selecting an image of an area with a height of N1 on the upper edge L1 line as an upper original image, and mapping and filling the upper original image into the sub-image of the upper Rect area by taking the upper edge L1 line as a mapping boundary line;
step 23, determining a lower edge L2 line of the sub-image of the lower Rect area, selecting an image of an area with a height of N1 below the lower edge L1 line as a lower original image, and mapping and filling the lower original image into the sub-image of the lower Rect area by taking the lower edge L2 line as a mapping boundary line;
as shown in fig. 2, an upper Rect area sub-image filling range is determined, the upper Rect area sub-image (Rect1) is filled between the upper edge line L1 and L + N, a lower Rect area sub-image (Rect2) filling range is determined, the lower Rect area sub-image is filled between the lower edge line L2 and L-N, and the upper Rect area sub-image and the lower Rect area sub-image respectively have an overlapping area with a height of N.
In order to ensure the continuity of the final mapping fill on the image, it is necessary to set the range of mappable fills, as shown in fig. 2, both the range of the distance N1 on the side of L1 and the range of the distance N1 on the side of L2 are mappable. Preferably 100 pixels apart.
Step 24: if the distance between the upper edge line L1 and the middle line L is less than or equal to N1, mapping the area of the upper original image whose size corresponds to the upper Rect area sub-image into the upper Rect area sub-image; if the distance is greater than N1, mapping the upper original image into the upper Rect area sub-image in continuous segments of the corresponding size.
Step 25: if the distance between the lower edge line L2 and the middle line L is less than or equal to N1, mapping the area of the lower original image whose size corresponds to the lower Rect area sub-image into the lower Rect area sub-image; if the distance is greater than N1, mapping the lower original image into the lower Rect area sub-image in continuous segments of the corresponding size.
Mapping in continuous segments into the upper Rect area sub-image works as follows: the upper original image is first mirrored into the sub-image about the upper edge line L1 as the axis of symmetry; the lower boundary line of the region just filled then becomes the new axis, and the source is mirrored downward again, which keeps each junction inside the sub-image continuous and smooth; this repeats until the upper Rect area sub-image is completely filled.
The lower Rect area sub-image is filled in essentially the same way, keeping the filled area in smooth transition: the lower original image is mirrored into the sub-image about the lower edge line L2, then repeatedly mirrored upward about the upper boundary line of the region just filled, until the lower Rect area sub-image is completely filled.
As shown in fig. 2, the region between L1 and L is filled from the band of height N1 above the upper edge line L1, with the upper edge line L1 serving as the boundary.
If the distance between L1 and L is less than or equal to N1, the region between L1 and L can be filled by mapping the original image symmetrically about L1.
If the distance between L1 and L is greater than N1, segmented mapping filling is needed. Let D be the distance of the fill position from L1.
When D ≤ N1, symmetric mapping filling is used. When N1 < D < 2×N1, the mapping position has passed the limit of the mappable band, and continuing to fill requires inverting the source image again; this keeps the transition at the inversion smooth, but leaves a visible stripe texture in the final image. N1 is preferably 100 pixels, which largely avoids obvious stripes.
The lower Rect area sub-image is filled according to the same principle.
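A minimal sketch of this segmented mirror filling, assuming an H×W×C NumPy image whose rect sits far enough from the image border that the N1-high source bands exist; fold_offset and fill_mirrored are hypothetical names, not taken from the patent.

```python
import numpy as np

def fold_offset(d: int, n1: int) -> int:
    # Fold the 0-based distance d from the mirror edge back into the source
    # band of height n1 by repeated reflection: mirror for the first n1 rows,
    # flip back for the next n1, and so on ("continuous segmented mapping").
    q = d % (2 * n1)
    return q if q < n1 else 2 * n1 - 1 - q

def fill_mirrored(image: np.ndarray, x: int, w: int,
                  row_start: int, row_stop: int,
                  edge: int, below: bool, n1: int = 100) -> np.ndarray:
    """Compute the mirror-mapped fill for columns [x, x+w) and rows
    [row_start, row_stop). The source is the band of height n1 just
    outside `edge`: above it for the upper sub-image (edge = L1,
    below=False), below it for the lower sub-image (edge = L2, below=True)."""
    rows = []
    for row in range(row_start, row_stop):
        d = (edge - 1 - row) if below else (row - edge)
        off = fold_offset(d, n1)
        src = (edge + off) if below else (edge - 1 - off)
        rows.append(image[src, x:x + w])
    return np.stack(rows)
```

Because the source is re-inverted every N1 rows, a fill deeper than N1 shows the periodic stripe texture described above; the larger N1 is, the less visible it becomes.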
Step 30 specifically comprises:
at this time, the act area image mapping and filling is equivalent to that the upper half part of the line L1 of the upper edge of the upper act area sub-image and the lower half part of the line L2 of the lower edge of the lower act area sub-image are spliced, and generally, colors of the spliced part jump obviously. At this time, the splicing part needs to be smoothed, and the intermediate L-line position is respectively filled with the upper-N pixel-extended Rect region sub-images and the lower-N pixel-extended Rect region sub-images, where N is preferably 20 pixels and is within the range of the Rect region image. And two parts of mapping color values exist in the extended range, and different weights are adopted to add to obtain a new color value. The method comprises the following specific steps:
taking the overlapping area of 20 pixels upward from the middle L line as an example, the original filling color value weight of Rect1 is set as a weight Q1, and the color value weight of a pixel extended upward by Rect2 is set as a weight Q2; the sum of the weight Q1 and the weight Q2 is always kept at 1, the weight Q1 and the weight Q2 are both 0.5 at the middle L line, and the transformation of the weight Q1 and the weight Q2 can be calculated by the following formula:
Q1=0.5+F*i;
Q2=0.5-F*i;
where i is the distance from the middle L-line, in pixels, and F is the magnitude of the weight change, which can be calculated by the following formula:
F=0.5/N;
wherein N is the number of extension pixels;
the new color value of the transition region at the image boundary is calculated by the following formula:
Value(i,j)=Q1*Value1(i,j)+Q2*Value2(i,j);
where the sum of the weights Q1 and Q2 equals 1; Value(i, j) is the new color value at pixel position (i, j); Value1 is the value filled by the projection of the upper Rect area sub-image, with Q1 its corresponding weight; and Value2 is the value filled by the projection of the lower Rect area sub-image, with Q2 its corresponding weight.
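A sketch of this weighted seam blend under the same NumPy assumptions; blend_band is a hypothetical helper that takes the two fills' overlapping 2N-row band (upper fill first) and returns the blended band.

```python
import numpy as np

def blend_band(upper: np.ndarray, lower: np.ndarray, n: int) -> np.ndarray:
    """Blend two (2n, w, c) fills over the band centred on the middle line L.
    Row 0 lies n pixels above L; row 2n-1 lies n-1 pixels below it.
    Q1 weights the upper (Rect1) fill, Q2 the lower (Rect2) fill."""
    f = 0.5 / n                              # F = 0.5 / N
    i = n - np.arange(2 * n)                 # signed distance from L, in pixels
    q1 = (0.5 + f * i)[:, None, None]        # Q1 = 0.5 + F*i
    q2 = (0.5 - f * i)[:, None, None]        # Q2 = 0.5 - F*i, so Q1 + Q2 == 1
    # Value(i, j) = Q1 * Value1(i, j) + Q2 * Value2(i, j)
    blended = q1 * upper.astype(np.float64) + q2 * lower.astype(np.float64)
    return blended.round().astype(upper.dtype)
```

With N = 20 the weights change by 0.025 per row, so the upper fill fades out and the lower fill fades in across a 40-row band, hiding the color jump at the splice.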
According to another aspect of the embodiments of the present invention, there is further provided a device for removing a finger from an image based on a fixed area, as shown in fig. 4, comprising:
an acquisition unit configured to acquire a Rect region image of a predetermined region in an image;
a splitting and mapping unit configured to split the Rect area image into an upper Rect area sub-image and a lower Rect area sub-image; take the image of the predetermined area above the upper edge of the upper Rect area sub-image as the upper original image and map it to fill the upper Rect area sub-image; and take the image of the predetermined area below the lower edge of the lower Rect area sub-image as the lower original image and map it to fill the lower Rect area sub-image;
and a penetration unit configured to perform color-penetration processing on the junction of the upper Rect area sub-image and the lower Rect area sub-image.
The splitting and mapping unit comprises:
dividing the Rect area image along the middle line L into an upper Rect area sub-image and a lower Rect area sub-image of equal size;
determining an upper edge L1 line of the sub-image of the upper Rect area, selecting an image of an area with the height of N1 on an upper edge L1 line as an upper original image, and mapping and filling the upper original image into the sub-image of the upper Rect area by taking an upper edge L1 line as a mapping boundary line;
if the distance between the upper edge line L1 and the middle line L is smaller than or equal to N1, mapping and filling an area with the size corresponding to the upper Rect area sub-image in the upper original image into the upper Rect area sub-image;
and if the distance between the upper edge L1 line and the middle L line is larger than N1, filling the upper original image into the upper Rect area sub-image according to the continuous segmentation mapping of the area with the size corresponding to the upper Rect area sub-image.
The split mapping unit further comprises:
determining the lower edge line L2 of the lower Rect area sub-image, selecting the image of the area of height N1 below the lower edge line L2 as the lower original image, and mapping the lower original image into the lower Rect area sub-image with the lower edge line L2 as the mapping boundary line;
if the distance between the lower edge line L2 and the middle line L is smaller than or equal to N1, mapping and filling an area with the size corresponding to the size of the lower Rect area sub-image in the lower original image into the lower Rect area sub-image;
and if the distance between the lower edge line L2 and the middle line L is greater than N1, mapping the lower original image into the lower Rect area sub-image in continuous segments of the size corresponding to the lower Rect area sub-image.
The penetration unit comprises:
setting the weight of the original filling color value of Rect1 as Q1, and the weight of the color value of the pixels extended upward from Rect2 as Q2; the sum of Q1 and Q2 is always 1, both Q1 and Q2 equal 0.5 at the middle line L, and Q1 and Q2 vary according to the following formulas:
Q1=0.5+F*i,
Q2=0.5-F*i,
where i is the distance from the middle L-line, in pixels, and F is the magnitude of the weight change, which can be calculated by the following formula:
F=0.5/N,
wherein N is the number of extension pixels;
the new color value of the transition region at the image boundary is calculated by the following formula:
Value(i,j)=Q1*Value1(i,j)+Q2*Value2(i,j);
where the sum of the weights Q1 and Q2 equals 1; Value(i, j) is the new color value at pixel position (i, j); Value1 is the value filled by the projection of the upper Rect area sub-image, with Q1 its corresponding weight; and Value2 is the value filled by the projection of the lower Rect area sub-image, with Q2 its corresponding weight.
In the acquisition unit, the predetermined area is an image containing the finger area.
The above description covers only preferred embodiments of the present invention and is not intended to limit it; those skilled in the art may make various modifications and changes. Any modification, equivalent replacement, or improvement made within the spirit and principles of the present invention shall fall within its scope of protection.

Claims (4)

1. A method for removing a finger from an image based on a fixed area, comprising:
acquiring a Rect area image of a preset area in an image;
splitting the Rect region image into an upper Rect region sub-image and a lower Rect region sub-image; taking an image of a predetermined area at the upper edge of the upper Rect area sub-image as an upper original image, and mapping the upper original image to fill the upper Rect area sub-image; taking the image of the preset area at the lower edge of the lower Rect area sub-image as a lower original image, and mapping and filling the lower original image into the lower Rect area sub-image;
performing color-penetration processing on the junction of the upper Rect area sub-image and the lower Rect area sub-image; wherein the step of splitting the Rect region image into an upper Rect region sub-image and a lower Rect region sub-image comprises the following steps:
dividing the Rect area image into an upper Rect area sub-image and a lower Rect area sub-image of upper and lower equal-size areas according to a middle L line;
the step of taking the image of the predetermined area of the upper edge of the upper Rect area sub-image as the upper original image and mapping and filling the upper original image into the upper Rect area sub-image includes:
determining an upper edge L1 line of the sub-image of the upper Rect region, selecting an image of an area with a height of N1 on the upper edge L1 line as an upper original image, and mapping and filling the upper original image into the sub-image of the upper Rect region by taking the upper edge L1 line as a mapping boundary line;
if the distance between the upper edge L1 line and the middle L line is smaller than or equal to N1, filling an area in the upper original image, which corresponds to the size of the upper Rect area sub-image, into the upper Rect area sub-image in a mapping mode;
if the distance between the upper edge line L1 and the middle line L is larger than N1, the upper original image is filled into the upper Rect area sub-image according to the continuous segmentation mapping of the area with the size corresponding to the upper Rect area sub-image; the step of using the image of the predetermined area of the lower edge of the lower Rect area sub-image as the lower original image and mapping and filling the lower original image into the lower Rect area sub-image includes:
determining a lower edge L2 line of the sub-image of the lower Rect region, selecting an image of an area with a height N1 below the lower edge L2 line as a lower original image, and mapping and filling the lower original image into the sub-image of the lower Rect region by taking the lower edge L2 line as a mapping boundary line;
if the distance between the lower edge L2 line and the middle L line is smaller than or equal to N1, filling an area in the lower original image, which corresponds to the size of the lower Rect area sub-image, into the lower Rect area sub-image in a mapping mode;
if the distance between the lower edge L2 line and the middle L line is larger than N1, filling the lower original image into the lower Rect area sub-image according to the continuous segmentation mapping of the area with the size corresponding to the lower Rect area sub-image; wherein the step of color-penetration processing the junction of the upper Rect area sub-image and the lower Rect area sub-image comprises:
setting the weight of the original filling color value of Rect1 as a weight Q1, and the weight of the color value of the pixel extended upwards by Rect2 as a weight Q2; the sum of the weight Q1 and the weight Q2 is always kept at 1, the weight Q1 and the weight Q2 are both 0.5 at the middle L line, and the transformation of the weight Q1 and the weight Q2 is calculated by the following formula:
Q1=0.5+F*i,
Q2=0.5-F*i,
where i is the distance from the middle L-line, in pixels, and F is the magnitude of the weight change, and F is calculated by the following formula:
F=0.5/N,
wherein N is the number of extension pixels;
the new color value of the transition region at the image boundary is calculated by the following formula:
Value(i,j)=Q1*Value1(i,j)+Q2*Value2(i,j);
where the sum of the weights Q1 and Q2 equals 1; Value(i, j) is the new color value at pixel position (i, j); Value1 is the value filled by the projection of the upper Rect area sub-image, with Q1 its corresponding weight; and Value2 is the value filled by the projection of the lower Rect area sub-image, with Q2 its corresponding weight.
2. The method for removing a finger from an image based on a fixed area according to claim 1, wherein the predetermined area is an image containing the finger area.
3. A device for removing a finger from an image based on a fixed area, comprising:
an acquisition unit configured to acquire a Rect region image of a predetermined region in an image;
the splitting and mapping unit is configured to split the Rect region image into an upper Rect region sub-image and a lower Rect region sub-image; taking an image of a predetermined area at the upper edge of the upper Rect area sub-image as an upper original image, and mapping the upper original image to fill the upper Rect area sub-image; taking the image of the preset area at the lower edge of the lower Rect area sub-image as a lower original image, and mapping and filling the lower original image into the lower Rect area sub-image;
a penetration unit configured to perform color-penetration processing on the junction of the upper Rect area sub-image and the lower Rect area sub-image; wherein the splitting and mapping unit comprises:
dividing the Rect area image into an upper Rect area sub-image and a lower Rect area sub-image of upper and lower equal-size areas according to a middle L line;
determining an upper edge L1 line of the sub-image of the upper Rect region, selecting an image of an area with a height of N1 on the upper edge L1 line as an upper original image, and mapping and filling the upper original image into the sub-image of the upper Rect region by taking the upper edge L1 line as a mapping boundary line;
if the distance between the upper edge L1 line and the middle L line is smaller than or equal to N1, filling an area in the upper original image, which corresponds to the size of the upper Rect area sub-image, into the upper Rect area sub-image in a mapping mode;
if the distance between the upper edge line L1 and the middle line L is larger than N1, the upper original image is filled into the upper Rect area sub-image according to the continuous segmentation mapping of the area with the size corresponding to the upper Rect area sub-image; the split mapping unit further comprises:
determining a lower edge L2 line of the sub-image of the lower Rect region, selecting an image of an area with a height N1 below the lower edge L2 line as a lower original image, and mapping and filling the lower original image into the sub-image of the lower Rect region by taking the lower edge L2 line as a mapping boundary line;
if the distance between the lower edge L2 line and the middle L line is smaller than or equal to N1, filling an area in the lower original image, which corresponds to the size of the lower Rect area sub-image, into the lower Rect area sub-image in a mapping mode;
if the distance between the lower edge L2 line and the middle L line is larger than N1, filling the lower original image into the lower Rect area sub-image according to the continuous segmentation mapping of the area with the size corresponding to the lower Rect area sub-image; wherein the penetration unit comprises:
setting the weight of the original filling color value of Rect1 as a weight Q1, and the weight of the color value of the pixel extended upwards by Rect2 as a weight Q2; the sum of the weight Q1 and the weight Q2 is always kept at 1, the weight Q1 and the weight Q2 are both 0.5 at the middle L line, and the transformation of the weight Q1 and the weight Q2 is calculated by the following formula:
Q1=0.5+F*i,
Q2=0.5-F*i,
where i is the distance from the middle L-line, in pixels, and F is the magnitude of the weight change, and F is calculated by the following formula:
F=0.5/N,
wherein N is the number of extension pixels;
the new color value of the transition region at the image boundary is calculated by the following formula:
Value(i,j)=Q1*Value1(i,j)+Q2*Value2(i,j);
where the sum of the weights Q1 and Q2 equals 1; Value(i, j) is the new color value at pixel position (i, j); Value1 is the value filled by the projection of the upper Rect area sub-image, with Q1 its corresponding weight; and Value2 is the value filled by the projection of the lower Rect area sub-image, with Q2 its corresponding weight.
4. The device for removing a finger from an image based on a fixed area according to claim 3, wherein the predetermined area is an image containing the finger area.
CN201810100143.8A 2018-02-01 2018-02-01 Method and device for removing image fingers based on fixed area Active CN108257082B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810100143.8A CN108257082B (en) 2018-02-01 2018-02-01 Method and device for removing image fingers based on fixed area


Publications (2)

Publication Number Publication Date
CN108257082A (en) 2018-07-06
CN108257082B (en) 2021-08-17

Family

ID=62743213

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810100143.8A Active CN108257082B (en) 2018-02-01 2018-02-01 Method and device for removing image fingers based on fixed area

Country Status (1)

Country Link
CN (1) CN108257082B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113014846B (en) * 2019-12-19 2022-07-22 Huawei Technologies Co., Ltd. Video acquisition control method, electronic equipment and computer readable storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103886561A (en) * 2014-04-09 2014-06-25 武汉科技大学 Criminisi image inpainting method based on mathematical morphology
US8952979B2 (en) * 2012-09-19 2015-02-10 Autodesk, Inc. Wave fill
CN105898322A (en) * 2015-07-24 2016-08-24 乐视云计算有限公司 Video watermark removing method and device
CN106408952A (en) * 2016-12-14 2017-02-15 浙江工业大学 Motor vehicle violation shooting system and method


Also Published As

Publication number Publication date
CN108257082A (en) 2018-07-06

Similar Documents

Publication Publication Date Title
US9179035B2 (en) Method of editing static digital combined images comprising images of multiple objects
US8031941B2 (en) Image display apparatus, image display method, and image display program
US8798361B2 (en) Mapping colors of an image
KR20110103409A (en) Image segmentation
US20110200259A1 (en) Digital image manipulation
CN109829904B (en) Method and device for detecting dust on screen, electronic equipment and readable storage medium
US9064178B2 (en) Edge detection apparatus, program and method for edge detection
CN107079112B (en) Method, system and computer readable storage medium for dividing image data
JP3993029B2 (en) Makeup simulation apparatus, makeup simulation method, makeup simulation program, and recording medium recording the program
CN108573251A (en) Character area localization method and device
CN110782470B (en) Carpal bone region segmentation method based on shape information
CN104751406A (en) Method and device used for blurring image
CN103852034A (en) Elevator guide rail perpendicularity detection method
CN108257082B (en) Method and device for removing image fingers based on fixed area
US11216905B2 (en) Automatic detection, counting, and measurement of lumber boards using a handheld device
CN107909579B (en) Product profile extraction method in vision-based detection
CN109741377A (en) A kind of image difference detection method
CN105631868A (en) Depth information extraction method based on image classification
CN111401341B (en) Deceleration strip detection method and device based on vision and storage medium thereof
JP2013161348A (en) Image correction device, method, program, and recording medium
JP5051671B2 (en) Information processing apparatus, information processing method, and program
CN107563992B (en) Method and device for detecting breast skin line
WO2015080321A1 (en) Method for detecting object having excessive disparity
JP6459528B2 (en) Image correction apparatus, image correction system, image correction method, and image correction program
CN104240275A (en) Image repairing method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant