CN107491763A - Finger region segmentation method and device based on depth image - Google Patents
Finger region segmentation method and device based on depth image
- Publication number
- CN107491763A CN107491763A CN201710734621.6A CN201710734621A CN107491763A CN 107491763 A CN107491763 A CN 107491763A CN 201710734621 A CN201710734621 A CN 201710734621A CN 107491763 A CN107491763 A CN 107491763A
- Authority
- CN
- China
- Prior art keywords
- image
- pixel
- depth
- finger
- value
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/12—Fingerprints or palmprints
- G06V40/1347—Preprocessing; Feature extraction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/136—Segmentation; Edge detection involving thresholding
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/194—Segmentation; Edge detection involving foreground-background segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10028—Range image; Depth image; 3D point clouds
Landscapes
- Engineering & Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Human Computer Interaction (AREA)
- Multimedia (AREA)
- Image Analysis (AREA)
Abstract
The present invention provides a finger region segmentation method and device based on depth images. The method includes: performing a depth-value difference operation on a background depth image and a depth image to be segmented, to obtain a target depth image containing the finger to be identified; binarizing the target depth image according to the depth value range corresponding to the finger to be identified, to obtain a binary image; and segmenting the image region containing the finger to be identified out of the binary image according to the pixel features of the binary image. The method and device can accurately segment the finger region to be identified from the captured image, improving the efficiency of finger operation recognition.
Description
Technical field
The present invention relates to the technical field of image recognition, and in particular to a finger region segmentation method and device based on depth images.
Background art
Image recognition, as an important component of artificial intelligence, is increasingly applied to a variety of scenes: for example, finger operation recognition in gesture control scenes, or face recognition in information security scenes.
Taking finger operation recognition as an example, in the prior art a captured image is usually fed directly into recognition. The captured image generally contains considerable noise and large non-target regions, so the amount of image data to be processed is large and recognition efficiency is low.
Summary of the invention
The present invention provides a finger region segmentation method and device based on depth images, for accurately segmenting the finger region to be identified from a captured image and thereby improving the efficiency of finger operation recognition.
The present invention provides a depth-image-based finger region segmentation method, including:
performing a depth-value difference operation on a background depth image and a depth image to be segmented, to obtain a target depth image containing the finger to be identified;
binarizing the target depth image according to the depth value range corresponding to the finger to be identified, to obtain a binary image;
segmenting the image region containing the finger to be identified out of the binary image according to the pixel features of the binary image.
Further optionally, performing the depth-value difference operation on the background depth image and the depth image to be segmented to obtain the target depth image includes: obtaining the depth value of each pixel of the background depth image and of each pixel of the depth image to be segmented; computing, according to the coordinate correspondence between the two images, the difference between the depth values of pixels with identical coordinates; and generating the target depth image from those differences.
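As a minimal sketch (not part of the patent text), the per-pixel difference operation can be written in Python, assuming images are lists of rows of depth values; the function name and representation are illustrative:

```python
def depth_difference(background, to_split):
    """For pixels with identical coordinates, subtract the depth value of the
    depth image to be segmented from that of the background depth image;
    the result is the target depth image."""
    rows, cols = len(background), len(background[0])
    return [[background[r][c] - to_split[r][c] for c in range(cols)]
            for r in range(rows)]
```

Where the background is unchanged the difference is 0; where the finger occludes the background, a nonzero depth difference remains.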
Further optionally, binarizing the target depth image according to the depth value range corresponding to the finger to be identified includes: setting the pixel value of every pixel of the target depth image whose depth value lies within the depth value range to 1, and the pixel value of every pixel whose depth value lies outside the range to 0, to obtain the binary image.
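A sketch of this binarization step in Python (the default [0, 30] mm bounds follow the finger depth range given in the description; the list-of-lists representation is an assumption):

```python
def binarize(target, lo=0, hi=30):
    """Set pixels whose depth value lies within [lo, hi] (mm) to 1,
    all other pixels to 0, producing the binary image."""
    return [[1 if lo <= v <= hi else 0 for v in row] for row in target]
```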
Further optionally, the depth value range corresponding to the finger to be identified is [0 mm, 30 mm].
Further optionally, segmenting the image region containing the finger to be identified out of the binary image according to its pixel features includes: dividing the binary image into M*N image blocks, M and N being positive integers; for each image block Pij among the M*N image blocks, counting the number of pixels with value 1 that it contains, where i ∈ [1, M], j ∈ [1, N]; if the number of 1-valued pixels contained in Pij is greater than or equal to a specified point-count threshold, determining Pij to be an effective image block; and segmenting the image region containing the finger to be identified out of the binary image according to the positions of the effective image blocks within it.
Further optionally, segmenting the image region containing the finger to be identified according to the positions of the effective image blocks in the binary image includes: adding the lateral coordinate i and longitudinal coordinate j of each effective image block Pij in the binary image to a lateral coordinate array and a longitudinal coordinate array, respectively; choosing the maximum and minimum lateral coordinates from the lateral coordinate array, and the maximum and minimum longitudinal coordinates from the longitudinal coordinate array; and segmenting the image region containing the finger to be identified out of the binary image according to the maximum lateral, minimum lateral, maximum longitudinal and minimum longitudinal coordinates.
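The four extreme coordinates amount to a bounding box over the effective blocks; a minimal sketch (names illustrative):

```python
def finger_bbox(lateral, longitudinal):
    """Given the lateral (i) and longitudinal (j) coordinate arrays of the
    effective image blocks, return (i_min, i_max, j_min, j_max), the
    block-coordinate rectangle bounding the finger region."""
    return min(lateral), max(lateral), min(longitudinal), max(longitudinal)
```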
Further optionally, determining the specified point-count threshold includes: obtaining the total number of 1-valued pixels contained in the binary image; and determining the specified point-count threshold from that total and the effective-point ratio corresponding to the target finger.
Further optionally, segmenting the image region containing the finger to be identified out of the binary image includes: for each 1-valued pixel of the binary image, if the pixel values of its n-neighbourhood pixels are not all 1, setting its pixel value to 0, thereby denoising the binary image; and, after denoising, for each remaining 1-valued pixel, setting the pixel values of its n-neighbourhood pixels to 1, thereby smoothing the edge of the finger to be identified.
An embodiment of the present invention further provides an image segmentation device, including:
an image acquisition module, for performing the depth-value difference operation on a background depth image and a depth image to be segmented, to obtain a target depth image containing the finger to be identified;
a binarization module, for binarizing the target depth image according to the depth value range corresponding to the finger to be identified, to obtain a binary image;
a binary image segmentation module, for segmenting the image region containing the finger to be identified out of the binary image according to its pixel features.
Further optionally, the device also includes a denoising module, used to: before the image region containing the finger to be identified is segmented out of the binary image, for each 1-valued pixel whose n-neighbourhood pixel values are not all 1, set its pixel value to 0, thereby denoising the binary image; and, after denoising, set the pixel values of the n-neighbourhood pixels of every remaining 1-valued pixel to 1, thereby smoothing the edge of the binary image.
With the depth-image-based finger region segmentation method and device provided by the present invention, after the target depth image containing the finger to be identified is obtained, the image region containing the finger is segmented out with reference to the depth value range corresponding to the finger, accurately and efficiently, improving the efficiency of image recognition.
Brief description of the drawings
To explain the technical schemes of the embodiments of the present invention or of the prior art more clearly, the accompanying drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below illustrate only some embodiments of the present invention; those of ordinary skill in the art can obtain other drawings from them without creative work.
Fig. 1 is a flow diagram of a depth-image-based finger region segmentation method provided by an embodiment of the present invention;
Fig. 2a is a flow diagram of another depth-image-based finger region segmentation method provided by an embodiment of the present invention;
Fig. 2b is a flow diagram of step 204 of the embodiment of Fig. 2a;
Fig. 2c is an example of a binary image divided into 7*6 image blocks, provided by an embodiment of the present invention;
Fig. 2d is an example of an image segmentation result provided by an embodiment of the present invention;
Fig. 2e is an example of erosion denoising provided by an embodiment of the present invention;
Fig. 2f is an example of dilation denoising provided by an embodiment of the present invention;
Fig. 3a is a structural diagram of an image segmentation device provided by an embodiment of the present invention;
Fig. 3b is a structural diagram of another image segmentation device provided by an embodiment of the present invention.
Detailed description of the embodiments
To make the purpose, technical scheme and advantages of the embodiments of the present invention clearer, the technical schemes of the embodiments are described below clearly and completely in conjunction with the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art from the embodiments of the present invention without creative work fall within the scope of protection of the present invention.
Fig. 1 is a flow diagram of a depth-image-based finger region segmentation method provided by an embodiment of the present invention. With reference to Fig. 1, the method includes:
Step 101: perform a depth-value difference operation on a background depth image and a depth image to be segmented, to obtain a target depth image.
Step 102: binarize the target depth image according to the depth value range corresponding to the finger to be identified, to obtain a binary image.
Step 103: segment the image region containing the finger to be identified out of the binary image, according to its pixel features.
As to step 101, a depth image is also called a range image: it uses the distance (depth) from the image collector to each point in the scene as the pixel value, and can directly reflect the geometry of the visible surfaces of the scene.
The background depth image may be a depth image not containing the finger to be identified. For example, in a gesture interaction scene, the finger to be identified is the finger issuing a gesture, and the background depth image is a finger-free depth image captured before the interaction. The depth image to be segmented may be an image containing both the background and the finger to be identified, for example the finger-containing depth image captured at the moment of interaction.
In this step, the depth-value difference operation between the background depth image and the depth image to be segmented can be regarded as removing the background from the depth image to be segmented, leaving a target depth image containing the finger to be identified. The data volume of the target depth image is still large; if it were used directly for subsequent image recognition, the computational load would be excessive and recognition efficiency low. The subsequent steps are therefore needed to segment the finger to be identified out of the target depth image.
As to step 102, the target depth image consists of two parts: the finger to be identified, and the non-identified content composed of noise and empty regions. In theory, noise aside, the pixel values of the empty region outside the finger in the target depth image should all be 0. Therefore, to facilitate segmenting the finger out of the target depth image, the target depth image can be binarized according to the depth value characteristics of the finger to be identified.
The finger to be identified has a certain geometry, and the distances from the points on that geometry to the image collector differ, so the depth values of the finger span a certain depth value range. Optionally, when binarizing the target depth image, the depth value range corresponding to the finger to be identified can serve as the binarization threshold.
As to step 103, when segmenting the image region containing the finger to be identified out of the binary image, the binary image can first be divided into M*N image blocks according to its pixel features. M and N are positive integers; M is the number of blocks in the lateral direction after division and N the number in the longitudinal direction. The two may or may not be equal, and may be determined adaptively from the aspect ratio of the binary image.
Next, effective image blocks can be chosen from the M*N image blocks according to each block's binarization result, which may include the pixel values of the block's pixels and the numbers of pixels valued 0 and 1 respectively. From a block's binarization result it can be judged whether the block is an effective image block; optionally, an effective image block can be regarded as containing part of the pixels of the finger to be identified.
Finally, the image region containing the finger is segmented out of the binary image according to the positions of the effective image blocks, that is, their abscissas and ordinates within the M*N block grid of the binary image. From the abscissas and ordinates of the effective image blocks, the region covering them can be cut out of the binary image as the image region containing the finger to be identified.
In this embodiment, after the target depth image containing the finger to be identified is obtained, the image region containing the finger is segmented out of the depth image to be segmented with reference to the depth value range corresponding to the finger, accurately and efficiently, improving the efficiency of image recognition.
Fig. 2a is a flow diagram of another depth-image-based finger region segmentation method provided by an embodiment of the present invention. With reference to Fig. 2a, the method includes:
Step 201: perform a depth-value difference operation on a background depth image and a depth image to be segmented, to obtain a target depth image.
Step 202: in the target depth image, set the pixel value of every pixel whose depth value lies within the depth value range corresponding to the finger to be identified to 1, and the pixel value of every pixel whose depth value lies outside that range to 0, to obtain a binary image.
Step 203: divide the binary image into M*N image blocks.
Step 204: determine the effective image blocks Pij among the M*N image blocks, and add the lateral coordinate i and longitudinal coordinate j of each effective block in the binary image to the lateral coordinate array {i} and the longitudinal coordinate array {j}, respectively.
Step 205: choose the maximum lateral coordinate i-max and minimum lateral coordinate i-min from {i}, and the maximum longitudinal coordinate j-max and minimum longitudinal coordinate j-min from {j}.
Step 206: segment the image region containing the finger to be identified out of the binary image according to i-max, i-min, j-max and j-min.
As to step 201, optionally, the depth-value difference operation may subtract, pixel by pixel, the depth values of the depth image to be segmented from those of the background depth image.
It should be appreciated that the background depth image and the depth image to be segmented are two images captured by the same device, with identical capture parameters, from an unchanged viewpoint. Hence, when the pixels of both images are labelled in the same coordinate system, a coordinate correspondence exists between the background depth image and the depth image to be segmented, and the difference operation can compute, according to that correspondence, the difference between the depth values of pixels with identical coordinates.
For example, suppose that in background depth image A the points A31, A32, A33 and A34 have depth values x1, x2, x3 and x4, and in depth image B to be segmented the points B31, B32, B33 and B34 have depth values y1, y2, y3 and y4. Then in target depth image C, the depth value at C31 is A31 - B31 = x1 - y1, at C32 it is A32 - B32 = x2 - y2, at C33 it is A33 - B33 = x3 - y3, and at C34 it is A34 - B34 = x4 - y4.
As to step 202, after the target depth image is obtained, it is judged pixel by pixel whether the depth value lies within the depth value range corresponding to the finger to be identified, and the target depth image is binarized according to the result. The depth value range is associated with the geometric characteristics of the finger itself: for example, for a finger issuing a gesture, according to the shape and thickness of the finger pad, the depth value range may be [0 mm, 30 mm]. In that case, when binarizing the target depth image, the pixel value of every pixel whose depth value lies within [0 mm, 30 mm] is set to 1, and of every pixel whose depth value lies outside [0 mm, 30 mm] to 0, yielding a binary image containing the finger.
As to step 203, the binary image is divided into M*N image blocks, where M and N may or may not be equal; this embodiment does not limit them. As shown in Fig. 2c, a binary image is divided into 7*6 image blocks.
When choosing M and N, values that are too small barely reduce the data volume compared with the undivided image, while values that are too large increase the computational load of segmentation and work against recognition efficiency. Optionally, therefore, in this embodiment M and N can be set to empirical values associated with the size of the binary image, adjusted according to its size and aspect ratio. Preferably, in this embodiment, with M = N = 10 the segmentation quality and efficiency reach a good balance for binary images of various sizes.
As to step 204, i and j are positive integers, denoting respectively the abscissa and ordinate of an image block within the M*N block grid of the binary image.
As shown in Fig. 2b, determining the effective image blocks Pij among the M*N image blocks and filling the lateral coordinate array {i} and longitudinal coordinate array {j} from their coordinates in the binary image can be realised by the following procedure:
Stp0: initialise i = 0, j = 0.
Stp1: i = i + 1, j = 0.
Stp2: j = j + 1.
Stp3: count the number of 1-valued pixels contained in image block Pij, where i ∈ [1, M], j ∈ [1, N].
Stp4: judge whether the number of 1-valued pixels contained in Pij is greater than or equal to the specified point-count threshold; if so, go to Stp5; if not, go to Stp6.
Stp5: determine Pij to be an effective image block, add its lateral coordinate i and longitudinal coordinate j in the binary image to {i} and {j} respectively, then go to Stp6.
Stp6: judge whether j has reached N; if so, go to Stp7; if not, go to Stp2.
Stp7: judge whether i has reached M; if so, stop, having obtained the lateral coordinate array {i} and the longitudinal coordinate array {j}; if not, go to Stp1.
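The Stp0-Stp7 scan can be sketched in Python, assuming the binary image is a list of rows whose width divides evenly into M lateral blocks and whose height divides evenly into N longitudinal blocks; all names are illustrative:

```python
def collect_effective_coords(binary, M, N, threshold):
    """Scan the M*N blocks in the Stp0-Stp7 order (i outer, j inner) and
    collect the lateral/longitudinal coordinates of every block whose
    count of 1-valued pixels reaches the specified threshold."""
    rows, cols = len(binary), len(binary[0])
    bw, bh = cols // M, rows // N          # block width / height (even division assumed)
    lateral, longitudinal = [], []
    for i in range(1, M + 1):              # Stp1: advance lateral coordinate
        for j in range(1, N + 1):          # Stp2: advance longitudinal coordinate
            count = sum(binary[r][c]       # Stp3: count 1-pixels in block Pij
                        for r in range((j - 1) * bh, j * bh)
                        for c in range((i - 1) * bw, i * bw))
            if count >= threshold:         # Stp4/Stp5: record effective block
                lateral.append(i)
                longitudinal.append(j)
    return lateral, longitudinal
```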
In the procedure of Stp0 to Stp7 above, the counting of each block's 1-valued pixels proceeds row by row; in an actual implementation it may also proceed column by column, which this embodiment does not limit.
As to Stp4, the specified point-count threshold is associated with the total number nums-all of 1-valued pixels contained in the binary image and with the effective-point ratio ratio corresponding to the target finger. Optionally, the specified point-count threshold = k * nums-all * ratio, where k is a proportionality coefficient.
The effective-point ratio ratio corresponding to the target finger is an empirical value, associated with the number of target fingers to be recognised. When that number is limited, ratio is set adaptively so that no unnecessary image regions are introduced during segmentation. Optionally, repeated verification shows that when the number of target fingers is between 1 and 10, ratio may be taken as 0.1 to obtain good segmentation. For example, when the target is the 2 fingers issuing a zoom gesture, ratio = 0.1 may be set; segmentation is then good, which helps the accuracy and efficiency of subsequent image recognition.
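The threshold formula above can be sketched directly, with k, nums-all and ratio as just defined (the function name is illustrative):

```python
def points_threshold(nums_all, ratio, k=1.0):
    """Specified point-count threshold = k * nums-all * ratio, where
    nums_all is the total number of 1-valued pixels in the binary image
    and ratio is the effective-point ratio of the target finger."""
    return k * nums_all * ratio
```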
As to step 205, the lateral coordinate array {i} and longitudinal coordinate array {j} of this step are obtained from the abscissas and ordinates of the effective image blocks in the binary image.
As to step 206, the image region containing the finger to be identified can be a rectangular region. When determining that rectangle, i-min serves as the initial abscissa of the rectangle, i-max as its final abscissa, j-min as its initial ordinate and j-max as its final ordinate. From these initial and final coordinates, the rectangle enclosed by P(i-min)(j-min), P(i-min)(j-max), P(i-max)(j-min) and P(i-max)(j-max) is uniquely determined; that rectangle is the image region containing the finger to be identified.
For example, in Fig. 2c, i-min = 1, i-max = 4, j-min = 3 and j-max = 5, so the image region containing the finger can be determined to be the region enclosed by P13, P15, P43 and P45, as shown in Fig. 2d.
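Converting the block-coordinate rectangle into the actual pixel region can be sketched as follows (same even-division assumption as before; block coordinates are 1-based, i lateral and j longitudinal; names are illustrative):

```python
def crop_finger_region(binary, i_min, i_max, j_min, j_max, M, N):
    """Cut the pixel region covered by blocks P(i_min..i_max)(j_min..j_max)
    out of the binary image divided into M*N blocks."""
    rows, cols = len(binary), len(binary[0])
    bw, bh = cols // M, rows // N
    return [row[(i_min - 1) * bw : i_max * bw]
            for row in binary[(j_min - 1) * bh : j_max * bh]]
```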
In this embodiment, after the target depth image containing the finger to be identified is obtained, the image region containing the finger is segmented out of the depth image to be segmented with reference to the depth value range corresponding to the finger, greatly reducing the amount of image data used for recognition and improving recognition efficiency. In addition, screening the finger region on the basis of the depth image makes subsequent segmentation highly accurate.
Generally, the background depth image and the depth image to be segmented contain a great deal of noise, so the target depth image obtained by the difference operation contains, besides the finger to be identified, a great deal of noise as well, which is unfavourable to subsequent image recognition. To remedy this defect, in the above or following embodiments of the application, before the effective image blocks are chosen from the M*N image blocks, the binary image may additionally be denoised to remove the noise it contains. Optionally, the embodiments of the present invention use an opening operation, erosion first and dilation afterwards, to eliminate the noise in the image and smooth the edge of the finger to be identified.
When eroding the binary image, an n-neighbourhood erosion algorithm is used, where n may be 3, i.e. the erosion template contains 3*3 = 9 pixels. For each 1-valued pixel of the binary image, if the pixel values of the 8 surrounding pixels in its 3-neighbourhood are not all 1, that pixel's value is set to 0, as shown in Fig. 2e. This erosion pass eliminates the noise points in the binary image.
After erosion, for each 1-valued pixel of the binary image, the pixel values of its n-neighbourhood pixels are set to 1, filling the holes that erosion left in the finger to be identified. Here n may again be 3, as shown in Fig. 2f.
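The 3*3 erosion-then-dilation opening described above can be sketched in Python (out-of-image neighbours are treated as 0 here, an assumption the patent text does not spell out; the list-of-lists representation is also assumed):

```python
def erode(binary):
    """Keep a 1-pixel only if its whole 3*3 neighbourhood lies inside the
    image and consists of 1s; isolated noise points are removed."""
    rows, cols = len(binary), len(binary[0])
    out = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            if binary[r][c] == 1 and all(
                    0 <= r + dr < rows and 0 <= c + dc < cols
                    and binary[r + dr][c + dc] == 1
                    for dr in (-1, 0, 1) for dc in (-1, 0, 1)):
                out[r][c] = 1
    return out

def dilate(binary):
    """Set the whole 3*3 neighbourhood of every 1-pixel to 1, filling the
    holes left by erosion and smoothing the finger edge."""
    rows, cols = len(binary), len(binary[0])
    out = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            if binary[r][c] == 1:
                for dr in (-1, 0, 1):
                    for dc in (-1, 0, 1):
                        rr, cc = r + dr, c + dc
                        if 0 <= rr < rows and 0 <= cc < cols:
                            out[rr][cc] = 1
    return out
```

Applying `erode` and then `dilate` in sequence is the opening operation: a lone noise pixel disappears, while a solid region shrinks and then regrows to roughly its original extent with a smoother edge.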
Fig. 3a is a structural diagram of an image segmentation device provided by an embodiment of the present invention. With reference to Fig. 3a, the device includes:
an image acquisition module 301, for performing the depth-value difference operation on a background depth image and a depth image to be segmented, to obtain a target depth image containing the finger to be identified;
a binarization module 302, for binarizing the target depth image according to the depth value range corresponding to the finger to be identified, to obtain a binary image;
a binary image segmentation module 303, for segmenting the image region containing the finger to be identified out of the binary image according to its pixel features.
Further optionally, the image acquisition module 301 is specifically used to: obtain the depth value of each pixel of the background depth image and of each pixel of the depth image to be segmented; compute, according to the coordinate correspondence between the background depth image and the depth image to be segmented, the difference between the depth values of pixels with identical coordinates; and generate the target depth image from those differences.
Further optionally, the binarization module 302 is specifically used to: set the pixel value of every pixel of the target depth image whose depth value lies within the depth value range to 1, and the pixel value of every pixel whose depth value lies outside the range to 0, to obtain the binary image.
Further optionally, the depth value range is [0 mm, 30 mm].
Further optionally, the segmentation module 303 is specifically used to: divide the binary image into M*N image blocks, M and N being positive integers; for each image block Pij among the M*N image blocks, count the number of 1-valued pixels it contains, where i ∈ [1, M], j ∈ [1, N]; if that number is greater than or equal to the specified point-count threshold, determine Pij to be an effective image block; and segment the image region containing the finger to be identified out of the binary image according to the positions of the effective image blocks Pij within it.
Further optionally, the segmentation module 303 is further configured to: add the lateral coordinate i and the longitudinal coordinate j of image block Pij in the binary image to a lateral coordinate array and a longitudinal coordinate array, respectively. The segmentation module 303 is specifically configured to: select the maximum lateral coordinate and the minimum lateral coordinate from the lateral coordinate array; select the maximum longitudinal coordinate and the minimum longitudinal coordinate from the longitudinal coordinate array; and segment, according to the maximum lateral coordinate, the minimum lateral coordinate, the maximum longitudinal coordinate and the minimum longitudinal coordinate, the image region containing the finger to be identified from the binary image.
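The min/max coordinate selection yields an axis-aligned bounding box over the valid blocks. A sketch of that step, where `block_h`/`block_w` convert block indices back to pixel coordinates (the function name and return convention are assumptions):

```python
def finger_bbox(valid, block_h, block_w):
    """Compute the pixel-space bounding box of the finger region from the
    (i, j) indices of valid blocks: the min/max lateral and longitudinal
    block coordinates, scaled by the block size.
    Returns (top, bottom, left, right) pixel bounds.
    """
    rows = [i for i, _ in valid]
    cols = [j for _, j in valid]
    top, bottom = min(rows) * block_h, (max(rows) + 1) * block_h
    left, right = min(cols) * block_w, (max(cols) + 1) * block_w
    return top, bottom, left, right
```

Cropping the binary image to `[top:bottom, left:right]` then gives the image region containing the finger.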
Further optionally, the segmentation module 303 is specifically configured to: before valid image blocks are selected from the image blocks, obtain the total number of pixels with pixel value 1 contained in the binary image; and determine the specified point-count threshold according to that total number and an effective-point ratio corresponding to the target identification finger.
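The adaptive threshold described above scales with the total number of 1-pixels. A sketch under the assumption that the per-finger "effective-point ratio" is simply a multiplicative fraction (the patent does not specify the exact formula, so this interpretation is hypothetical):

```python
import numpy as np

def point_threshold(binary, ratio):
    """Derive the specified point-count threshold from the total number of
    1-pixels in the binary image and a per-finger effective-point ratio.
    The multiplicative form is an assumed interpretation.
    """
    total = int(np.asarray(binary).sum())
    return int(total * ratio)
```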
Further optionally, as shown in Figure 3b, the device also includes a denoising module 304. The denoising module 304 is configured to: before valid image blocks are selected from the M*N image blocks, for each pixel with pixel value 1 in the binary image, set the pixel value of that pixel to 0 if the pixel values of its n neighboring pixels are not all 1, thereby denoising the binary image; and, after denoising, for each pixel with pixel value 1 in the binary image, set the pixel values of its n neighboring pixels to 1, thereby performing edge smoothing on the binary image.
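The two passes above are, in morphological terms, an erosion followed by a dilation (an opening). A sketch assuming n = 8, i.e. the 8-connected neighborhood; the patent leaves n unspecified, so that choice and the border handling are assumptions:

```python
import numpy as np

def denoise_and_smooth(binary):
    """Denoise then edge-smooth a binary image.

    Pass 1 (erosion / denoising): a 1-pixel whose 3x3 neighborhood is not
    entirely 1 is set to 0, removing isolated noise points.
    Pass 2 (dilation / smoothing): the 3x3 neighborhood of every surviving
    1-pixel is set to 1, smoothing the region edges.
    """
    b = np.asarray(binary, dtype=np.uint8)
    h, w = b.shape
    eroded = np.zeros_like(b)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            if b[y, x] == 1 and b[y - 1:y + 2, x - 1:x + 2].sum() == 9:
                eroded[y, x] = 1
    smoothed = eroded.copy()
    for y, x in zip(*np.nonzero(eroded)):
        smoothed[max(y - 1, 0):y + 2, max(x - 1, 0):x + 2] = 1
    return smoothed
```

An isolated 1-pixel disappears, while a solid region survives both passes unchanged; `scipy.ndimage.binary_opening` would be a library equivalent.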
With the image segmentation device provided by the embodiments of the present invention, after the target depth image containing the finger to be identified is obtained, the image region containing that finger is segmented from the depth image to be segmented using the depth value range corresponding to the finger. The segmentation is accurate and efficient, which improves the efficiency of image recognition.
The device embodiments described above are merely illustrative. Units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network elements. Some or all of the modules may be selected according to actual needs to achieve the purpose of the embodiment. Those of ordinary skill in the art can understand and implement the embodiments without creative effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a necessary general-purpose hardware platform, or alternatively by hardware. Based on this understanding, the essence of the above technical solutions, or the part that contributes to the prior art, can be embodied in the form of a software product. The computer software product can be stored in a computer-readable storage medium, such as ROM/RAM, a magnetic disk, or an optical disc, and includes instructions that cause a computer device (which may be a personal computer, a server, a network device, etc.) to perform the methods described in the embodiments or in parts of the embodiments.
Finally, it should be noted that the above embodiments are merely intended to illustrate, not to limit, the technical solutions of the present invention. Although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be replaced by equivalents, without such modifications or replacements causing the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present invention.
Claims (10)
- 1. A finger region segmentation method based on a depth image, characterized by comprising: performing a depth value difference operation on a background depth image and a depth image to be segmented to obtain a target depth image, the target depth image containing a finger to be identified; binarizing the target depth image according to a depth value range corresponding to the finger to be identified to obtain a binary image; and segmenting, according to pixel characteristics of the binary image, the image region containing the finger to be identified from the binary image.
- 2. The method according to claim 1, characterized in that performing the depth value difference operation on the background depth image and the depth image to be segmented to obtain the target depth image comprises: obtaining the depth value of each pixel in the background depth image and the depth value of each pixel in the depth image to be segmented; computing, according to the coordinate correspondence between the background depth image and the depth image to be segmented, the difference between the depth values of pixels with identical coordinates; and generating the target depth image from the differences between the depth values of the coordinate-identical pixels.
- 3. The method according to claim 1, characterized in that binarizing the target depth image according to the depth value range corresponding to the finger to be identified to obtain the binary image comprises: setting the pixel value of each pixel in the target depth image whose depth value falls within the depth value range to 1, and setting the pixel value of each pixel whose depth value falls outside the depth value range to 0, thereby obtaining the binary image.
- 4. The method according to claim 3, characterized in that the depth value range is [0 mm, 30 mm].
- 5. The method according to claim 1, characterized in that segmenting, according to the pixel characteristics of the binary image, the image region containing the finger to be identified from the binary image comprises: dividing the binary image into M*N image blocks, where M and N are positive integers; for any image block Pij among the M*N image blocks, counting the number of pixels with pixel value 1 contained in image block Pij, where i ∈ [1, M] and j ∈ [1, N]; if the number of pixels with pixel value 1 contained in image block Pij is greater than or equal to a specified point-count threshold, determining image block Pij to be a valid image block; and segmenting, according to the position of the valid image block Pij in the binary image, the image region containing the finger to be identified from the binary image.
- 6. The method according to claim 5, characterized in that segmenting, according to the position of the valid image block Pij in the binary image, the image region containing the finger to be identified from the binary image comprises: adding the lateral coordinate i and the longitudinal coordinate j of image block Pij in the binary image to a lateral coordinate array and a longitudinal coordinate array, respectively; selecting the maximum lateral coordinate and the minimum lateral coordinate from the lateral coordinate array; selecting the maximum longitudinal coordinate and the minimum longitudinal coordinate from the longitudinal coordinate array; and segmenting, according to the maximum lateral coordinate, the minimum lateral coordinate, the maximum longitudinal coordinate, and the minimum longitudinal coordinate, the image region containing the finger to be identified from the binary image.
- 7. The method according to claim 5 or 6, characterized in that the step of determining the specified point-count threshold comprises: obtaining the total number of pixels with pixel value 1 contained in the binary image; and determining the specified point-count threshold according to that total number and an effective-point ratio corresponding to the target identification finger.
- 8. The method according to any one of claims 1 to 6, characterized by further comprising, before segmenting the image region containing the finger to be identified from the binary image: for each pixel with pixel value 1 in the binary image, setting the pixel value of the pixel to 0 if the pixel values of the n neighboring pixels of the pixel are not all 1, thereby denoising the binary image; and, after denoising, for each pixel with pixel value 1 in the binary image, setting the pixel values of the n neighboring pixels of the pixel to 1, thereby performing edge smoothing on the finger to be identified.
- 9. An image segmentation device, characterized by comprising: an image acquisition module, configured to perform a depth value difference operation on a background depth image and a depth image to be segmented to obtain a target depth image, the target depth image containing a finger to be identified; a binarization module, configured to binarize the target depth image according to a depth value range corresponding to the finger to be identified to obtain a binary image; and a binary image segmentation module, configured to segment, according to pixel characteristics of the binary image, the image region containing the finger to be identified from the binary image.
- 10. The device according to claim 9, characterized in that the device further comprises a denoising module configured to: before the image region containing the finger to be identified is segmented from the binary image, for each pixel with pixel value 1 in the binary image, set the pixel value of the pixel to 0 if the pixel values of the n neighboring pixels of the pixel are not all 1, thereby denoising the binary image; and, after denoising, for each pixel with pixel value 1 in the binary image, set the pixel values of the n neighboring pixels of the pixel to 1, thereby performing edge smoothing on the binary image.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710734621.6A CN107491763A (en) | 2017-08-24 | 2017-08-24 | Finger areas dividing method and device based on depth image |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710734621.6A CN107491763A (en) | 2017-08-24 | 2017-08-24 | Finger areas dividing method and device based on depth image |
Publications (1)
Publication Number | Publication Date |
---|---|
CN107491763A true CN107491763A (en) | 2017-12-19 |
Family
ID=60646580
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710734621.6A Pending CN107491763A (en) | 2017-08-24 | 2017-08-24 | Finger areas dividing method and device based on depth image |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107491763A (en) |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108521545A (en) * | 2018-03-26 | 2018-09-11 | 广东欧珀移动通信有限公司 | Image adjusting method, device, storage medium based on augmented reality and electronic equipment |
CN109766831A (en) * | 2019-01-09 | 2019-05-17 | 深圳市三宝创新智能有限公司 | A kind of road colour band recognition methods, device, computer equipment and storage medium |
CN109829886A (en) * | 2018-12-25 | 2019-05-31 | 苏州江奥光电科技有限公司 | A kind of pcb board defect inspection method based on depth information |
CN109934873A (en) * | 2019-03-15 | 2019-06-25 | 百度在线网络技术(北京)有限公司 | Mark image acquiring method, device and equipment |
CN110189297A (en) * | 2019-04-18 | 2019-08-30 | 杭州电子科技大学 | A kind of magnetic material open defect detection method based on gray level co-occurrence matrixes |
CN111368675A (en) * | 2020-02-26 | 2020-07-03 | 深圳市瑞立视多媒体科技有限公司 | Method, device and equipment for processing gesture depth information and storage medium |
CN115019157A (en) * | 2022-07-06 | 2022-09-06 | 武汉市聚芯微电子有限责任公司 | Target detection method, device, equipment and computer readable storage medium |
CN117271974A (en) * | 2023-09-25 | 2023-12-22 | 广东科研世智能科技有限公司 | Data patching method and device, electronic equipment and storage medium |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103294996A (en) * | 2013-05-09 | 2013-09-11 | 电子科技大学 | 3D gesture recognition method |
CN103544472A (en) * | 2013-08-30 | 2014-01-29 | Tcl集团股份有限公司 | Processing method and processing device based on gesture images |
CN103598870A (en) * | 2013-11-08 | 2014-02-26 | 北京工业大学 | Optometry method based on depth-image gesture recognition |
WO2017113794A1 (en) * | 2015-12-31 | 2017-07-06 | 北京体基科技有限公司 | Gesture recognition method, control method and apparatus, and wrist-type device |
- 2017-08-24: Application CN201710734621.6A filed in CN; published as CN107491763A, status Pending
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103294996A (en) * | 2013-05-09 | 2013-09-11 | 电子科技大学 | 3D gesture recognition method |
CN103544472A (en) * | 2013-08-30 | 2014-01-29 | Tcl集团股份有限公司 | Processing method and processing device based on gesture images |
CN103598870A (en) * | 2013-11-08 | 2014-02-26 | 北京工业大学 | Optometry method based on depth-image gesture recognition |
WO2017113794A1 (en) * | 2015-12-31 | 2017-07-06 | 北京体基科技有限公司 | Gesture recognition method, control method and apparatus, and wrist-type device |
Non-Patent Citations (3)
Title |
---|
Liu Zhiguang: "Research on Markerless Palm Motion Capture Methods", China Master's Theses Full-text Database, Information Science and Technology Series * |
Shou Xiangchen et al.: "Research on Moving Target Tracking Methods Based on Covariance Matrices", Electrical Automation * |
Li Ruifeng et al.: "Gesture Recognition Based on Depth Images and Appearance Features", Journal of Huazhong University of Science and Technology (Natural Science Edition) * |
Cited By (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108521545B (en) * | 2018-03-26 | 2020-02-11 | Oppo广东移动通信有限公司 | Image adjusting method and device based on augmented reality, storage medium and electronic equipment |
CN108521545A (en) * | 2018-03-26 | 2018-09-11 | 广东欧珀移动通信有限公司 | Image adjusting method, device, storage medium based on augmented reality and electronic equipment |
CN109829886A (en) * | 2018-12-25 | 2019-05-31 | 苏州江奥光电科技有限公司 | A kind of pcb board defect inspection method based on depth information |
CN109766831A (en) * | 2019-01-09 | 2019-05-17 | 深圳市三宝创新智能有限公司 | A kind of road colour band recognition methods, device, computer equipment and storage medium |
CN109934873B (en) * | 2019-03-15 | 2021-11-02 | 百度在线网络技术(北京)有限公司 | Method, device and equipment for acquiring marked image |
CN109934873A (en) * | 2019-03-15 | 2019-06-25 | 百度在线网络技术(北京)有限公司 | Mark image acquiring method, device and equipment |
CN110189297A (en) * | 2019-04-18 | 2019-08-30 | 杭州电子科技大学 | A kind of magnetic material open defect detection method based on gray level co-occurrence matrixes |
CN110189297B (en) * | 2019-04-18 | 2021-02-19 | 杭州电子科技大学 | Magnetic material appearance defect detection method based on gray level co-occurrence matrix |
CN111368675A (en) * | 2020-02-26 | 2020-07-03 | 深圳市瑞立视多媒体科技有限公司 | Method, device and equipment for processing gesture depth information and storage medium |
CN111368675B (en) * | 2020-02-26 | 2023-06-20 | 深圳市瑞立视多媒体科技有限公司 | Gesture depth information processing method, device, equipment and storage medium |
CN115019157A (en) * | 2022-07-06 | 2022-09-06 | 武汉市聚芯微电子有限责任公司 | Target detection method, device, equipment and computer readable storage medium |
CN115019157B (en) * | 2022-07-06 | 2024-03-22 | 武汉市聚芯微电子有限责任公司 | Object detection method, device, equipment and computer readable storage medium |
CN117271974A (en) * | 2023-09-25 | 2023-12-22 | 广东科研世智能科技有限公司 | Data patching method and device, electronic equipment and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107491763A (en) | Finger areas dividing method and device based on depth image | |
US9251614B1 (en) | Background removal for document images | |
CN112991193B (en) | Depth image restoration method, device and computer-readable storage medium | |
CN108647634A (en) | Framing mask lookup method, device, computer equipment and storage medium | |
JP2016505186A (en) | Image processor with edge preservation and noise suppression functions | |
Laguna et al. | Traffic sign recognition application based on image processing techniques | |
CN107909571A (en) | A kind of weld beam shape method, system, equipment and computer-readable storage medium | |
CN105427346B (en) | A kind of motion target tracking method and system | |
WO2021253723A1 (en) | Human body image processing method and apparatus, electronic device and storage medium | |
CN109255792B (en) | Video image segmentation method and device, terminal equipment and storage medium | |
JP6338429B2 (en) | Subject detection apparatus, subject detection method, and program | |
CN107016417A (en) | A kind of method and device of character recognition | |
CN109829510A (en) | A kind of method, apparatus and equipment of product quality classification | |
CN110544300A (en) | Method for automatically generating three-dimensional model based on two-dimensional hand-drawn image characteristics | |
CN110188640B (en) | Face recognition method, face recognition device, server and computer readable medium | |
CN112991159B (en) | Face illumination quality evaluation method, system, server and computer readable medium | |
Srikakulapu et al. | Depth estimation from single image using defocus and texture cues | |
JP2019504430A5 (en) | ||
CN104408430B (en) | License plate positioning method and device | |
CN108898045B (en) | Multi-label image preprocessing method based on deep learning gesture recognition | |
CN115937075A (en) | Texture fabric flaw detection method and medium based on unsupervised mode | |
CN112270683B (en) | IHC digital preview image identification and organization foreground segmentation method and system | |
CN115222652A (en) | Method for identifying, counting and centering end faces of bundled steel bars and memory thereof | |
KR101756959B1 (en) | Image analyze method and apparatus thereby | |
JP6580201B2 (en) | Subject detection apparatus, subject detection method, and program |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication |
Application publication date: 20171219 |