CN103598870A - Optometry method based on depth-image gesture recognition - Google Patents


Info

Publication number
CN103598870A
Authority
CN
China
Prior art keywords
gesture
depth
region
point
finger
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201310552006.5A
Other languages
Chinese (zh)
Inventor
段立娟
邱硕
马伟
杨震
陈建平
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing University of Technology
Original Assignee
Beijing University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing University of Technology filed Critical Beijing University of Technology
Priority to CN201310552006.5A priority Critical patent/CN103598870A/en
Publication of CN103598870A publication Critical patent/CN103598870A/en
Pending legal-status Critical Current

Landscapes

  • Image Analysis (AREA)

Abstract

The invention belongs to the field of image processing and discloses an optometry method based on depth-image gesture recognition. The method first acquires depth images with a depth-data acquisition device; then identifies the gesture region in the depth images and extracts gesture features from the identified region; and finally recognizes the examinee's gesture from the extracted features and matches it against the opening direction of the letter "E" shown on a display screen to obtain the final optometry result. By using the depth-data acquisition device to acquire depth images dynamically and improving existing gesture recognition technology, dynamic gesture recognition is realized; applying it to optometry keeps the simplicity and convenience of conventional optometry, overcomes its time- and labor-consuming drawbacks, and makes the optometry process more engaging.

Description

A vision testing method based on depth-image gesture recognition
Technical field
The invention belongs to the field of image processing and relates to a vision testing method based on gesture recognition from depth-image information.
Background art
The most traditional vision testing method is the eye-chart test, in which a doctor and an examinee face the same chart: the examinee answers verbally, or indicates with a gesture, the opening direction of the letter "E" that the doctor designates on the chart. Although simple, this procedure is tedious, requires a doctor to attend for long periods and keep records by hand, and consumes considerable human resources and time.
Applying computer image processing to vision testing, so that the direction indicated by the examinee's gesture is recognized automatically, would therefore be an effective improvement over the traditional method.
In computing, gesture is an intuitive mode of interaction, and in recent years gesture recognition has been the recognition method in human-computer interaction that places the fewest constraints on the user; it is widely applied in robot control, mobile phones, motion-sensing games, virtual keyboards, and general computer control. For example, dynamic gesture recognition based on skin-color information uses skin color to extract a color image containing only the hand, from which the gesture is extracted and recognized; but this method is easily disturbed by colors close to skin tone. Gesture recognition based on Histograms of Oriented Gradients (HOG) features removes the influence of illumination change and hand rotation during gesture extraction, but its computation cost is high, and it can generally recognize only the limited set of gestures in a pre-built library. In 2012, Cao et al. proposed a gesture recognition method based on depth-image technology; by using depth information for hand segmentation and extraction, it avoids interference from skin-color-like regions and is highly robust. However, that method recognizes only nine common gestures and cannot identify several gestures required for vision testing, such as a single finger pointing left, pointing down, or pointing right, so it cannot be applied directly to vision testing.
Summary of the invention
The object of the invention is to propose a vision testing method based on depth-image gesture recognition, applying the depth-extraction technology of a scene-depth acquisition device (such as a Kinect) and depth-image gesture recognition to vision testing, so that fast and accurate vision testing is realized without supervision.
The basic principle of the invention is that the examinee looks at the content shown on the display screen and makes a gesture expressing the corresponding judgment; the depth data produced by the scene-depth extraction device is used to recognize the gesture dynamically; and the vision test result is then obtained through a series of human-computer interactions.
The technical scheme of the invention is as follows: first, a depth-data acquisition device captures depth images; then the gesture region is identified from the depth image, and gesture features are extracted from the identified region; finally, the examinee's gesture is recognized from the extracted features. There are four gesture actions, pointing up, down, left, and right. After a gesture is recognized, it is matched against the opening direction of the letter "E" shown on the display screen to obtain the final vision test result.
Compared with the prior art, the present invention has the following advantages:
(1) The invention obtains depth images with a depth acquisition device and, by improving existing gesture recognition technology, realizes dynamic gesture recognition applicable to vision testing;
(2) By applying gesture recognition to vision testing, the invention keeps the simplicity of the traditional method while overcoming its time-consuming and labor-intensive drawbacks, and makes the testing process more engaging.
Brief description of the drawings
Fig. 1 is a connection diagram of the vision testing equipment of an embodiment of the present invention;
Fig. 2 is the main flow chart of the vision testing method of the present invention;
Fig. 3 is the gray-level histogram of a depth image in an embodiment of the present invention;
Fig. 4 is a schematic diagram of the gesture-region segmentation result in an embodiment of the present invention;
Fig. 5 is a schematic diagram of marking the pixel-value change points in the gesture region;
Fig. 6 is a schematic diagram of gesture-direction judgment.
Detailed description of the embodiments
The present invention is further described below with reference to the drawings and embodiments.
The equipment connection for vision testing is shown in Fig. 1 and comprises a Kinect device, a computer, a display, a speaker, and the examinee. The Kinect device captures depth images. The computer is connected to the Kinect device and runs the software implementing the method of the invention. The display is connected to the computer through an HDMI port and presents the human-computer interaction interface. The speaker is connected to the computer and plays voice prompts.
The main flow chart of the vision testing method of the present invention is shown in Fig. 2; the method comprises the following steps:
Step 1: the examinee makes a gesture indicating the opening direction of the test letter "E" on the display screen.
The examinee stands directly in front of the depth-extraction device, observes the opening direction of the test letter "E" on the screen, makes the corresponding gesture toward the device (a single raised finger), and holds it for 1~2 seconds. The arm is extended forward from the body as far as possible, tilted upward at about 30 degrees, so that when the depth is extracted dynamically the device can separate the arm's depth range from the body's.
Step 2: the depth-extraction device (Kinect) captures a depth image.
The Kinect device takes its infrared and visible-light lenses as input and synthesizes a depth image by considering complex parameters such as illumination, texture, and scene.
Step 3: compute gray-scale pixel statistics from the depth image and generate a gray-scale map with gray values in the range 0~255.
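A minimal sketch of this depth-to-gray conversion in Python with NumPy (the function name and the linear min-max normalization are assumptions; the patent does not specify an implementation):

```python
import numpy as np

def depth_to_gray(depth: np.ndarray) -> np.ndarray:
    """Linearly map a raw depth frame (e.g. millimetres from a Kinect)
    onto the 0-255 gray range described in Step 3."""
    d = depth.astype(np.float64)
    d_min, d_max = d.min(), d.max()
    if d_max == d_min:                       # degenerate frame: constant depth
        return np.zeros(d.shape, dtype=np.uint8)
    return ((d - d_min) / (d_max - d_min) * 255.0).astype(np.uint8)
```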
Step 4: generate the gray-level histogram and determine the depth threshold separating the gesture region from the background, as follows:
1. Count the pixels in each depth range and generate the gray-level histogram, shown in Fig. 3.
During interaction the gesturing hand is normally held in front of the body, so the gesture region and the background occupy different ranges of depth values, and this property makes the gesture region separable. Pixels at the same depth have identical gray values in the depth image, but each person stands at a slightly different distance from the depth camera, so the position of the gesture region on the depth axis, and hence its depth values, differ from session to session. A fixed depth threshold therefore cannot locate the gesture region or separate it from the background; the depth histogram solves this problem well.
2. Traverse the gray-level intervals of the histogram in descending order and compute the change in pixel count between consecutive intervals. The first gray value at which the change exceeds a variation threshold is the depth threshold separating the gesture region from the background. The variation threshold is typically between 1.0×10⁴ and 1.5×10⁴.
As shown in the region circled in Fig. 3, the pixel count changes markedly at gray value 235; this is the gesture region, and 235 is the depth threshold separating the gesture region from the background.
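A minimal sketch of the descending histogram scan (the function name and the default variation threshold, chosen from the patent's 1.0×10⁴~1.5×10⁴ range, are assumptions):

```python
import numpy as np

def find_gesture_threshold(gray: np.ndarray, change_threshold: float = 1.2e4):
    """Scan the gray-level histogram from 255 downward and return the first
    gray value where the pixel count jumps by more than change_threshold,
    or None if no such jump (i.e. no gesture) is found."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    for g in range(255, 0, -1):
        if abs(int(hist[g]) - int(hist[g - 1])) > change_threshold:
            return g
    return None
```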
Step 5: using the depth threshold determined in Step 4, binarize the image into gesture region and background and segment the gesture image from the background.
After binarization the background is black and the gesture region is white, so the gesture image is easily segmented from the background. The binarized image is shown in Fig. 4.
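A corresponding binarization sketch; it assumes, as in the Fig. 3 example, that the hand occupies the gray values at or above the detected threshold:

```python
import numpy as np

def binarize(gray: np.ndarray, threshold: int) -> np.ndarray:
    """Gesture region (at or above the threshold) -> white (255),
    background -> black (0)."""
    return np.where(gray >= threshold, 255, 0).astype(np.uint8)
```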
Step 6: extract finger features and judge whether a gesture is present, as follows:
1. Obtain the center point of the gesture region by the erosion operation of mathematical morphology.
The palm occupies the largest area of the gesture region and its pixels are the most concentrated. Repeated erosion removes the boundary points of the gesture region and shrinks it step by step, finally yielding the center point C of the gesture region.
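A minimal sketch of the iterative erosion with OpenCV (the 3×3 kernel and the centroid of the last non-empty image as the stand-in for C are assumptions; the input is assumed to contain a non-empty white gesture region):

```python
import cv2
import numpy as np

def gesture_center(binary: np.ndarray) -> tuple:
    """Erode the white gesture region until the next erosion would erase it;
    the centroid of the last non-empty image approximates center point C."""
    kernel = np.ones((3, 3), np.uint8)
    img = binary.copy()
    while True:
        eroded = cv2.erode(img, kernel)
        if cv2.countNonZero(eroded) == 0:
            break                            # next step would erase the region
        img = eroded
    ys, xs = np.nonzero(img)
    return int(xs.mean()), int(ys.mean())    # (x, y) of C
```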
2. Draw circular tracks in the gesture region.
Compute the maximum distance S between center point C and the edge of the gesture region and divide it into 10 equal parts, each of length d = S/10. With C as the center, draw circles of radius i·d, i = 1, 2, ..., 10, giving 10 circular tracks.
3. Mark the pixel-value change points on the circular tracks.
Traverse each circular track clockwise and record every pixel-value change point. A change from the black region to the white region is marked P_ij, and a change from white to black is marked Q_ij, where i is the number of the track circle and j is the number of the P or Q point on that circle; each time a change point is encountered, the corresponding index j is incremented by one. After marking, delete points whose subscript ij is identical but whose P and Q do not occur as a pair. The marking of the pixel-value change points is shown in Fig. 5.
4. Count the fingers (see the sketch after this list).
First find, for each track circle i, the maximum index J_i = max(j) reached on that circle; then take the maximum over all circles, N0 = max_i(J_i). N0 is the total number of branches connected to the palm. Since the branches include the wrist as well as the fingers, the finger count is N = N0 − 1.
5. If N equals 1, go to Step 7 for gesture recognition; otherwise judge that no gesture is present.
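A minimal sketch of sub-steps 2~4, sampling the 10 concentric tracks and counting black-to-white transitions (the resolution of 720 sample points per circle is an assumption):

```python
import numpy as np

def count_fingers(binary: np.ndarray, center: tuple) -> int:
    """Sample 10 concentric circles around palm center C, count the
    black->white transitions (the P points) on each circle, take the
    maximum count N0 over all circles, and return N = N0 - 1
    (the wrist accounts for one branch)."""
    cx, cy = center
    h, w = binary.shape
    ys, xs = np.nonzero(binary)
    S = np.sqrt((xs - cx) ** 2 + (ys - cy) ** 2).max()   # C to region edge
    d = S / 10.0
    n0 = 0
    for i in range(1, 11):
        angles = np.linspace(0.0, 2.0 * np.pi, 720, endpoint=False)
        px = np.clip((cx + i * d * np.cos(angles)).astype(int), 0, w - 1)
        py = np.clip((cy + i * d * np.sin(angles)).astype(int), 0, h - 1)
        on = binary[py, px] > 0
        # black sample followed by white sample -> one P point on this track
        transitions = int(np.sum(~on & np.roll(on, -1)))
        n0 = max(n0, transitions)
    return max(n0 - 1, 0)
```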
Step 7: recognize the direction the gesture points, as follows:
1. Erode the finger region and find the center point M of the finger.
2. Compute the tangent of the angle A between the line joining finger center M and palm center C and the horizontal or vertical axis, and determine the finger-direction region; as shown in Fig. 6, the regions outlined by the black curves are the valid gesture-judgment regions.
3. Judge the pointing of the gesture from the black region in which point M falls; the gesture direction is the direction (up, down, left, or right) corresponding to the region containing M.
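A minimal direction-classification sketch; it replaces the per-axis tangent comparison with the equivalent atan2 form, and the 45-degree sector boundaries are an assumption consistent with the four regions of Fig. 6:

```python
import math

def gesture_direction(M: tuple, C: tuple) -> str:
    """Classify the finger direction from the C-to-M geometry.
    Image y grows downward, hence the sign flip on dy."""
    dx = M[0] - C[0]
    dy = C[1] - M[1]                           # flip so positive dy means 'up'
    angle = math.degrees(math.atan2(dy, dx))   # -180..180, 0 = right
    if -45 <= angle < 45:
        return "right"
    if 45 <= angle < 135:
        return "up"
    if -135 <= angle < -45:
        return "down"
    return "left"
```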
Step 8: repeat Steps 1~7 and, according to whether each recognized gesture matches the opening direction of the test letter "E", measure the examinee's vision level.
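A hedged sketch of the Step 8 driver loop; show_e and recognize_gesture are hypothetical callables standing in for the display logic and for Steps 2~7 above, and the stop-on-first-error policy is an assumption (the patent states only that Steps 1~7 are repeated):

```python
def measure_vision(show_e, recognize_gesture, levels):
    """show_e(level) -> str: display an 'E' at the given acuity level and
    return its opening direction ('up', 'down', 'left' or 'right').
    recognize_gesture() -> str or None: run Steps 2-7 and return the
    recognized direction, or None if no valid gesture was detected.
    Returns the last level answered correctly, or None."""
    best = None
    for level in levels:                 # e.g. progressively smaller letters
        target = show_e(level)           # display the letter first
        answer = recognize_gesture()
        if answer != target:
            break                        # wrong or missing answer ends the run
        best = level
    return best
```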

Claims (4)

1. A vision testing method based on depth-image gesture recognition, characterized in that depth-image gesture recognition is applied to vision testing, comprising the following steps:
Step 1: the examinee makes a gesture indicating the opening direction of the test letter "E" on the display screen;
Step 2: a depth-extraction device captures a depth image;
Step 3: gray-scale pixel statistics are computed from the depth image to generate a gray-scale map with gray values in the range 0~255;
Step 4: a gray-level histogram is generated and the depth threshold separating the gesture region from the background is determined;
Step 5: using the depth threshold determined in Step 4, the image is binarized into gesture region and background, and the gesture image is segmented from the background;
Step 6: finger features are extracted and the presence of a gesture is judged;
Step 7: the direction the gesture points is recognized;
Step 8: Steps 1~7 are repeated and, according to whether each recognized gesture matches the opening direction of the test letter "E", the examinee's vision level is measured.
2. The vision testing method based on depth-image gesture recognition according to claim 1, characterized in that determining the depth threshold separating the gesture region from the background in Step 4 comprises:
(1) counting the pixels in each depth range and generating the gray-level histogram;
(2) traversing the gray-level intervals of the histogram in descending order and computing the change in pixel count between consecutive intervals; the first gray value at which the change exceeds a variation threshold is the depth threshold separating the gesture region from the background, the variation threshold typically lying between 1.0×10⁴ and 1.5×10⁴.
3. The vision testing method based on depth-image gesture recognition according to claim 1, characterized in that extracting the finger features and judging the presence of a gesture in Step 6 is performed as follows:
(1) obtaining the center point C of the gesture region by the erosion operation of mathematical morphology;
(2) drawing equidistant circular tracks in the gesture region;
(3) marking the pixel-value change points on the circular tracks, as follows:
traversing each circular track clockwise and recording every pixel-value change point, where a change from the black region to the white region is marked P_ij and a change from white to black is marked Q_ij, i being the number of the track circle and j the number of the P or Q point on that circle, and each time a change point is encountered the corresponding index j is incremented by one; after marking, deleting points whose subscript ij is identical but whose P and Q do not occur as a pair;
(4) counting the fingers, as follows:
first finding, for each track circle i, the maximum index J_i = max(j) reached on that circle, then taking the maximum over all circles, N0 = max_i(J_i), N0 being the total number of branches connected to the palm; since the branches include the wrist as well as the fingers, the finger count is N = N0 − 1;
(5) if N equals 1, proceeding to gesture-direction recognition; otherwise judging that no gesture is present.
4. The vision testing method based on depth-image gesture recognition according to claim 1, characterized in that the gesture-direction recognition in Step 7 is performed as follows:
(1) eroding the finger region and finding the center point M of the finger;
(2) computing the tangent of the angle A between the line joining finger center M and palm center C and the horizontal or vertical axis, and determining the finger-direction region;
(3) judging the pointing of the gesture from the black region in which point M falls, the gesture direction being the direction corresponding to the region containing M.
CN201310552006.5A 2013-11-08 2013-11-08 Optometry method based on depth-image gesture recognition Pending CN103598870A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310552006.5A CN103598870A (en) 2013-11-08 2013-11-08 Optometry method based on depth-image gesture recognition

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310552006.5A CN103598870A (en) 2013-11-08 2013-11-08 Optometry method based on depth-image gesture recognition

Publications (1)

Publication Number Publication Date
CN103598870A (en) 2014-02-26

Family

ID=50117181

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310552006.5A Pending CN103598870A (en) 2013-11-08 2013-11-08 Optometry method based on depth-image gesture recognition

Country Status (1)

Country Link
CN (1) CN103598870A (en)

Cited By (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104970763A (en) * 2014-04-09 2015-10-14 冯保平 Full-automatic vision detecting training instrument
CN105224809A (en) * 2015-10-16 2016-01-06 中山大学 A kind of self-service examination system based on Kinect and method
CN106709393A (en) * 2015-11-13 2017-05-24 航天信息股份有限公司 QR two-dimensional code binarization method and system
CN105643590A (en) * 2016-03-31 2016-06-08 河北工业大学 Wheeled mobile robot controlled by gestures and operation method of wheeled mobile robot
CN106073694A (en) * 2016-07-21 2016-11-09 浙江理工大学 A kind of interactive sighting target display system based on Kinect and sighting target display optimization method
CN106599771A (en) * 2016-10-21 2017-04-26 上海未来伙伴机器人有限公司 Gesture image recognition method and system
CN107491763A (en) * 2017-08-24 2017-12-19 歌尔科技有限公司 Finger areas dividing method and device based on depth image
CN108634925A (en) * 2018-03-16 2018-10-12 河海大学常州校区 A kind of vision testing system based on gesture identification
CN109543543A (en) * 2018-10-25 2019-03-29 深圳市象形字科技股份有限公司 A kind of auxiliary urheen practitioner's bowing detection method based on computer vision technique
CN109523567A (en) * 2018-10-25 2019-03-26 深圳市象形字科技股份有限公司 A kind of auxiliary urheen practitioner's fingering detection method based on computer vision technique
CN109472222A (en) * 2018-10-25 2019-03-15 深圳市象形字科技股份有限公司 A kind of auxiliary urheen practitioner's attitude detecting method based on computer vision technique
CN109710076A (en) * 2018-12-30 2019-05-03 厦门盈趣科技股份有限公司 A kind of circuit board automatic testing method and device
CN110123258A (en) * 2019-03-29 2019-08-16 深圳和而泰家居在线网络科技有限公司 Method, apparatus, eyesight detection device and the computer storage medium of sighting target identification
CN110096991A (en) * 2019-04-25 2019-08-06 西安工业大学 A kind of sign Language Recognition Method based on convolutional neural networks
CN110276324A (en) * 2019-06-27 2019-09-24 北京万里红科技股份有限公司 The elliptical method of pupil is determined in a kind of iris image
CN110276324B (en) * 2019-06-27 2021-06-22 北京万里红科技股份有限公司 Method for determining pupil ellipse in iris image
CN111626136A (en) * 2020-04-29 2020-09-04 惠州华阳通用电子有限公司 Gesture recognition method, system and equipment
CN111626136B (en) * 2020-04-29 2023-08-18 惠州华阳通用电子有限公司 Gesture recognition method, system and equipment
CN112842249A (en) * 2021-03-09 2021-05-28 京东方科技集团股份有限公司 Vision detection method, device, equipment and storage medium
CN112842249B (en) * 2021-03-09 2024-04-19 京东方科技集团股份有限公司 Vision detection method, device, equipment and storage medium
CN113243886A (en) * 2021-06-11 2021-08-13 四川翼飞视科技有限公司 Vision detection system and method based on deep learning and storage medium

Similar Documents

Publication Publication Date Title
CN103598870A (en) Optometry method based on depth-image gesture recognition
CN110232311B (en) Method and device for segmenting hand image and computer equipment
CN102402680B (en) Hand and indication point positioning method and gesture confirming method in man-machine interactive system
CN103941866B (en) Three-dimensional gesture recognizing method based on Kinect depth image
WO2020151489A1 (en) Living body detection method based on facial recognition, and electronic device and storage medium
Ma et al. Kinect sensor-based long-distance hand gesture recognition and fingertip detection with depth information
CN110032271B (en) Contrast adjusting device and method, virtual reality equipment and storage medium
Nai et al. Fast hand posture classification using depth features extracted from random line segments
CN102508574B (en) Projection-screen-based multi-touch detection method and multi-touch system
CN103218605B (en) A kind of fast human-eye positioning method based on integral projection and rim detection
CN103677274B (en) A kind of interaction method and system based on active vision
CN104504856A (en) Fatigue driving detection method based on Kinect and face recognition
CN104346816A (en) Depth determining method and device and electronic equipment
WO2012117392A1 (en) Device, system and method for determining compliance with an instruction by a figure in an image
CN110232379A (en) A kind of vehicle attitude detection method and system
CN103207709A (en) Multi-touch system and method
CN104123543A (en) Eyeball movement identification method based on face identification
CN106503619B (en) Gesture recognition method based on BP neural network
CN105354812B (en) Multi-Kinect cooperation-based depth threshold segmentation algorithm contour recognition interaction method
CN106447695A (en) Same object determining method and device in multi-object tracking
CN114627186A (en) Distance measuring method and distance measuring device
CN109472257B (en) Character layout determining method and device
Su et al. Smart training: Mask R-CNN oriented approach
CN103761011A (en) Method, system and computing device of virtual touch screen
CN103426000A (en) Method for detecting static gesture fingertip

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
C02 Deemed withdrawal of patent application after publication (patent law 2001)
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20140226