CN104850228B - Method for locking the gaze region of the eyeball based on a mobile terminal - Google Patents

Method for locking the gaze region of the eyeball based on a mobile terminal

Info

Publication number
CN104850228B
CN104850228B (application CN201510245605.1A)
Authority
CN
China
Prior art keywords
rate
region
face area
current
bound
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201510245605.1A
Other languages
Chinese (zh)
Other versions
CN104850228A (en)
Inventor
叶林生
张莹雪
夏立
盛斌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Artificial Intelligence Research Institute Co., Ltd.
Original Assignee
Shanghai Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Jiaotong University filed Critical Shanghai Jiaotong University
Priority to CN201510245605.1A priority Critical patent/CN104850228B/en
Publication of CN104850228A publication Critical patent/CN104850228A/en
Application granted granted Critical
Publication of CN104850228B publication Critical patent/CN104850228B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Telephone Function (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

A method for locking the gaze region of the eyeball based on a mobile terminal: a face region is detected in the video stream, an estimated eye region is partitioned from it, the eye-shaped regions are obtained after subdivision, the pupil center position is obtained by gray-value binarization, and the current gaze region is finally obtained by comparison with the standard-state position. The present invention has very high computational efficiency and can complete the computation very efficiently.

Description

Method for locking the gaze region of the eyeball based on a mobile terminal
Technical field
The present invention relates to a technique in the field of video processing, specifically a method for locking the gaze region of the eyeball based on a mobile terminal.
Background technology
With the rapid development of information technology and users' ever-growing demands on the performance of mobile terminal devices, the screen size of mainstream mobile phones has shown a continuously increasing trend. Five years ago, when the first Android smartphone, the Google G1, appeared with a 3.2-inch screen, no one could have imagined the direction and trajectory along which smartphones would develop. With the rapid development of mobile Internet applications and social media, the increase in smartphone screen size has been echoed by applications, and consumers have also gradually, almost unconsciously, accepted ever-larger "big phones".
Large-screen phones bring users advantages that small-screen phones cannot match, letting users enjoy the large-screen experience anytime and anywhere. For example, when navigating on a map with a large screen, more content and more places on the map can be seen on one screen, eliminating many zooming operations, which is a convenience; likewise, when watching films, reading books, or sharing photos with friends, large screens hold a clear advantage. Many users even report that, once accustomed to a large screen, they find it hard to readjust to a smaller one.
While large-screen phones improve the user experience, they also make normal operation harder. Users are generally accustomed to, and pleased with, one-handed operation of their phones, but on a large-screen phone the fingers of a single hand cannot cover most of the operating area. There are many solutions to this problem; for example, Samsung added the Smart Scroll function to the S4 to realize eyeball control.
Invention content
In view of the above-mentioned deficiencies of the prior art, the present invention proposes a method for locking the gaze region of the eyeball based on a mobile terminal: the positions of the user's pupils are first identified in the camera video stream, and by analyzing changes in eyeball position, the gaze region of the user's eyes is inferred, so that the display strategy of the phone can be adjusted accordingly. The method used in the present invention is based on the OpenCV open-source library, has very high computational efficiency, and can complete the computation very efficiently.
The present invention is achieved by the following technical solution:
The present invention relates to a method for locking the gaze region of the eyeball based on a mobile terminal: a face region is acquired from the video stream, and an estimated eye region is partitioned from it; the eye-shaped regions are obtained after subdivision, and the pupil center position is obtained by gray-value binarization; the current gaze region is finally obtained by comparison with the standard-state position.
The face region is identified by a cascade classifier, and is preferably stabilized by a moving average and a fixed face-region size.
The partition refers to: the estimated eye rectangles are obtained from the face region as follows: the upper edges of the left-eye and right-eye rectangles are at a distance of height/3.7 from the upper edge of the face region; the width of each eye rectangle is width/3; the two eye rectangles are adjacent and centered in the horizontal direction of the screen; and the height of each eye rectangle is height/4, where height is the height of the captured video and width is its width, both in pixels.
The subdivision refers to: eye-shaped regions are identified within the estimated eye region using a cascade classifier.
The standard-state position refers to: the ratio of the pupil-center position to the face region under the standard state.
The comparison refers to: in the learning stage, the ratio of the user's pupil-center position to the face region is recorded in real time; in the control stage, the current region of interest is computed by comparing changes in the ratio values.
Description of the drawings
Fig. 1 is the flow chart of the present invention.
Fig. 2 is the module system diagram.
Fig. 3 is an effect diagram of the learning stage of the embodiment.
Fig. 4 is an effect diagram of the control stage of the embodiment.
Fig. 5 is a schematic diagram of the eye-region proportions of the embodiment.
Detailed description of the embodiments
The embodiments of the present invention are elaborated below. The present embodiment is implemented on the basis of the technical solution of the present invention, and a detailed implementation and a specific operating process are given, but the protection scope of the present invention is not limited to the following embodiment.
Embodiment 1
As shown in Figure 1, the present embodiment includes the following steps:
Step 1: obtain the video stream from the front camera.
In the present embodiment, the video stream is obtained through the methods provided by the OpenCV for Android image library: an org.opencv.android.JavaCameraView control is used in the layout, and by configuring this view in the activity, each frame is obtained in onCameraFrame.
Step 2: analyze the user's face region in the video stream obtained in Step 1.
The face region is identified by the cascade classifier provided by the OpenCV for Android image library, loading the open-source lbpcascade_frontalface.xml classifier file to recognize the region of the face.
Since the above classifier cannot identify the face stably, the present embodiment uses the following two methods to overcome this problem:
1) Moving average: the face regions recognized in the previous 3 frames are saved, and the current recognition result is taken as the average of the current result and the previous three results, so as to stabilize the identified face region.
2) Fixed face-region size: since the distance between the user and the phone stays essentially constant during operation, a fixed face-region size is sufficient for use and also helps stabilize the identified face region.
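The 3-frame moving average above can be sketched as follows. This is a minimal stand-alone illustration, not the embodiment's Java/OpenCV code; the FaceRegionSmoother name and the (x, y, w, h) tuple format are assumptions made for the sketch.

```python
from collections import deque

class FaceRegionSmoother:
    """Smooths detected face rectangles (x, y, w, h) by averaging the
    current detection with the previous three, as in the embodiment."""

    def __init__(self, history=3):
        self.history = deque(maxlen=history)

    def smooth(self, rect):
        # Average the current rectangle with up to `history` previous ones.
        samples = list(self.history) + [rect]
        avg = tuple(sum(r[i] for r in samples) / len(samples) for i in range(4))
        self.history.append(rect)
        return avg

smoother = FaceRegionSmoother()
for detected in [(100, 80, 200, 200), (104, 80, 200, 200),
                 (96, 84, 200, 200), (100, 76, 200, 200)]:
    stable = smoother.smooth(detected)
print(stable)  # (100.0, 80.0, 200.0, 200.0)
```

Because each output is the mean of up to four detections, single-frame jitter of the classifier is damped while slow head movement is still tracked.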
Step 3: compute the user's estimated eye region from the analyzed face region and the facial proportions of an ordinary user.
Since the proportions of an ordinary person's facial organs fall within a reasonable range, this provides the basis for computing a rough eye region.
The proportions used in the present embodiment are shown in Fig. 5, namely: the upper edges of the left-eye and right-eye rectangles are at a distance of height/3.7 from the upper edge of the face region; the width of each eye rectangle is width/3; the two eye rectangles are adjacent and centered in the horizontal direction of the screen; and the height of each eye rectangle is height/4, where height is the height of the captured video and width is its width, both in pixels.
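As a stand-alone illustration of these proportions, the two rectangles can be computed as below. For simplicity the vertical offset is measured here from the top of the frame, whereas the text measures it from the upper edge of the face region; that simplification, and the function name, are assumptions of this sketch.

```python
def eye_rectangles(width, height):
    """Estimated left/right eye rectangles (x, y, w, h) from the captured
    video size, using the proportions stated in the embodiment."""
    top = height / 3.7          # distance of the eye-rect upper edge from the top
    eye_w = width / 3           # each eye rectangle is one third of the width
    eye_h = height / 4
    # The two rectangles are adjacent and centered horizontally, so
    # together they span 2/3 of the width, starting at width/6.
    left = (width / 6, top, eye_w, eye_h)
    right = (width / 6 + eye_w, top, eye_w, eye_h)
    return left, right

l, r = eye_rectangles(640, 480)
```

For a 640x480 frame this yields two adjacent 213x120 rectangles whose combined span is horizontally centered.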
Step 4: analyze the two eye-shaped regions within the eye region obtained in Step 3.
These rectangular regions likewise use the cascade classifier provided by the OpenCV for Android image library, loading the left-eye and right-eye classifier files trained by Shiqi Yu (http://yushiqi.cn/research/eyedetection), so that the eye regions can be identified stably.
Step 5: binarize the gray-scale image of the rectangular regions from Step 4 and find the center of the black region, i.e. the center of the pupil. Specifically: first obtain the gray-scale image of the eye region, then binarize the eye region with a gray-value threshold of 30; after binarization, the gray value of the pupil region becomes 0 and that of the other regions becomes 255. By averaging the coordinates of all points whose gray value is 0, the center of the pupil is obtained.
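The binarize-and-average step can be sketched as follows, on a plain list-of-rows gray image rather than an OpenCV Mat. Whether pixels exactly equal to the threshold 30 count as pupil is not specified in the text and is assumed here.

```python
def pupil_center(gray, threshold=30):
    """Threshold a gray-scale eye patch (list of rows of 0..255 values) at
    `threshold` and return the mean (x, y) of the dark (pupil) pixels."""
    xs, ys = [], []
    for y, row in enumerate(gray):
        for x, value in enumerate(row):
            if value <= threshold:   # becomes 0 after binarization -> pupil
                xs.append(x)
                ys.append(y)
    if not xs:
        return None  # no pixel darker than the threshold
    return (sum(xs) / len(xs), sum(ys) / len(ys))

# A toy 5x5 patch with a dark 2x2 "pupil" in the lower right.
patch = [[200] * 5 for _ in range(5)]
for y in (2, 3):
    for x in (3, 4):
        patch[y][x] = 10
print(pupil_center(patch))  # (3.5, 2.5)
```

Averaging the coordinates of all below-threshold pixels is equivalent to taking the centroid of the binarized pupil blob.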
Step 6: analyze the gaze region of the user's eyes from the change between the pupil-center position and the standard-state position. Specifically: in the learning stage, the ratio of the user's pupil-center position to the face region is recorded in real time; in the control stage, the current region of interest is computed by comparing changes in the ratio values.
The ratio values are computed as:
Rate_X = abscissa of the two pupil centers / width of the face rectangle
Rate_Y = ordinate of the two pupil centers / height of the face rectangle
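Interpreting "the abscissa/ordinate of the two pupil centers" as the midpoint of the two pupil centers — an assumption, since the text does not say how the two pupils' coordinates are combined — the ratios can be computed as:

```python
def gaze_ratios(left_pupil, right_pupil, face_rect):
    """Rate_X and Rate_Y from the two pupil centers (x, y) and the face
    rectangle (x, y, w, h); the pupil midpoint is used as the combined
    coordinate (an interpretation of the formulas in the text)."""
    mx = (left_pupil[0] + right_pupil[0]) / 2
    my = (left_pupil[1] + right_pupil[1]) / 2
    x, y, w, h = face_rect
    return mx / w, my / h

rx, ry = gaze_ratios((120, 150), (200, 150), (80, 100, 240, 300))
print(rx, ry)
```

Dividing by the face rectangle's size makes the ratios insensitive to the user moving slightly closer to or farther from the camera.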
The comparison and judgment of the ratio values are:
Rate_X(current) - Rate_X(standard) > bound_X && Rate_Y(current) - Rate_Y(standard) > bound_Y → gazing at the lower-right region
Rate_X(current) - Rate_X(standard) < -bound_X && Rate_Y(current) - Rate_Y(standard) > bound_Y → gazing at the lower-left region
Rate_X(current) - Rate_X(standard) > bound_X && Rate_Y(current) - Rate_Y(standard) < -bound_Y → gazing at the upper-right region
Rate_X(current) - Rate_X(standard) < -bound_X && Rate_Y(current) - Rate_Y(standard) < -bound_Y → gazing at the upper-left region
The image coordinate system above takes the upper-left corner of the image as the origin, with the X-axis horizontal and the Y-axis vertical.
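The four judgment rules above can be sketched as a small classifier. The "center" fallback for movements inside the bounds, and the example bound values, are assumptions — the text only specifies the four corner cases.

```python
def gaze_quadrant(rate, standard, bound_x, bound_y):
    """Classify the gaze region by comparing the current (Rate_X, Rate_Y)
    against the standard-state ratios; image origin is at the top-left,
    so a larger Rate_Y means looking further down."""
    dx = rate[0] - standard[0]
    dy = rate[1] - standard[1]
    if dx > bound_x and dy > bound_y:
        return "lower-right"
    if dx < -bound_x and dy > bound_y:
        return "lower-left"
    if dx > bound_x and dy < -bound_y:
        return "upper-right"
    if dx < -bound_x and dy < -bound_y:
        return "upper-left"
    return "center"  # within the dead zone: no gaze shift detected

print(gaze_quadrant((0.55, 0.62), (0.50, 0.50), 0.03, 0.03))  # lower-right
```

The bound_X/bound_Y dead zone suppresses the small involuntary eye movements recorded during the learning stage.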
Implementation results
Following the above steps, the present embodiment was tested on 5 male users and 3 female users. All experiments were run on a mobile phone carrying the Android 4.4 operating system, whose main parameters are: CPU HiSilicon Kirin 910T (1.8 GHz), memory 2 GB.
The results show that for all users, under good lighting, the gaze region can be locked stably using the user's eyes; errors occur occasionally, with an error rate of 9%. This experiment shows that the gaze-region locking method of the present embodiment can effectively meet the requirements.

Claims (1)

1. A method for locking the gaze region of the eyeball based on a mobile terminal, characterized in that: a face region identified by a cascade classifier is acquired from the video stream, and an estimated eye region is partitioned from it; the eye-shaped regions are obtained after subdivision, and the pupil center position is obtained by gray-value binarization; the current gaze region is finally obtained by comparison with the standard-state position;
the video stream is obtained through the methods provided by the OpenCV for Android image library: an org.opencv.android.JavaCameraView control is used in the layout, and by configuring the View control in the activity, each frame is obtained in onCameraFrame;
the face region is further stabilized, after the cascade classifier, by a moving average and a fixed face-region size;
the moving average refers to: the face regions recognized in the previous 3 frames are saved, and the current recognition result is taken as the average of the current result and the previous three results, so as to stabilize the identified face region;
the fixed face-region size refers to: since the distance between the user and the phone stays essentially constant during operation, a fixed face-region size is sufficient for use and also helps stabilize the identified face region;
the partition refers to: the estimated eye regions are obtained from the face region as follows: the upper edges of the left-eye and right-eye rectangles are at a distance of height/3.7 from the upper edge of the face region; the width of each eye rectangle is width/3; the two eye rectangles are adjacent and centered in the horizontal direction of the screen; and the height of each eye rectangle is height/4, where height is the height of the captured video and width is its width, both in pixels; the rectangular regions use the cascade classifier provided by the OpenCV for Android image library, loading the left-eye and right-eye classifier files;
the subdivision refers to: eye-shaped regions are identified within the estimated eye region using a cascade classifier;
the gray-value binarization processing is: first obtain the gray-scale image of the eye region, then binarize the eye region with a gray-value threshold of 30; after binarization, the gray value of the pupil region becomes 0 and that of the other regions becomes 255; by averaging the coordinates of all points whose gray value is 0, the center of the pupil is obtained;
the standard-state position refers to: the ratio of the pupil-center position to the face region under the standard state;
the comparison refers to: in the learning stage, the ratio of the user's pupil-center position to the face region is recorded in real time; in the control stage, the current region of interest is computed by comparing changes in the ratio values;
the ratio values are: Rate_X = abscissa of the two pupil centers / width of the face rectangle; Rate_Y = ordinate of the two pupil centers / height of the face rectangle;
the comparison and judgment of the ratio values are: Rate_X(current) - Rate_X(standard) > bound_X && Rate_Y(current) - Rate_Y(standard) > bound_Y → gazing at the lower-right region; Rate_X(current) - Rate_X(standard) < -bound_X && Rate_Y(current) - Rate_Y(standard) > bound_Y → gazing at the lower-left region; Rate_X(current) - Rate_X(standard) > bound_X && Rate_Y(current) - Rate_Y(standard) < -bound_Y → gazing at the upper-right region; Rate_X(current) - Rate_X(standard) < -bound_X && Rate_Y(current) - Rate_Y(standard) < -bound_Y → gazing at the upper-left region.
CN201510245605.1A 2015-05-14 2015-05-14 Method for locking the gaze region of the eyeball based on a mobile terminal Active CN104850228B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510245605.1A CN104850228B (en) 2015-05-14 2015-05-14 Method for locking the gaze region of the eyeball based on a mobile terminal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510245605.1A CN104850228B (en) 2015-05-14 2015-05-14 Method for locking the gaze region of the eyeball based on a mobile terminal

Publications (2)

Publication Number Publication Date
CN104850228A CN104850228A (en) 2015-08-19
CN104850228B true CN104850228B (en) 2018-07-17

Family

ID=53849924

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510245605.1A Active CN104850228B (en) 2015-05-14 2015-05-14 Method for locking the gaze region of the eyeball based on a mobile terminal

Country Status (1)

Country Link
CN (1) CN104850228B (en)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105892691A (en) * 2016-06-07 2016-08-24 京东方科技集团股份有限公司 Method and device for controlling travel tool and travel tool system
CN106491073A (en) * 2016-10-20 2017-03-15 鹄誉医疗科技(上海)有限公司 A kind of human eye pupil detection method of quick high robust
CN108229252B (en) * 2016-12-15 2020-12-15 腾讯科技(深圳)有限公司 Pupil positioning method and system
CN107392120B (en) * 2017-07-06 2020-04-14 电子科技大学 Attention intelligent supervision method based on sight line estimation
CN107817899B (en) * 2017-11-24 2018-06-26 南京同睿信息科技有限公司 A kind of user watches content real-time analysis method
CN108828771A (en) * 2018-06-12 2018-11-16 北京七鑫易维信息技术有限公司 Parameter regulation means, device, wearable device and the storage medium of wearable device
CN109145864A (en) * 2018-09-07 2019-01-04 百度在线网络技术(北京)有限公司 Determine method, apparatus, storage medium and the terminal device of visibility region
CN109782913A (en) * 2019-01-10 2019-05-21 中科创达软件股份有限公司 A kind of method and device that control screen content is shown
CN110399808A (en) * 2019-07-05 2019-11-01 桂林安维科技有限公司 A kind of Human bodys' response method and system based on multiple target tracking
CN113836973A (en) * 2020-06-23 2021-12-24 中兴通讯股份有限公司 Terminal control method, device, terminal and storage medium
CN116527990B (en) * 2023-07-05 2023-09-26 深圳市康意数码科技有限公司 Intelligent control method and system for television playing

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101893934A (en) * 2010-06-25 2010-11-24 宇龙计算机通信科技(深圳)有限公司 Method and device for intelligently adjusting screen display
CN101950200A (en) * 2010-09-21 2011-01-19 浙江大学 Camera based method and device for controlling game map and role shift by eyeballs
CN103034520A (en) * 2012-12-31 2013-04-10 广东欧珀移动通信有限公司 Method and system for starting applications
US20140267771A1 (en) * 2013-03-14 2014-09-18 Disney Enterprises, Inc. Gaze tracking and recognition with image location
US20140300538A1 (en) * 2013-04-08 2014-10-09 Cogisen S.R.L. Method for gaze tracking
CN104216508A (en) * 2013-05-31 2014-12-17 中国电信股份有限公司 Method and device for operating function key through eye movement tracking technique


Also Published As

Publication number Publication date
CN104850228A (en) 2015-08-19

Similar Documents

Publication Publication Date Title
CN104850228B (en) Method for locking the gaze region of the eyeball based on a mobile terminal
CN110100251B (en) Apparatus, method, and computer-readable storage medium for processing document
CN112087574B (en) Enhanced image capture
CN106165391B (en) Enhanced image capture
CN110769158B (en) Enhanced image capture
CN109343700B (en) Eye movement control calibration data acquisition method and device
CN109375765B (en) Eyeball tracking interaction method and device
CN105472174A (en) Intelligent eye protecting method achieved by controlling distance between mobile terminal and eyes
US20110013829A1 (en) Image processing method and image processing apparatus for correcting skin color, digital photographing apparatus using the image processing apparatus, and computer-readable storage medium for executing the method
CN103472915B (en) reading control method based on pupil tracking, reading control device and display device
CN105243371A (en) Human face beauty degree detection method and system and shooting terminal
WO2016029399A1 (en) Object selection based on region of interest fusion
CN113627306B (en) Key point processing method and device, readable storage medium and terminal
CN112308797B (en) Corner detection method and device, electronic equipment and readable storage medium
CN109600555A (en) A kind of focusing control method, system and photographing device
US20150248221A1 (en) Image processing device, image processing method, image processing system, and non-transitory computer readable medium
JP2013186838A (en) Generation device, generation program, and generation method
Betancourt et al. Towards a unified framework for hand-based methods in first person vision
CN105430269A (en) Shooting method and apparatus applied to mobile terminal
CN106067167A (en) Image processing method and device
CN107436675A (en) A kind of visual interactive method, system and equipment
CN113284063A (en) Image processing method, image processing apparatus, electronic device, and readable storage medium
CN110047126B (en) Method, apparatus, electronic device, and computer-readable storage medium for rendering image
CN112101275B (en) Human face detection method, device, equipment and medium for multi-view camera
CN111967436B (en) Image processing method and device

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
EXSB Decision made by sipo to initiate substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20190227

Address after: Room E201, Michigan College Building, 800 Dongchuan Road, Minhang District, Shanghai, 200240

Patentee after: Shanghai Jiaotong University Intellectual Property Management Co., Ltd.

Address before: 200240 No. 800, Dongchuan Road, Shanghai, Minhang District

Patentee before: Shanghai Jiao Tong University

TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20190531

Address after: Room 201, Building 3, 930 Jianchuan Road, Minhang District, Shanghai, 200240

Patentee after: Shanghai Artificial Intelligence Research Institute Co., Ltd.

Address before: Room E201, Michigan College Building, 800 Dongchuan Road, Minhang District, Shanghai, 200240

Patentee before: Shanghai Jiaotong University Intellectual Property Management Co., Ltd.

TR01 Transfer of patent right