CN110032268A - A real-time human-machine interaction system based on gaze tracking and its working method - Google Patents

A real-time human-machine interaction system based on gaze tracking and its working method Download PDF

Info

Publication number
CN110032268A
CN110032268A (application CN201810026389.5A)
Authority
CN
China
Prior art keywords
component
information
user
eye
real time
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810026389.5A
Other languages
Chinese (zh)
Inventor
束齐展
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Priority to CN201810026389.5A
Publication of CN110032268A
Legal status: Pending

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/013 Eye tracking input arrangements
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a real-time human-machine interaction system based on gaze tracking, together with its working method. The system comprises a data acquisition component, a data analysis and processing component, an action feedback component, and a communication and control component. The data acquisition component tracks the user's facial feature information and establishes coordinate systems. The data analysis and processing component is connected to the data acquisition component and analyzes and processes the information tracked by the data acquisition component. The action feedback component is connected to the data analysis and processing component; it compares the processed information against the information stored in the action feedback component and feeds the result back to the communication and control component. The communication and control component is connected to the action feedback component; it receives the feedback information provided by the action feedback component and controls the actions of the relevant devices accordingly. The invention achieves higher recognition precision while reducing production and application costs.

Description

A real-time human-machine interaction system based on gaze tracking and its working method
Technical field
The present invention relates to intelligent human-computer interaction systems, and in particular to a real-time human-machine interaction system based on gaze tracking and its working method.
Background technique
With the widespread use of computers and artificial intelligence technology, human-machine interaction systems have grown rapidly. Through a human-machine interaction system, people and computers exchange and communicate, allowing computers to intelligently handle information management, services, processing, and similar functions on people's behalf. Gaze tracking, also known as eye tracking, is a technology that uses electronic, mechanical, optical, and other detection means to determine the direction of the user's current visual attention. Visual attention is the basis of gaze-tracking applications: it reveals the target region, or region of interest, on which the user's attention is focused, so the content the user is interested in can be inferred by tracking the user's visual path.
Existing human-computer interaction is typically completed by manual input or by manually operated physical contact. Applying gaze tracking in a human-machine interaction system frees the user's hands: a gaze-tracking system feeds the user's visual attention target point into the computer in real time to complete the user's commands, thereby avoiding the clumsiness and redundancy of traditional input devices while adding flexibility and appeal to the equipment.
When gaze tracking is applied in the computer field to build a gaze-tracking system that simulates mouse operation, facial images must be processed, the characteristic parameters of the human eye extracted, and an eye model built that maps eye-space coordinates to computer screen coordinate points, with real-time feedback. Similar technologies exist both at home and abroad, but most rely on expensive professional equipment or on external hardware to achieve their goal, and their precision is mostly unsatisfactory.
Summary of the invention
To solve the above technical problems, in one aspect, the technical solution adopted by the present invention is:
A real-time human-machine interaction system based on gaze tracking, characterized by comprising a data acquisition component, a data analysis and processing component, an action feedback component, and a communication and control component;
The data acquisition component is used to track the user's facial feature information and convert it into data signals input to the system, while establishing a two-dimensional coordinate system and a three-dimensional coordinate system, respectively, from the facial feature information;
The data analysis and processing component is connected to the data acquisition component and is used to analyze and process the information tracked by the data acquisition component;
The action feedback component is connected to the data analysis and processing component; it compares the information processed by the data analysis and processing component against the information stored in the action feedback component, obtains feedback information, and passes it to the communication and control component to determine that component's control action;
The communication and control component is connected to the action feedback component; it receives the feedback information provided by the action feedback component and controls the actions of the relevant devices according to that information.
Preferably, the data acquisition component comprises multiple cameras arranged at different locations in the same space, so that it can capture image information of the same spatial environment from different angles.
Preferably, the data analysis and processing component comprises an image processing module and a facial feature point localization module, wherein:
The image processing module preprocesses every image frame in real time;
The facial feature point localization module identifies the user's face, the user's pupil coordinate points, and the inner and outer eye corners in the preprocessed images, and stores the video stream composed of multiple frames; from the video stream it recognizes changes in the user's facial feature points and shifts in visual attention.
Preferably, the image processing module's preprocessing of each frame includes, but is not limited to: synthesizing multiple images into a panoramic picture, grayscale conversion, Gaussian blur, and binarization, in preparation for subsequent recognition.
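The per-frame preprocessing steps named here (grayscale conversion, Gaussian blur, binarization) can be sketched with NumPy alone; a production system would more likely call an image library such as OpenCV, and the panorama synthesis step is omitted because it requires camera calibration. All function names are illustrative, not from the patent:

```python
import numpy as np

def to_grayscale(rgb):
    # BT.601 luma weights; rgb is an (H, W, 3) array
    return rgb @ np.array([0.299, 0.587, 0.114])

def gaussian_blur(gray, ksize=3, sigma=1.0):
    # separable 1-D Gaussian kernel applied along rows, then columns
    ax = np.arange(ksize) - ksize // 2
    k = np.exp(-ax ** 2 / (2.0 * sigma ** 2))
    k /= k.sum()
    rows = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, gray)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, rows)

def binarize(gray, thresh=128):
    # hard threshold to a black-and-white image for later recognition
    return np.where(gray > thresh, 255, 0).astype(np.uint8)
```

Each camera frame would pass through these three functions in order before landmark localization.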
Preferably, the facial feature point localization module is provided with multiple algorithms for recognizing changes in the user's facial feature points and shifts in visual attention, achieving real-time tracking of the user's visual attention target.
Preferably, the action feedback component comprises a facial feature point database module and a visual attention database module, wherein the facial feature point database module stores a series of user-preset facial feature actions with specific meanings, and the visual attention database module stores a series of user-preset special visual attention points within a fixed spatial environment; each special attention point is stored as a three-dimensional coordinate point together with the absolute facing angle of the face within the fixed spatial environment.
Preferably, the communication and control component comprises any one or several of a Wi-Fi module, a Bluetooth communication module, and an infrared transmission module.
A working method of the real-time human-machine interaction system based on gaze tracking according to any of the above, characterized by comprising the following steps:
S1: data acquisition and coordinate system establishment: the data acquisition component tracks the user's facial feature information and converts it into data signals; from the facial feature information it establishes a two-dimensional coordinate system referenced to the plane of the user's face and a three-dimensional coordinate system referenced to the user's position within the spatial environment;
S2: data analysis: the data analysis and processing component analyzes and processes the information tracked by the data acquisition component;
S3: comparison and determination: the action feedback component compares the information processed by the data analysis and processing component against the information stored in the action feedback component, and outputs different results to the communication and control component according to the comparison result;
S4: the communication and control component controls the relevant devices to perform the corresponding operations according to the specific action input by the action feedback component.
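The S1–S4 cycle above can be summarized as a short control loop. All class and method names here are hypothetical placeholders for the four components, sketched only to show how data flows between them:

```python
def interaction_cycle(cameras, analyzer, feedback, controller):
    """One working cycle S1-S4 of the described system (interfaces are illustrative)."""
    frames = [cam.capture() for cam in cameras]   # S1: acquire from every camera
    features = analyzer.process(frames)           # S2: synthesize panorama, locate features
    command = feedback.compare(features)          # S3: compare against stored databases
    if command is not None:
        controller.send(command)                  # S4: drive the matching smart appliance
    return command
```

The loop would run repeatedly; per the description's closing remarks, 30 or more cycles per second keeps operation smooth.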
Preferably, in step S1, multiple cameras arranged at different locations in the same space within the data acquisition component capture spatial environment information pictures from multiple different angles as the facial feature information.
Preferably, step S2 comprises:
S21: the image processing module in the data analysis and processing component synthesizes the spatial environment information pictures captured from multiple different angles into a single spatial environment panoramic picture by algorithm and performs the corresponding picture preprocessing;
S22: the facial feature point localization module in the data analysis and processing component performs facial feature point localization on the panoramic picture synthesized in S21, and stores the result temporarily in a cache.
The human-machine interaction system of the present invention offers higher precision; by following gaze-tracking methods and combining the relevant theory of face detection with image processing means, it reduces production and application costs, freeing human hands to do more things conveniently.
Detailed description of the invention
Fig. 1 is a schematic diagram of the composition of the system of the present invention.
Specific embodiment
To better illustrate the present invention, the technical solution is now further explained in conjunction with specific embodiments and the accompanying drawings. Although specific embodiments are described, they do not limit the invention: anyone with ordinary skill in the art may make changes and refinements without departing from the spirit and scope of the present invention, and the scope of protection of the present invention is therefore defined by the claims.
In the following description, numerous specific details are set forth to facilitate a full understanding of the present invention, but the present invention can also be implemented in ways other than those described here, and those skilled in the art can make similar generalizations without departing from its intent; therefore the present invention is not limited by the specific embodiments disclosed below.
Fig. 1 is a schematic diagram of the composition of the real-time human-machine interaction system based on gaze tracking provided by the present invention. As shown in the figure, the system comprises a data acquisition component, a data analysis and processing component, an action feedback component, and a communication and control component. The data acquisition component is connected to the data analysis and processing component; the action feedback component is connected to the data analysis and processing component; the communication and control component is connected to the action feedback component; and the communication and control component is simultaneously connected to the relevant devices, i.e., the corresponding smart appliances.
The data acquisition component is used to track the user's facial feature information and convert it into data signals input to the system, while establishing coordinate systems from the collected facial feature information: a two-dimensional coordinate system and a three-dimensional coordinate system. The three-dimensional coordinate system is established with the user's position in the spatial environment as the datum point; the two-dimensional coordinate system is established with the plane of the user's face as the datum plane. The data acquisition component comprises multiple cameras installed at different locations in the same space, ensuring that, while working, the cameras together can track and capture the user's facial feature information through 360° without blind spots; the space here is the space the user occupies, such as a living room, study, or bedroom. To guarantee tracking and capture quality, the cameras are high-resolution network cameras.
The data analysis and processing component is used to analyze and process the information tracked by the data acquisition component. It preprocesses the multiple pictures captured by the cameras using specific algorithms, synthesizes them into a panoramic picture of the spatial environment, performs recognition and analysis of the facial features in the panoramic image, and caches the results temporarily. The component comprises an image processing module and a facial feature point localization module. The image processing module preprocesses each frame in real time, including but not limited to synthesizing multiple pictures into a panorama, grayscale conversion, Gaussian blur, and binarization, in preparation for subsequent recognition. The facial feature point localization module has one or more built-in algorithms for identifying the user's facial detail features, pupil coordinate points, and inner and outer eye corners in the spatial environment panoramic pictures; it stores the video stream formed by multiple panoramic frames over a period of time, and from it identifies regular changes in the user's facial feature points across frames and shifts in visual attention, achieving real-time tracking of the user's visual attention target. The facial detail features mainly comprise multiple facial feature points, which can be selected according to precision and purpose; it is generally accepted that 68 feature points can be marked when classifying and recognizing a person's facial information.
In the fixed spatial environment, the target of visual attention is positioned relatively using eye feature points. Changes in the coordinates of the user's eye feature points over a period of time track changes in the direction and target the user is looking at, that is, changes in the visual attention coordinates. The three-dimensional coordinate system in the system of the present invention is established with the user's position in the spatial environment as the central datum point; this three-dimensional point positions the user within the spatial environment. Within this fixed environment, the user changes the visual attention coordinate point only through in-place movements such as turning the head, without bodily displacement. Because the three-dimensional coordinate system is established with the user as its fixed point, its three axes are determined. From the different positions at which the user's face appears in each panoramic frame in real time, the facing angle of the user's face at that moment can be calculated: the angle between the direction the user's face points and the x-axis of the coordinate system. Because the three-dimensional coordinate system is fixed, this facing angle can also be called the absolute facing angle.
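Given two landmarks that define the face's forward direction in the horizontal plane of the fixed, user-centred coordinate system, the absolute facing angle reduces to an `atan2` call. The patent does not say which landmarks define that direction, so the two-point interface below is an assumption:

```python
import math

def absolute_facing_angle(face_center, forward_point):
    """Angle in degrees between the face's forward direction, projected onto
    the horizontal plane of the fixed coordinate system, and its x-axis.
    The landmark pair defining the forward direction is an assumed choice."""
    dx = forward_point[0] - face_center[0]
    dy = forward_point[1] - face_center[1]
    return math.degrees(math.atan2(dy, dx)) % 360.0
```

Because the coordinate system is fixed on the user, the same target point always maps to the same angle, which is what lets the visual attention database store one angle per preset target.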
The eye feature point coordinates mainly comprise the pupil coordinate points and the coordinates of the inner and outer eye corners: the pupil coordinate points of the left and right eyes and the inner and outer corner coordinates of the left and right eyes, six coordinate points in total. The pupil coordinate point is the center-point coordinate of the eyeball in the two-dimensional coordinate system established with respect to the face plane located in the panoramic picture. The inner eye corners are the right end of the boundary between the sclera and skin of the user's left eye and the left end of that boundary of the user's right eye; the outer corners are the opposite. From the displacement relationships among the pupil coordinate points and these six corner points over a period of time, the moving direction and trajectory of the user's visual attention can be obtained. For example, at time point n the user's left pupil coordinate is (x, y), the inner corner of the left eye is (x+1, y), and the outer corner of the left eye is (x−1, y); at time point (n+1) the left pupil coordinate becomes (x+0.5, y). The system can then determine that within the period from n to (n+1) the user's visual attention moved 0.5 units to the left. The length of each unit is obtained from the mean distance between the inner and outer corners of the user's left and right eyes. This mean changes with the distance of the user's face from the camera, growing as the user moves closer and shrinking as the user moves away, so the unit length changes with it; the scaling of coordinate-system units caused by the user's varying distance from the camera during deployment and use is thereby avoided.
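The distance-invariant displacement described here can be sketched as follows. The text does not fix the exact scale factor relating pixel displacement to "units", so normalizing by the mean inner-to-outer corner distance is one plausible reading; the function names are illustrative:

```python
import math

def unit_length(l_inner, l_outer, r_inner, r_outer):
    # unit length = mean inner-to-outer corner distance of the two eyes,
    # which shrinks as the user moves away from the camera
    left = math.dist(l_inner, l_outer)
    right = math.dist(r_inner, r_outer)
    return (left + right) / 2.0

def attention_shift(pupil_prev, pupil_now, unit):
    # pupil displacement expressed in corner-distance units, so the result
    # does not depend on the user's distance from the camera
    return ((pupil_now[0] - pupil_prev[0]) / unit,
            (pupil_now[1] - pupil_prev[1]) / unit)
```

Dividing by the corner distance is what cancels the scaling: a user twice as far away produces half the pixel displacement but also half the unit length.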
The action feedback component is used to compare the information processed by the data analysis and processing component against the information stored in the action feedback component, obtain feedback information, and pass it to the communication and control component to determine that component's control action. The action feedback component comprises a facial feature point database module and a visual attention database module. The facial feature point database module stores a series of user-preset facial feature actions with specific meanings, such as rhythmic blinking or eyebrow movements; each facial feature action forms a feature video stream from multiple frames of typical facial movement. The visual attention database module stores a series of user-preset special visual attention points within a fixed spatial environment; each special attention point is stored as a three-dimensional coordinate point together with the absolute facing angle of the face within the fixed spatial environment. The visual attention database module compares, in real time, the user's visual attention coordinate point against the three-dimensional coordinate points of the special attention points stored in the database. When the user's visual attention coordinate point coincides with a stored special attention point and remains stable for a period of time, the system triggers the action feedback associated in the database with that special attention point's three-dimensional coordinate.
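The "coincide and remain stable for a period of time" trigger can be sketched as a dwell counter over preset 3-D targets. The coincidence radius and dwell length are assumed values, not taken from the patent:

```python
import math

class AttentionTrigger:
    """Fires a stored feedback action when the visual attention point stays
    within a preset 3-D target for a dwell period (assumed thresholds)."""

    def __init__(self, targets, radius=0.1, dwell_frames=45):  # ~1.5 s at 30 fps
        self.targets = targets          # {action_name: (x, y, z)}
        self.radius = radius
        self.dwell_frames = dwell_frames
        self.counts = {name: 0 for name in targets}

    def update(self, gaze_point):
        for name, coord in self.targets.items():
            if math.dist(gaze_point, coord) <= self.radius:
                self.counts[name] += 1
                if self.counts[name] >= self.dwell_frames:
                    self.counts[name] = 0
                    return name          # trigger the associated feedback
            else:
                self.counts[name] = 0    # attention moved away: reset dwell
        return None
```

Calling `update` once per frame yields `None` until the dwell threshold is reached, then the name of the triggered action.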
The communication and control component is used to receive the feedback information provided by the action feedback component and control the actions of the corresponding smart appliances according to that information. The communication and control component comprises a Wi-Fi module, a Bluetooth communication module, and an infrared transmission module; in actual use any one or several of them may be selected in different combinations.
The present invention also provides a working method of the real-time human-machine interaction system based on gaze tracking described above, comprising the following steps:
S1: data acquisition: the data acquisition component tracks the user's facial feature information and converts it into data signals. In a specific implementation, multiple cameras arranged at different locations in the same space track and capture, in real time, images of the fixed spatial environment and the user's facial feature information, and this information is converted into data signals fed into the data analysis and processing component. Meanwhile, a two-dimensional coordinate system and a three-dimensional coordinate system are established from the collected facial feature information: the three-dimensional coordinate system with the user's position in the spatial environment as the datum point, and the two-dimensional coordinate system with the plane of the user's face as the datum plane.
S2: data analysis: the data analysis and processing component analyzes and processes the information tracked by the data acquisition component.
In a specific implementation, the data analysis step comprises:
S21: the image processing module in the data analysis and processing component preprocesses the collected images: the pictures of the spatial environment and the user's movements captured from multiple different angles are synthesized by algorithm into a single panoramic picture, and the corresponding picture preprocessing, such as grayscale conversion, Gaussian blur, and binarization, is performed.
S22: the facial feature point localization module in the data analysis and processing component performs facial feature point localization on the preprocessed panoramic picture and stores the result temporarily in a cache. The module has one or more built-in algorithms that identify the user's facial detail features, pupil coordinate points, and inner and outer eye corners in the spatial environment panoramic pictures, store the video stream composed of multiple panoramic frames over a period of time, and from it identify regular changes in the user's facial feature points across frames and shifts in visual attention.
S3: comparison and determination: the action feedback component compares the information processed by the data analysis and processing component against the information stored in the action feedback component, and outputs different results to the communication and control component according to the comparison result.
The video stream composed of the preceding frames of panoramic pictures, temporarily saved in the cache, is compared against the series of user-preset feature actions with specific meanings saved in the database of the facial feature point database module; at the same time, the user's visual attention coordinate points obtained from the video stream are compared against the three-dimensional coordinates of the series of special visual attention points in the visual attention database module. Different outputs are produced according to the comparison results. If the facial feature point database contains a feature action with high similarity, the specific action corresponding to that feature video stream is output; for example, if the comparison matches the feature action saved in the database for turning on the air conditioner, the result "turn on the air conditioner" is output to the corresponding smart appliance. If the user's current visual attention coordinate point coincides with the three-dimensional coordinate of a special attention point stored in the database of the visual attention database module and remains stable for a period of time, the action feedback associated in the database with that three-dimensional coordinate is output; for example, when the user stares at the television switch, the television is turned on. If neither a similar feature video stream exists in the database of the facial feature point database module nor a similar special attention point coordinate exists in the database of the visual attention database module, the system loops back and evaluates the next frame.
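The feature-action comparison described above can be sketched as a best-match search over stored template streams. The patent does not define its similarity measure, so the frame-wise closeness test and the 0.9 threshold below are illustrative choices:

```python
def match_feature_action(observed, database, threshold=0.9):
    """Compare an observed per-frame landmark sequence against preset feature
    video streams; return the command of the best match above threshold.
    observed: list of per-frame landmark tuples; database: {command: template}."""
    best_cmd, best_score = None, 0.0
    for cmd, template in database.items():
        if len(template) != len(observed):
            continue  # naive: only compare streams of equal length
        close = sum(
            1 for obs_f, tpl_f in zip(observed, template)
            if all(abs(a - b) <= 2.0 for a, b in zip(obs_f, tpl_f))
        )
        score = close / len(observed)
        if score > best_score:
            best_cmd, best_score = cmd, score
    return best_cmd if best_score >= threshold else None
```

Returning `None` corresponds to the "loop back and evaluate the next frame" branch; a matched command is what gets forwarded to the communication and control component.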
S4: the communication and control component controls the relevant devices to perform the corresponding operations according to the specific action input by the action feedback component. Based on the output result determined by the comparison in step S3, the corresponding action is performed through the different communication and operation modules and sent to the corresponding smart appliance. For example, if the output result is "turn on the air conditioner", the infrared transmission module is called to send a signal instructing the air conditioner to power on.
At this point the human-machine interaction system has completed one working cycle. The working efficiency and recognition precision of the system are limited by the hardware devices chosen for its components; running 30 or more cycles per second guarantees smooth normal operation of the system.
In summary, the human-machine interaction system of the present invention offers higher precision; by following gaze-tracking methods and combining the relevant theory of face and eye detection with image processing means, it reduces production and application costs, freeing human hands to do more things conveniently.
The human-machine interaction system of the present invention can also help special populations with hand impairments to use computers or other intelligent machines, and help people engaged in computer-related work to leave the keyboard and mouse behind, avoiding conditions such as frozen shoulder caused by keyboard and computer use. The system can also serve as the terminal management system of a smart home environment: the user can abandon the remote control and control and operate any smart appliance in the home through eye movements.
The human-machine interaction system of the present invention captures pictures with multiple cameras to track the user's facial feature information in a fixed spatial environment and feed it into the system; it has highly complex image processing capability and multiple output functions. It tracks and calculates, in real time within the fixed spatial environment, the coordinate points of the user's pupils and inner and outer eye corners, determines the target of the user's visual attention through regular changes identified in those coordinates, and thus achieves real-time tracking of the user's gaze within the fixed space. On the basis of real-time tracking, the facial feature point database module and visual attention database module in the action feedback component compare the preset data they store against the real-time tracking results, determine the interaction target the user intends to reach and the interactive action the user intends to perform, and provide multiple output signals to complete the human-machine interaction.
The scope of protection of the present invention is not limited to the above embodiments. Obviously, those skilled in the art can make various changes and variations to the present invention without departing from its scope and spirit. If these changes and variations fall within the scope of the claims of the present invention and their technical equivalents, the present invention is intended to encompass them as well.

Claims (10)

1. a kind of real time human-machine interaction system based on Eye-controlling focus, it is characterised in that: analyzed including data acquisition components, data Processing component, movement feedback component and communication and manipulation component;
The data acquisition components are used to track the face feature information of user, and are converted into data-signal input system, together When two-dimensional coordinate system and three-dimensional system of coordinate established according to face feature information respectively;
The Data Analysis Services component is connect with data acquisition components, for dividing the information that data acquisition components are tracked Analysis processing;
The movement feedback component is connect with Data Analysis Services component, for the processing of comparative analysis Data Analysis Services component The information stored in information and movement feedback component, and obtain feedback information and feed back to communication and manipulation component, it is logical to determine The control action of letter and manipulation component;
the communication and control component is connected to the action feedback component; it receives the feedback information provided by the action feedback component and controls the actions of the relevant devices according to that feedback information.
2. The real-time human-machine interaction system based on gaze tracking according to claim 1, characterized in that: the data acquisition component comprises multiple cameras arranged at different positions in the same space, so that they can capture image information of the same spatial environment from different angles.
3. The real-time human-machine interaction system based on gaze tracking according to claim 1, characterized in that: the data analysis and processing component comprises an image processing module and a facial feature point location module, wherein
the image processing module pre-processes every frame of image in real time;
the facial feature point location module identifies, in the pre-processed images, the user's face, the coordinate point of the user's pupil, and the inner and outer eye corners, and stores the video stream composed of successive frames; from the video stream it identifies changes in the user's facial feature points and in the user's visual attention.
4. The real-time human-machine interaction system based on gaze tracking according to claim 3, characterized in that: the pre-processing applied by the image processing module to every frame includes, but is not limited to: synthesizing multiple images into a panoramic picture, grayscale conversion, Gaussian blur, and binarization, in preparation for subsequent recognition.
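The pre-processing chain named in this claim (grayscale conversion, Gaussian blur, binarization) can be sketched in plain Python on a tiny synthetic frame. A real implementation would use an image-processing library; the values-only version below only shows what each step computes, and the tiny 2x2 test frame is invented for illustration.

```python
# Illustrative sketch of the per-frame pre-processing named in the claim:
# grayscale conversion, 3x3 Gaussian blur, and binarization.

GAUSS_3X3 = [[1, 2, 1], [2, 4, 2], [1, 2, 1]]  # kernel weights, sum = 16

def to_gray(frame):
    # Integer ITU-R BT.601 luma weighting on (r, g, b) pixels
    return [[(299 * r + 587 * g + 114 * b) // 1000 for (r, g, b) in row]
            for row in frame]

def gaussian_blur(gray):
    h, w = len(gray), len(gray[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc = 0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    yy = min(max(y + dy, 0), h - 1)  # clamp at the borders
                    xx = min(max(x + dx, 0), w - 1)
                    acc += gray[yy][xx] * GAUSS_3X3[dy + 1][dx + 1]
            out[y][x] = acc // 16
    return out

def binarize(gray, threshold=128):
    return [[255 if v >= threshold else 0 for v in row] for row in gray]

frame = [[(200, 200, 200), (10, 10, 10)],
         [(10, 10, 10), (200, 200, 200)]]
print(binarize(gaussian_blur(to_gray(frame))))  # → [[255, 0], [0, 255]]
```

The panorama-synthesis step the claim also names is omitted here, since it requires multi-camera geometry that the sketch cannot reproduce.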
5. The real-time human-machine interaction system based on gaze tracking according to claim 3, characterized in that: multiple algorithms are provided in the facial feature point location module for identifying changes in the user's facial feature points and visual attention, achieving real-time tracking of the target of the user's visual attention.
6. The real-time human-machine interaction system based on gaze tracking according to claim 1, characterized in that: the action feedback component comprises a facial feature point database module and a visual attention database module, wherein the facial feature point database module stores a series of user-preset facial feature actions with defined meanings, and the visual attention database module stores a series of user-preset special visual attention points within a fixed spatial environment, each special attention point being stored as a three-dimensional coordinate point together with the absolute orientation angle of the face in the fixed spatial environment.
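The two databases this claim names could be represented as shown below: preset facial gestures mapped to commands, and preset attention points stored as a 3D coordinate plus an absolute face-orientation angle. All entries, tolerances, and command names here are invented for illustration; the patent does not specify a storage format or matching rule.

```python
# Hypothetical sketch of the facial feature point database and the visual
# attention database, with a tolerance-based lookup against tracked data.
import math

FACIAL_GESTURES = {"double_blink": "confirm", "long_close": "cancel"}

ATTENTION_POINTS = [
    # (target name, (x, y, z) in metres, absolute face yaw in degrees)
    ("television", (2.0, 1.0, 0.5), 30.0),
    ("lamp", (0.5, 2.0, 1.8), 115.0),
]

def match_attention(gaze_xyz, face_yaw, max_dist=0.3, max_angle=10.0):
    """Return the preset target closest to the tracked gaze point, if any."""
    for name, xyz, yaw in ATTENTION_POINTS:
        if math.dist(gaze_xyz, xyz) <= max_dist and abs(face_yaw - yaw) <= max_angle:
            return name
    return None

print(match_attention((2.1, 0.9, 0.5), 27.0))   # a near-miss still matches
print(FACIAL_GESTURES.get("double_blink"))
```

The tolerance parameters stand in for the comparison step the action feedback component performs; real values would depend on camera resolution and room geometry.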
7. The real-time human-machine interaction system based on gaze tracking according to claim 1, characterized in that: the communication and control component comprises any one or more of a Wi-Fi module, a Bluetooth communication module, and an infrared transmission module.
8. A working method of the real-time human-machine interaction system based on gaze tracking according to any one of the preceding claims, characterized by comprising the following steps:
S1: data acquisition and coordinate system establishment: the data acquisition component tracks the user's facial feature information and converts it into data signals; from the facial feature information it establishes a two-dimensional coordinate system referenced to the plane of the user's face and a three-dimensional coordinate system referenced to the position of the spatial environment in which the user is located;
S2: data analysis: the data analysis and processing component analyses and processes the information tracked by the data acquisition component;
S3: comparison and decision: the action feedback component compares the information processed by the data analysis and processing component with the information stored in the action feedback component, and outputs different results to the communication and control component according to the comparison result;
S4: according to the specific action input by the action feedback component, the communication and control component controls the relevant device to perform the corresponding operation.
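Steps S1 through S4 form a straight pipeline, which can be sketched as four stubs chained together. Every value, landmark coordinate, and command name below is a placeholder standing in for the corresponding component, not an implementation the patent describes.

```python
# Minimal control-flow sketch of method steps S1-S4, with each stage
# standing in for one of the claimed components.

def acquire():            # S1: capture frames, build coordinate systems
    return {"pupil": (52, 31), "corners": ((40, 30), (70, 30))}

def analyse(raw):         # S2: image processing + feature point location
    (inner_x, _), (outer_x, _) = raw["corners"]
    return {"gaze_ratio": (raw["pupil"][0] - inner_x) / (outer_x - inner_x)}

def compare(features):    # S3: match against the stored preset databases
    return "select_left" if features["gaze_ratio"] < 0.5 else "select_right"

def actuate(command):     # S4: forward the command over Wi-Fi/Bluetooth/IR
    return f"sent:{command}"

print(actuate(compare(analyse(acquire()))))  # → sent:select_left
```

The point of the sketch is the data flow: each step consumes exactly what the previous one produces, matching the component connections in claim 1.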
9. The working method of the real-time human-machine interaction system based on gaze tracking according to claim 8, characterized in that: in step S1, the data acquisition component uses multiple cameras arranged at different positions in the same space to shoot spatial environment information pictures from multiple different angles, which serve as the facial feature information.
10. The working method of the real-time human-machine interaction system based on gaze tracking according to claim 9, characterized in that step S2 comprises:
S21: the image processing module in the data analysis and processing component synthesizes, by an algorithm, the spatial environment information pictures shot from multiple different angles into a single panoramic picture of the spatial environment, and applies the corresponding picture pre-processing;
S22: the facial feature point location module in the data analysis and processing component performs facial feature point location on the panoramic picture synthesized in S21, and stores the result temporarily in a cache.
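A very reduced picture of S21 and S22: equally sized camera views joined side by side into one "panorama", and recent landmark results held in a bounded temporary cache. Real panorama synthesis needs feature matching and blending across overlapping views; this sketch only concatenates, and the cache size is an invented figure.

```python
# Illustrative sketch of S21 (panorama synthesis, here mere concatenation
# of same-height views) and S22 (temporary caching of landmark results).
from collections import deque

def synthesize_panorama(views):
    """Join equally sized camera views row by row into one wide image."""
    return [sum((v[y] for v in views), []) for y in range(len(views[0]))]

landmark_cache = deque(maxlen=30)   # roughly one second of results at 30 fps

left = [[1, 2], [3, 4]]
right = [[5, 6], [7, 8]]
print(synthesize_panorama([left, right]))  # → [[1, 2, 5, 6], [3, 4, 7, 8]]
landmark_cache.append({"frame": 0, "pupil": (1, 1)})
```

The bounded deque mirrors the claim's "temporary" cache: once full, the oldest frame's result is discarded automatically.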
CN201810026389.5A 2018-01-11 2018-01-11 A kind of real time human-machine interaction system and its working method based on Eye-controlling focus Pending CN110032268A (en)

Publications (1)

Publication Number Publication Date
CN110032268A true CN110032268A (en) 2019-07-19



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20190719