CN105893993A - Intelligent glasses - Google Patents

Intelligent glasses

Info

Publication number
CN105893993A
CN105893993A CN201610403269.3A
Authority
CN
China
Prior art keywords
intelligent glasses
module
user interface
user
present user
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201610403269.3A
Other languages
Chinese (zh)
Inventor
郑文辉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Chuanglongzhixin Technology Co Ltd
Original Assignee
Shenzhen Chuanglongzhixin Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Chuanglongzhixin Technology Co Ltd filed Critical Shenzhen Chuanglongzhixin Technology Co Ltd
Priority to CN201610403269.3A priority Critical patent/CN105893993A/en
Publication of CN105893993A publication Critical patent/CN105893993A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G PHYSICS
    • G02 OPTICS
    • G02C SPECTACLES; SUNGLASSES OR GOGGLES INSOFAR AS THEY HAVE THE SAME FEATURES AS SPECTACLES; CONTACT LENSES
    • G02C11/00 Non-optical adjuncts; Attachment thereof
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00 Handling natural language data
    • G06F40/40 Processing or translation of natural language
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00 Handling natural language data
    • G06F40/40 Processing or translation of natural language
    • G06F40/58 Use of machine translation, e.g. for multi-lingual retrieval, for server-side translation for client devices or for real-time translation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00 Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/40 Document-oriented image-based pattern recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computational Linguistics (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • General Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Optics & Photonics (AREA)
  • Ophthalmology & Optometry (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention relates to intelligent glasses. The intelligent glasses acquire an environment image through an image acquisition module; a detection module extracts the faces and text in the environment image; an identification module recognizes the faces and/or text to extract the identity information corresponding to each face and the semantic information corresponding to the text; and an output module plays the identity information and semantic information to a designated user. The user can therefore obtain, through the intelligent glasses, the identity information of the people and the semantic information of the text appearing in the current environment, which helps the user understand the current environment promptly. A touch module lets the user slide left or right to switch the user interface, and single-tap or double-tap to zoom the user interface.

Description

Intelligent glasses
Technical field
The present invention relates to intelligent glasses, and in particular to intelligent glasses suitable for helping a user obtain environmental information.
Background technology
In daily life, people often meet someone they have met before but cannot accurately recall that person's identity information. This situation can have a negative effect on social or business activities; conversely, if a person could promptly obtain the other person's name, he or she would be more relaxed and confident in social or business settings. In addition, when a person meets a stranger whom a friend may recognize, how to obtain the stranger's information through that friend is also a problem to be solved.
Furthermore, in social activities, business, and personal travel, people encounter many languages of different countries, for example menus, price lists, and contracts in English, German, Japanese, or Korean, as well as airport, railway-station, and subway announcements in foreign languages. This is inconvenient for people who are not proficient in multiple languages.
Summary of the invention
Based on this, it is necessary to provide intelligent glasses suitable for helping a user obtain environmental information.
Intelligent glasses include a glasses body, the glasses body including a spectacle frame and lenses, and further include an image acquisition module, a detection module, an identification module, and an output module, the image acquisition module, the detection module, the identification module, and the output module all being arranged on the glasses body.
The image acquisition module is configured to acquire an environment image.
The detection module is configured to receive the environment image and to extract the faces and text in the environment image.
The identification module is configured to recognize the faces and/or text and to extract the identity information corresponding to each face and the semantic information corresponding to the text.
The output module is arranged on the glasses body and is configured to play, to a designated user, the identity information, the semantic information, or the identity information together with the semantic information, and at least one of the following:
the face;
the text;
the environment image.
In one embodiment, the glasses further include a touch module connected to the output module. The touch module is configured to acquire the user's touch commands, which include a predetermined-gesture slide command and a tap command;
if the contact detected by the detection module corresponds to the predetermined-gesture slide command, the output module switches the current user interface of the intelligent glasses to an adjacent user interface; and
if the contact detected by the detection module does not correspond to the predetermined-gesture slide command, the output module keeps the intelligent glasses at the current user interface; and/or
if the contact detected by the detection module corresponds to the tap command, the output module zooms the current user interface of the intelligent glasses; and
if the contact detected by the detection module does not correspond to the tap command, the output module keeps the intelligent glasses at the current user interface.
In one embodiment, the detection module is further configured to detect the motion of an object near the touch module;
the output module is configured to play, to the user, the current user interface of the intelligent glasses translating in a first direction in response to detection of the motion;
if the edge of the intelligent glasses is reached while the current user interface is translated in the first direction, and the object is still detected near the touch module, the current user interface shows an area beyond the edge displayed by the intelligent glasses; and
after the object is no longer detected near the touch module, the output module plays, to the user, the current user interface translating in a second direction until the area beyond the edge displayed by the intelligent glasses is no longer shown.
In one embodiment, the output module is configured to play, to the user, the change from translation in the first direction to translation in the second direction until the area beyond the edge displayed by the intelligent glasses is no longer shown, so that the edge of the current user interface appears to be elastically attached to the edge of the intelligent glasses.
In one embodiment, the output module is configured so that, before the current user interface reaches the edge of the area displayed by the intelligent glasses, the translation of the current user interface in the first direction has a first associated translation distance corresponding to the movement distance of the object before the edge of the intelligent glasses is reached, and the area shown beyond the edge of the intelligent glasses corresponds to translating the current user interface in the first direction by a second associated translation distance, wherein the second associated translation distance is less than the movement distance of the object after the edge of the current user interface is reached.
In one embodiment, the output module is configured so that, before the edge displayed by the intelligent glasses is reached, the translation played to the user in the first direction has a first associated translation speed corresponding to the movement speed of the object, and the area shown beyond the edge of the current user interface is translated in the first direction at a second associated translation speed, wherein the second associated translation speed is slower than the first associated translation speed.
In one embodiment, the spectacle frame is of a behind-the-ear (ear-hook) type.
In one embodiment, the lenses are movably arranged on the spectacle frame, and the position of the lenses can be adjusted within a preset range.
In one embodiment, the glasses further include a voice module configured to receive the user's voice input signal and convert the voice input signal into text.
In one embodiment, the glasses further include a display module, the display module being an LED display screen, a QLED display screen, or an OLED display screen.
The intelligent glasses described above acquire an environment image through the image acquisition module; the detection module extracts the faces and text in the environment image; the identification module recognizes the faces and/or text to extract the identity information corresponding to each face and the semantic information corresponding to the text; and the output module plays the identity information and semantic information to a designated user. The user can therefore obtain, through the intelligent glasses, the identity information of the people and the semantic information of the text appearing in the current environment, which helps the user understand the current environment promptly.
The touch module lets the user slide left or right to switch the user interface, and single-tap or double-tap to zoom the user interface.
Brief description of the drawings
Fig. 1 is a block diagram of the intelligent glasses.
Detailed description of the invention
To facilitate understanding of the present invention, the invention is described more fully below with reference to the accompanying drawings, which show preferred embodiments of the invention. The invention may, however, be embodied in many different forms and is not limited to the embodiments described herein; rather, these embodiments are provided so that the understanding of the disclosure will be thorough and complete.
It should be noted that when an element is said to be "fixed to" another element, it can be directly on the other element or intervening elements may be present. When an element is said to be "connected to" another element, it can be directly connected to the other element or intervening elements may be present. The terms "vertical", "horizontal", "left", "right", and similar expressions used herein are for illustrative purposes only.
Unless otherwise defined, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The terms used in the description of the invention are intended only to describe specific embodiments and are not intended to limit the invention. The term "and/or" as used herein includes any and all combinations of one or more of the associated listed items.
As shown in Fig. 1, a block diagram of the intelligent glasses is provided.
The intelligent glasses include a glasses body, the glasses body including a spectacle frame and lenses, and further include an image acquisition module 102, a detection module 104, an identification module 106, and an output module 108, the image acquisition module 102, the detection module 104, the identification module 106, and the output module 108 all being arranged on the glasses body.
The image acquisition module 102 is configured to acquire an environment image.
The detection module 104 is configured to receive the environment image and to extract the faces and text in the environment image.
The identification module 106 is configured to recognize the faces and/or text and to extract the identity information corresponding to each face and the semantic information corresponding to the text.
The output module 108 is arranged on the glasses body and is configured to play, to a designated user, the identity information, the semantic information, or the identity information together with the semantic information, and at least one of the following:
the face;
the text;
the environment image.
The image acquisition module 102 acquires the environment image and may simply be a camera, or a device that includes a camera. The detection module 104, the identification module 106, and the output module 108 are, like the image acquisition module 102, arranged on the glasses body. In other embodiments, the detection module 104 and the identification module 106 need not be attached to the glasses body; that is, the detection module 104 and the identification module 106 may exist outside the glasses body.
The detection module 104 is communicatively connected to the image acquisition module 102; it obtains the environment image and detects and extracts the faces and text in the environment image.
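As an illustration only, the detection step could be implemented with an off-the-shelf face detector and an OCR engine. The patent does not name any particular algorithm or library, so the libraries, model file, and parameters in the following minimal sketch are assumptions.

```python
# Illustrative sketch of the detection module (module 104). The patent does not
# name a detector or OCR engine; OpenCV's Haar cascade and Tesseract (via
# pytesseract) are used here purely as stand-ins.
import cv2
import pytesseract


def detect_faces_and_text(environment_image_bgr):
    """Extract face crops and raw text from an environment image."""
    gray = cv2.cvtColor(environment_image_bgr, cv2.COLOR_BGR2GRAY)

    # Face detection with a bundled Haar cascade model.
    cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    face_detector = cv2.CascadeClassifier(cascade_path)
    face_boxes = face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    face_crops = [environment_image_bgr[y:y + h, x:x + w] for (x, y, w, h) in face_boxes]

    # Text extraction over the whole frame; a production system would first
    # localize text regions and pass only those to the OCR engine.
    raw_text = pytesseract.image_to_string(gray)

    return face_crops, raw_text
```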
The identification module 106 is communicatively connected to the detection module 104 and recognizes the faces and text to obtain the identity information corresponding to each face and the semantic information corresponding to the text. The output module 108 is communicatively connected to the identification module 106 and can play the identity information and the semantic information, each individually or combined with at least one of the items listed above, to the designated user.
That is, when a face is recognized, the information output by the output module 108 necessarily includes the identity information and may additionally include the face image and the environment image. When text is translated, the information output by the output module 108 necessarily includes the native-language semantic information and may additionally include the image of the foreign-language text, the environment image, or a combination thereof; the ways of combining identity information with the face image and the environment image described hereafter apply equally to translation, with the face image replaced by the foreign-text image and the identity information replaced by the native-language information, and will not be repeated below. For example, when only one stranger is recognized, or an acquaintance whose name has been forgotten, the identity information alone can simply be provided to the user. When a large number of people are recognized, the identity information needs to be combined with the face images and the environment image to help the user avoid confusion, as described in more detail below.
According to the present invention, when the user encounters someone he or she knows but whose name cannot be recalled, the image acquisition module 102 is actively controlled by the user to photograph the surroundings and obtain an environment image containing that person's head; the detection module 104 and the identification module 106 recognize the person and extract identity information including the name; and the identity information is delivered to the user, improving the convenience of social or business activities.
When meeting a stranger, the image acquisition module 102 of a first user is controlled by that user to actively photograph the surroundings and obtain an environment image containing the stranger's head, and the environment image is sent to a second user. The second user passively receives the transmitted environment image; the detection module 104 and the identification module 106 recognize the stranger, and the identity information including the name is fed back to the first user, improving the convenience of social activities. It can be seen that the user may be the wearer of the intelligent glasses to which the image acquisition module 102 belongs, or the wearer of another pair of intelligent glasses.
For the wearer of the intelligent glasses to which the image acquisition module 102 belongs, the image acquisition module 102 is controlled by that user to actively photograph the surroundings and obtain the environment image. For the wearer of another pair of intelligent glasses, the image acquisition module 102 passively receives the environment image transmitted by the user of the other intelligent glasses.
As described above, when the number of recognized people is small, a simple mode such as text alone can be played to the user; when the number of recognized people is larger, a combination of voice, images, video, and text can be used to prompt the user.
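The passage above describes matching detected faces against known identities and choosing how much context (identity text alone, or identity text plus face images and the environment image) to play back depending on how many people were recognized. A minimal sketch of that decision logic follows; the embedding function, the 0.6 similarity threshold, and the gallery format are assumptions and not part of the patent.

```python
# Illustrative sketch of the identification/output decision described above.
# `embed_face` stands in for any face-embedding model; the threshold and the
# gallery structure (name -> embedding vector) are assumptions.
import numpy as np


def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def identify(face_crops, gallery, embed_face, threshold=0.6):
    """Match each face crop against a gallery of known identities."""
    identities = []
    for crop in face_crops:
        query = embed_face(crop)
        name, score = max(
            ((n, cosine_similarity(query, e)) for n, e in gallery.items()),
            key=lambda item: item[1],
        )
        identities.append(name if score >= threshold else "unknown")
    return identities


def build_output(identities, face_crops, environment_image):
    """Play identity info alone for one person, richer context for a crowd."""
    if len(identities) <= 1:
        return {"identity_info": identities}           # simple text/voice prompt
    return {                                           # combine to avoid confusion
        "identity_info": identities,
        "face_images": face_crops,
        "environment_image": environment_image,
    }
```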
In one embodiment, the intelligent glasses further include a touch module connected to the output module 108. The touch module is configured to acquire the user's touch commands, which include a predetermined-gesture slide command and a tap command;
if the contact detected by the detection module 104 corresponds to the predetermined-gesture slide command, the output module 108 switches the current user interface of the intelligent glasses to an adjacent user interface; and
if the contact detected by the detection module 104 does not correspond to the predetermined-gesture slide command, the intelligent glasses are kept at the current user interface; and/or
if the contact detected by the detection module 104 corresponds to the tap command, the current user interface of the intelligent glasses is zoomed; and
if the contact detected by the detection module 104 does not correspond to the tap command, the intelligent glasses are kept at the current user interface.
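A minimal sketch of the touch-command dispatch just described: a predetermined-gesture slide switches to the adjacent user interface, a tap (or double tap) zooms, and any other contact leaves the current user interface unchanged. The event fields, gesture names, and zoom factors below are assumptions for illustration.

```python
# Illustrative sketch of the touch-command dispatch for this embodiment.
# The TouchEvent fields, gesture names, and zoom factor are assumptions.
from dataclasses import dataclass


@dataclass
class TouchEvent:
    kind: str        # "slide", "tap", "double_tap", or anything else
    dx: float = 0.0  # horizontal displacement of the slide, in pixels


class UserInterfaceController:
    def __init__(self, interfaces):
        self.interfaces = interfaces   # ordered list of user-interface ids
        self.index = 0                 # current user interface
        self.zoom = 1.0

    def handle(self, event: TouchEvent):
        if event.kind == "slide":
            # Predetermined-gesture slide: switch to the adjacent interface.
            step = 1 if event.dx < 0 else -1
            self.index = max(0, min(len(self.interfaces) - 1, self.index + step))
        elif event.kind == "tap":
            self.zoom *= 1.25          # zoom in on the current interface
        elif event.kind == "double_tap":
            self.zoom = 1.0            # reset zoom
        # Any other contact: remain at the current user interface unchanged.
        return self.interfaces[self.index], self.zoom
```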
The intelligent glasses can have a plurality of user interfaces. A user-interface state is a state in which the intelligent glasses respond to user input, or produce output, in a predefined manner. In certain embodiments, the plurality of user-interface states includes states for a plurality of applications. In the current user interface, the intelligent glasses are powered on and operable but ignore most, if not all, user input; in other words, the intelligent glasses take no action in response to user input and/or do not perform a predetermined set of operations in response to user input. The predetermined set of operations may include navigation between user interfaces and activation or deactivation of a predetermined set of functions. The current user interface can thus be used to prevent unintentional or unauthorized use of the intelligent glasses, or unintentional activation or deactivation of functions on the intelligent glasses. When the intelligent glasses are in the current user interface, they may be said to be locked. In certain embodiments, the locked intelligent glasses respond to a limited set of user inputs, including inputs corresponding to an attempt to switch the intelligent glasses to an adjacent user interface and inputs corresponding to powering the intelligent glasses off. In other words, the locked intelligent glasses respond to user inputs corresponding to an attempt to switch to an adjacent user interface and to user inputs corresponding to powering the intelligent glasses off, but do not respond to user inputs corresponding to attempts to navigate between user interfaces. Even when the intelligent glasses ignore a user input, they may still provide the user with sensory feedback (such as visual, audible, or vibration feedback) indicating that the input was detected but will be ignored.
In embodiments in which the intelligent glasses include a touch screen, while the intelligent glasses are locked the predetermined set of operations, such as navigation between user interfaces, cannot be performed in response to contact with the touch screen. In other words, when contact with the locked intelligent glasses is ignored, the touch screen may be said to be locked. The locked intelligent glasses may, however, still respond to a limited class of contacts on the touch screen, the limited class including contacts corresponding to an attempt to switch the intelligent glasses to an adjacent user interface.
After the user interface is unlocked, the intelligent glasses are in their normal operating state, detecting and responding to the user inputs that correspond to interaction with the user interface. Intelligent glasses in the unlocked state may be described as unlocked intelligent glasses. The unlocked intelligent glasses detect and respond to user inputs for navigating between user interfaces, entering data, and activating or deactivating functions. In embodiments in which the intelligent glasses include a touch screen, the unlocked intelligent glasses detect and respond to the contacts, performed through the touch screen, that correspond to navigation between user interfaces, data entry, and activation or deactivation of functions.
The intelligent glasses are then switched to the unlocked state. As used herein, switching from one state to another refers to the process of going from one state to the other. The process may be, as perceived by the user, instantaneous, nearly instantaneous, gradual, or at any suitable rate. Once initiated, the progress of the process may be controlled automatically by the intelligent glasses, independently of the user, or it may be controlled by the user.
The intelligent glasses may be set to the locked state. The intelligent glasses may be set to (that is, switched completely from any other state into) the locked state once any one of one or more locking conditions is met. The locking conditions may include events such as the elapse of a predetermined period of inactivity, entry into an active call, or powering on the device. The locking conditions may also include user intervention, namely the user locking the intelligent glasses through a predetermined user input. In some embodiments, the user may be allowed to specify which events serve as locking conditions. For example, the user may configure the intelligent glasses to switch to the locked state after a predetermined period of inactivity, but not to enter the locked state when the intelligent glasses are powered on.
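A small sketch of the configurable locking conditions described above; the condition names and the 60-second inactivity timeout are assumptions chosen only for illustration.

```python
# Illustrative sketch of configurable locking conditions. Condition names and
# the 60-second inactivity timeout are assumptions for illustration.
import time


class LockController:
    def __init__(self, inactivity_timeout=60.0,
                 lock_on_inactivity=True, lock_on_power_on=False):
        self.inactivity_timeout = inactivity_timeout
        self.lock_on_inactivity = lock_on_inactivity   # user-configurable
        self.lock_on_power_on = lock_on_power_on       # user-configurable
        self.locked = False
        self.last_activity = time.monotonic()

    def note_activity(self):
        self.last_activity = time.monotonic()

    def on_power_on(self):
        if self.lock_on_power_on:
            self.locked = True

    def on_user_lock_input(self):
        self.locked = True                             # explicit user intervention

    def tick(self):
        # Called periodically; locks after the configured idle period.
        idle = time.monotonic() - self.last_activity
        if self.lock_on_inactivity and idle >= self.inactivity_timeout:
            self.locked = True
```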
In certain embodiments, the locked intelligent glasses can display on the touch screen one or more visual cues of the unlock action that the user may perform to unlock the intelligent glasses. The one or more visual cues provide the user with hints or reminders of the unlock action. The visual cues may be text, graphics, or any combination thereof. In certain embodiments, the visual cues are displayed when particular events occur while the intelligent glasses are locked. The particular events that trigger the display of the visual cues may include receiving a message, or other events that may require the user's attention.
In certain embodiments, the visual cues may also be displayed upon particular user inputs, such as the user interacting with a menu button, the user contacting the locked touch screen, and/or the user interacting with any other input or control device. When the visual cues are not displayed, the locked intelligent glasses may power the touch screen off (which helps conserve power) or display other objects on the touch screen, such as a screen saver or information that may be of interest to the user (for example, remaining battery charge, date and time, and network signal strength).
The unlock action includes contact with the touch screen. In certain embodiments, the unlock action is a predetermined gesture performed on the touch screen. As used herein, a gesture is a motion of an object in contact with the touch screen. For example, the predetermined gesture may include contacting the touch screen at its left edge (to initialize the gesture), moving the point of contact horizontally to the opposite edge while maintaining continuous contact with the touch screen, and breaking contact at the opposite edge (to complete the gesture).
While the touch screen is locked, the user may initiate contact with the touch screen, i.e., touch it. For ease of explanation, contacts on the touch screen are described below as being performed by the user using at least one hand and one or more fingers; it should be appreciated, however, that the contact may be made with any suitable object or body part, such as a stylus or a finger. The contact may include one or more taps on the touch screen, maintaining continuous contact with the touch screen, moving the point of contact while maintaining continuous contact, breaking the contact, or any combination thereof. The intelligent glasses detect the contact on the touch screen. If the contact does not correspond to an attempt to perform the unlock action, or if the contact corresponds to a failed or abandoned attempt by the user to perform the unlock action, the intelligent glasses remain locked. For example, if the unlock action is moving the point of contact horizontally across the touch screen while maintaining continuous contact, and the detected contact is a series of random taps on the touch screen, the device remains locked because the contact does not correspond to the unlock action. If the contact corresponds to a successful performance of the unlock action, i.e., the user performs the unlock action successfully, the intelligent glasses switch to the unlocked state. For example, if the unlock action is moving the point of contact horizontally across the touch screen while maintaining continuous contact, and the detected contact is a horizontal movement made while continuous contact is maintained, the intelligent glasses switch to the unlocked state.
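As a sketch of the slide-to-unlock check just described: contact must begin near the left edge, remain continuous, and travel horizontally across the screen, while random taps leave the glasses locked. The normalized coordinate scale and the edge tolerances are assumptions.

```python
# Illustrative sketch of the unlock-gesture check described above. Screen
# coordinates are normalized to [0, 1]; the 5% edge tolerance is an assumption.
def is_unlock_gesture(contact_points, lifted_between_points):
    """Return True if the contact trace is a continuous left-to-right swipe.

    contact_points: list of (x, y) samples of the contact, in order.
    lifted_between_points: True if contact was broken at any time mid-trace.
    """
    if lifted_between_points or len(contact_points) < 2:
        return False                      # taps or interrupted traces stay locked
    start_x, _ = contact_points[0]
    end_x, _ = contact_points[-1]
    starts_at_left_edge = start_x <= 0.05
    ends_at_right_edge = end_x >= 0.95
    moves_rightward = all(
        b[0] >= a[0] - 0.02               # allow tiny jitter while moving right
        for a, b in zip(contact_points, contact_points[1:])
    )
    return starts_at_left_edge and ends_at_right_edge and moves_rightward


# Example: a continuous left-to-right swipe unlocks; an isolated tap does not.
swipe = [(0.02, 0.5), (0.3, 0.5), (0.6, 0.5), (0.97, 0.5)]
assert is_unlock_gesture(swipe, lifted_between_points=False)
assert not is_unlock_gesture([(0.4, 0.5)], lifted_between_points=True)
```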
The detailed process of controlling the current user interface of the intelligent glasses once it has been zoomed by the tap command is as follows:
The detection module 104 is further configured to detect the motion of an object near the touch module;
the output module 108 is configured to play, to the user, the current user interface of the intelligent glasses translating in a first direction in response to detection of the motion;
if the edge of the intelligent glasses is reached while the current user interface is translated in the first direction, and the object is still detected near the touch module, the current user interface shows an area beyond the edge displayed by the intelligent glasses; and
after the object is no longer detected near the touch module, the output module 108 plays, to the user, the current user interface translating in a second direction until the area beyond the edge displayed by the intelligent glasses is no longer shown.
The output module 108 plays, to the user, the change from translation in the first direction to translation in the second direction until the area beyond the edge displayed by the intelligent glasses is no longer shown, so that the edge of the current user interface appears to be elastically attached to the edge of the intelligent glasses.
The output module 108 plays, to the user, before the current user interface reaches the edge of the area displayed by the intelligent glasses, a translation of the current user interface in the first direction having a first associated translation distance corresponding to the movement distance of the object before the edge of the intelligent glasses is reached; and the area shown beyond the edge of the intelligent glasses corresponds to translating the current user interface in the first direction by a second associated translation distance, wherein the second associated translation distance is less than the movement distance of the object after the edge of the current user interface is reached.
The output module 108 plays, to the user, before the edge displayed by the intelligent glasses is reached, a translation in the first direction having a first associated translation speed corresponding to the movement speed of the object; and the area shown beyond the edge of the current user interface is translated in the first direction at a second associated translation speed, wherein the second associated translation speed is slower than the first associated translation speed.
Therefore, the intelligent glasses allow the user to slide left or right between the current user interface and an adjacent user interface by touch, or to zoom the user interface by single-tapping or double-tapping.
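As a sketch of the edge behavior described above: while the content edge has not been reached, the interface translates one-to-one with the object's movement; beyond the edge it translates by a smaller "second associated" amount, and once the object lifts it springs back so the edge appears elastically attached. The 0.3 damping factor and the spring-back rate are assumed values, not taken from the patent.

```python
# Illustrative sketch of the overscroll ("rubber band") behavior described
# above. The 0.3 damping factor and the spring-back rate are assumptions.
def pan_position(current, finger_delta, content_edge):
    """Translate with the finger; damp the part that goes past the edge."""
    proposed = current + finger_delta
    if proposed <= content_edge:
        return proposed                        # first associated distance: 1:1
    overshoot = proposed - content_edge
    return content_edge + 0.3 * overshoot      # second associated distance: damped


def spring_back(position, content_edge, rate=0.5):
    """After contact ends, translate back until no area beyond the edge shows."""
    while position > content_edge + 1e-3:
        position = content_edge + (position - content_edge) * (1.0 - rate)
    return content_edge


# Example: dragging 100 px past the edge shows only 30 px beyond it,
# and the view settles back on the edge once contact ends.
pos = pan_position(current=190.0, finger_delta=110.0, content_edge=200.0)
assert abs(pos - 230.0) < 1e-6
assert spring_back(pos, content_edge=200.0) == 200.0
```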
The spectacle frame is of a behind-the-ear (ear-hook) type, which makes the profile of the intelligent glasses comfortable to wear and improves the user experience.
The lenses are movably arranged on the spectacle frame, and the position of the lenses can be adjusted within a preset range. The position can therefore be adjusted to the needs of the user, conveniently and simply matching each user's eye position.
The intelligent glasses further include a voice module configured to receive the user's voice input signal and convert the voice input signal into text. The voice module uses highly sensitive speech capture so that voice input is more accurate.
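The voice module converts the user's speech into text. A minimal sketch using the third-party SpeechRecognition package is shown below purely as an illustration; the patent does not specify any speech engine, so the recognizer, language code, and microphone source are assumptions.

```python
# Illustrative sketch of the voice module: capture speech and convert it to
# text. The SpeechRecognition package and Google Web Speech API are stand-ins;
# the patent does not name an engine.
import speech_recognition as sr


def voice_to_text(language="zh-CN"):
    recognizer = sr.Recognizer()
    with sr.Microphone() as source:
        recognizer.adjust_for_ambient_noise(source)   # improve capture accuracy
        audio = recognizer.listen(source)
    try:
        return recognizer.recognize_google(audio, language=language)
    except sr.UnknownValueError:
        return ""                                      # speech was unintelligible
```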
The intelligent glasses further include a display module, which is an LED display screen, a QLED display screen, or an OLED display screen. These display screens make the display of the intelligent glasses brighter and more colorful and provide the user with a finer and smoother display picture.
The intelligent glasses can also be used independently, without being paired with other equipment.
The intelligent glasses use a long-endurance battery so that the user can use them for a longer time.
The intelligent glasses described above acquire an environment image through the image acquisition module 102; the detection module 104 extracts the faces and text in the environment image; the identification module 106 recognizes the faces and/or text to extract the identity information corresponding to each face and the semantic information corresponding to the text; and the output module 108 plays the identity information and semantic information to a designated user. The user can therefore obtain, through the intelligent glasses, the identity information of the people and the semantic information of the text appearing in the current environment, which helps the user understand the current environment promptly.
The technical features of the embodiments described above may be combined arbitrarily. For brevity, not all possible combinations of the technical features of the above embodiments have been described; however, any combination of these technical features that contains no contradiction should be considered within the scope of this specification.
The embodiments described above express only several embodiments of the present invention and are described in relative detail, but they should not be construed as limiting the scope of the patent. It should be pointed out that a person of ordinary skill in the art may make several variations and improvements without departing from the concept of the invention, all of which fall within the protection scope of the invention. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (10)

1. Intelligent glasses, comprising a glasses body, the glasses body including a spectacle frame and lenses, characterized by further comprising: an image acquisition module, a detection module, an identification module, and an output module, the image acquisition module, the detection module, the identification module, and the output module all being arranged on the glasses body;
the image acquisition module is configured to acquire an environment image;
the detection module is configured to receive the environment image and to extract the faces and text in the environment image;
the identification module is configured to recognize the faces and/or text and to extract the identity information corresponding to each face and the semantic information corresponding to the text;
the output module is arranged on the glasses body and is configured to play, to a designated user, the identity information, the semantic information, or the identity information together with the semantic information, and at least one of the following:
the face;
the text;
the environment image.
2. The intelligent glasses according to claim 1, characterized by further comprising a touch module connected to the output module, the touch module being configured to acquire the user's touch commands, the touch commands including a predetermined-gesture slide command and a tap command;
if the contact detected by the detection module corresponds to the predetermined-gesture slide command, the output module switches the current user interface of the intelligent glasses to an adjacent user interface; and
if the contact detected by the detection module does not correspond to the predetermined-gesture slide command, the output module keeps the intelligent glasses at the current user interface; and/or
if the contact detected by the detection module corresponds to the tap command, the output module zooms the current user interface of the intelligent glasses; and
if the contact detected by the detection module does not correspond to the tap command, the output module keeps the intelligent glasses at the current user interface.
3. The intelligent glasses according to claim 2, characterized in that the detection module is further configured to detect the motion of an object near the touch module;
the output module is configured to play, to the user, the current user interface of the intelligent glasses translating in a first direction in response to detection of the motion;
if the edge of the intelligent glasses is reached while the current user interface is translated in the first direction, and the object is still detected near the touch module, the current user interface shows an area beyond the edge displayed by the intelligent glasses; and
after the object is no longer detected near the touch module, the output module plays, to the user, the current user interface translating in a second direction until the area beyond the edge displayed by the intelligent glasses is no longer shown.
4. The intelligent glasses according to claim 2, characterized in that the output module is configured to play, to the user, the change from translation in the first direction to translation in the second direction until the area beyond the edge displayed by the intelligent glasses is no longer shown, so that the edge of the current user interface appears to be elastically attached to the edge of the intelligent glasses.
5. The intelligent glasses according to claim 2, characterized in that the output module is configured to play, to the user, before the current user interface reaches the edge of the area displayed by the intelligent glasses, a translation of the current user interface in the first direction having a first associated translation distance corresponding to the movement distance of the object before the edge of the intelligent glasses is reached, and the area shown beyond the edge of the intelligent glasses corresponds to translating the current user interface in the first direction by a second associated translation distance, wherein the second associated translation distance is less than the movement distance of the object after the edge of the current user interface is reached.
6. The intelligent glasses according to claim 2, characterized in that the output module is configured to play, to the user, before the edge displayed by the intelligent glasses is reached, a translation in the first direction having a first associated translation speed corresponding to the movement speed of the object, and the area shown beyond the edge of the current user interface is translated in the first direction at a second associated translation speed, wherein the second associated translation speed is slower than the first associated translation speed.
7. The intelligent glasses according to claim 1, characterized in that the spectacle frame is of a behind-the-ear (ear-hook) type.
8. The intelligent glasses according to claim 1, characterized in that the lenses are movably arranged on the spectacle frame, and the position of the lenses can be adjusted within a preset range.
9. The intelligent glasses according to claim 1, characterized by further comprising a voice module configured to receive the user's voice input signal and to convert the voice input signal into text.
10. The intelligent glasses according to claim 1, characterized by further comprising a display module, the display module being an LED display screen, a QLED display screen, or an OLED display screen.
CN201610403269.3A 2016-06-07 2016-06-07 Intelligent glasses Pending CN105893993A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610403269.3A CN105893993A (en) 2016-06-07 2016-06-07 Intelligent glasses

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610403269.3A CN105893993A (en) 2016-06-07 2016-06-07 Intelligent glasses

Publications (1)

Publication Number Publication Date
CN105893993A true CN105893993A (en) 2016-08-24

Family

ID=56711515

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610403269.3A Pending CN105893993A (en) 2016-06-07 2016-06-07 Intelligent glasses

Country Status (1)

Country Link
CN (1) CN105893993A (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106485220A (en) * 2016-10-11 2017-03-08 广州市和佳电子科技有限公司 Face identification method, the intelligent glasses with face identification functions and server
CN107463910A (en) * 2017-08-09 2017-12-12 中国电子科技集团公司第二十八研究所 Target identification augmented reality system based on GoogleGlass
CN107490880A (en) * 2017-09-19 2017-12-19 触景无限科技(北京)有限公司 Multi-functional glasses
CN108052935A (en) * 2018-01-30 2018-05-18 深圳智达机械技术有限公司 A kind of intelligent glasses with identification function
CN108694394A (en) * 2018-07-02 2018-10-23 北京分音塔科技有限公司 Translator, method, apparatus and the storage medium of recognition of face
CN109241900A (en) * 2018-08-30 2019-01-18 Oppo广东移动通信有限公司 Control method, device, storage medium and the wearable device of wearable device
CN109841013A (en) * 2019-01-16 2019-06-04 北京悠购智能科技有限公司 Methods, systems and devices for detecting unsettled commodities

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101542424A (en) * 2007-01-07 2009-09-23 苹果公司 List scrolling and document translation, scaling, and rotation on a touch-screen display
CN103941915A (en) * 2014-04-04 2014-07-23 百度在线网络技术(北京)有限公司 Intelligent glasses system and control method thereof
CN104238726A (en) * 2013-06-17 2014-12-24 腾讯科技(深圳)有限公司 Intelligent glasses control method, intelligent glasses control device and intelligent glasses
CN104597625A (en) * 2013-10-30 2015-05-06 富泰华工业(深圳)有限公司 Intelligent glasses
CN104880835A (en) * 2015-05-13 2015-09-02 浙江吉利控股集团有限公司 Intelligent glasses
CN204791062U (en) * 2015-07-17 2015-11-18 天津电眼科技有限公司 Face recognition system based on intelligence glasses

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101542424A (en) * 2007-01-07 2009-09-23 苹果公司 List scrolling and document translation, scaling, and rotation on a touch-screen display
CN104238726A (en) * 2013-06-17 2014-12-24 腾讯科技(深圳)有限公司 Intelligent glasses control method, intelligent glasses control device and intelligent glasses
CN104597625A (en) * 2013-10-30 2015-05-06 富泰华工业(深圳)有限公司 Intelligent glasses
CN103941915A (en) * 2014-04-04 2014-07-23 百度在线网络技术(北京)有限公司 Intelligent glasses system and control method thereof
CN104880835A (en) * 2015-05-13 2015-09-02 浙江吉利控股集团有限公司 Intelligent glasses
CN204791062U (en) * 2015-07-17 2015-11-18 天津电眼科技有限公司 Face recognition system based on intelligence glasses

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106485220A (en) * 2016-10-11 2017-03-08 广州市和佳电子科技有限公司 Face identification method, the intelligent glasses with face identification functions and server
CN107463910A (en) * 2017-08-09 2017-12-12 中国电子科技集团公司第二十八研究所 Target identification augmented reality system based on GoogleGlass
CN107490880A (en) * 2017-09-19 2017-12-19 触景无限科技(北京)有限公司 Multi-functional glasses
CN108052935A (en) * 2018-01-30 2018-05-18 深圳智达机械技术有限公司 A kind of intelligent glasses with identification function
CN108694394A (en) * 2018-07-02 2018-10-23 北京分音塔科技有限公司 Translator, method, apparatus and the storage medium of recognition of face
CN109241900A (en) * 2018-08-30 2019-01-18 Oppo广东移动通信有限公司 Control method, device, storage medium and the wearable device of wearable device
CN109241900B (en) * 2018-08-30 2021-04-09 Oppo广东移动通信有限公司 Wearable device control method and device, storage medium and wearable device
CN109841013A (en) * 2019-01-16 2019-06-04 北京悠购智能科技有限公司 Methods, systems and devices for detecting unsettled commodities

Similar Documents

Publication Publication Date Title
CN105893993A (en) Intelligent glasses
US11907739B1 (en) Annotating screen content in a mobile environment
CN111492328B (en) Non-verbal engagement of virtual assistants
Sun et al. Lip-interact: Improving mobile device interaction with silent speech commands
Ye et al. Current and future mobile and wearable device use by people with visual impairments
Lee et al. Towards augmented reality driven human-city interaction: Current research on mobile headsets and future challenges
CN112805671A (en) Limited operation of electronic devices
WO2016018488A9 (en) Systems and methods for discerning eye signals and continuous biometric identification
Wobbrock Situationally aware mobile devices for overcoming situational impairments
CN104200145A (en) Embedded authentication systems in an electronic device
CN103064512A (en) Technology of using virtual data to change static printed content into dynamic printed content
CN104169838A (en) Eye tracking based selectively backlighting display
US10939033B2 (en) Systems and methods for directing adaptive camera systems
CN107658016A (en) The Nounou intelligent guarding systems accompanied for health care for the aged
Wang et al. BlyncSync: enabling multimodal smartwatch gestures with synchronous touch and blink
Ascari et al. Personalized interactive gesture recognition assistive technology
Lee et al. Towards augmented reality-driven human-city interaction: Current research and future challenges
Monekosso et al. Intelligent environments: methods, algorithms and applications
Pushp et al. PrivacyShield: A mobile system for supporting subtle just-in-time privacy provisioning through off-screen-based touch gestures
Hupont et al. Use case cards: A use case reporting framework inspired by the european AI act
US20180356973A1 (en) Method And System For Enhanced Touchscreen Input And Emotional Expressiveness
Halonen Interaction Design Principles for Industrial XR
Bourguet Uncertainty and error handling in pervasive computing: a user’s perspective
US20240061918A1 (en) Techniques to provide user authentication for a near-eye display device
Akpınar Context-Aware Prediction of User Performance Problems Caused by the Situationally-Induced Impairments and Disabilities

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20160824