CN103713737B - Virtual keyboard system used for Google glasses - Google Patents
- Publication number: CN103713737B (application CN201310684110.XA)
- Authority
- CN
- China
- Prior art keywords
- intelligent glasses
- staff
- virtual keyboard
- main body
- module
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Landscapes
- User Interface Of Digital Computer (AREA)
- Position Input By Displaying (AREA)
Abstract
The invention discloses a virtual keyboard system for Google Glass. A hand tracking system and a self-positioning system are integrated into the Google Glass body. The hand tracking system detects the three-dimensional position of the hand and the spatial position and angle of the surface beneath it. The self-positioning system is fixed to the glasses body; it acquires real-time natural head motion data and computes the spatial direction, displacement, and speed of the head's real-time movement. The glasses body collects the information from the hand tracking system and the self-positioning system and computes real-time relative motion data. Mapping a physical surface to the on-screen keyboard effectively avoids erroneous responses from the interactive system; operating the hands on a physical surface effectively relieves the fatigue caused by operating with suspended arms; the interactive interface resembles a traditional one, matches the operator's habits, and is convenient to use; and the introduction of the self-positioning system effectively eliminates the operating offset caused by the wearer's head movement.
Description
Technical field
The present invention relates to the field of artificial intelligence, and in particular to a virtual keyboard system for smart glasses.
Background technology
In recent years, the development of smart wearable devices, including smart watches, smart glasses, and smart bracelets, has become increasingly active. Among these, smart glasses are a class of glasses that, like a smartphone, have an independent operating system; the user can install software, games, and other programs provided by software vendors; can add schedule items, navigate maps, interact with friends, take photos and videos, and launch video calls by voice or gesture control; and can access wireless networks through a mobile communication network.
Google Glass is a pair of "augmented reality" glasses released by Google in April 2012. It offers smartphone-like functions: taking pictures by voice control, video calling, determining direction, browsing the web, and processing text messages and email. However, human-computer interaction with today's smart glasses relies mainly on voice interaction or eye-motion capture; no better interaction mode yet allows smart glasses to respond to a person's command signals.
Voice interaction, because of the limitations of its input signal, is only a supplement to the prior art and cannot fully solve the problem of human-computer interaction. A person can only issue isolated, intermittent voice commands to the computer; voice cannot match a mouse for fast, precise command interaction, and cannot meet the demand for fast and accurate control input.
Eye-motion control has the smart glasses respond to captured eyeball actions such as rotation and blinking. The available control patterns are limited, the operation is not accurate enough, and prolonged use easily causes fatigue.
Traditional human-computer interaction mostly uses body movements to send command signals to the computer: pressing a mouse with a finger, controlling a game with the body, or tapping a touch screen with a finger. These techniques are mature; the classic configuration is the notebook computer's keyboard-plus-touchpad combination, which enables convenient and efficient input.
See-through smart glasses need an accurate, efficient interaction scheme that does not easily cause fatigue, so that the glasses can operate as an independent working unit. This would greatly expand the range of applications of smart glasses; they could even be used as a portable PC.
Using a key layout similar to a traditional notebook computer, a simulated virtual keyboard system can be built around the smart glasses. This not only suits people's usage habits but also supports accurate, efficient interaction. To avoid the fatigue that suspended hand operation easily produces, a commonplace flat object (a desk, a book, even a wall) can serve as the supporting body: the hands perform the corresponding actions on the surface of such an object, and the hand actions are mapped onto the virtual keyboard, forming the corresponding simulated interactive environment.
Patent application 201310263439.9, "Smart glasses system and interaction method for human-computer interaction", proposes an input method in which a binocular infrared device performs three-dimensional tracking and positioning of the hand suspended in front of the eyes and takes the resulting gesture actions as input, suitable for key operations on a floating screen simulated in the smart glasses.
The above method has the following deficiencies. 1. The human body sways naturally to some degree; under a natural posture a person's head is not absolutely still but rocks slightly in real time, with unpredictable amplitude and direction. When the hand operates in mid-air, if the head happens to rock in a particular direction while the hand stays still, the binocular infrared device may track a "click" of the hand relative to the head: it registers the relative motion between the hand and the projected screen as a click action, and it cannot determine whether that motion was caused by the hand or by the natural sway of the head. This easily causes false triggering and degrades control quality. If instead the person stays highly concentrated and keeps the head as still as possible to ensure accurate input, fatigue and distraction soon follow.
2. Because the hand must perform three-dimensional operations in mid-air, the fatigue produced by prolonged operation cannot be avoided. In practical product experience with three-dimensional gesture operation similar to Leap Motion devices, the arm usually starts to ache noticeably after about 10 minutes.
3. It does not provide a universal, precise operating interface, cannot meet diverse interaction requirements, and can offer only supplementary operation input.
From the above analysis, the scheme proposed in application 201310263439.9 suits temporary or insensitive operating scenarios; it is not suited to smart-glasses environments that demand frequent interaction and highly accurate interaction.
Smart glasses currently on the market generally respond through voice interaction or eye-motion capture, with the following disadvantages: 1. voice interaction is limited, input can only be issued intermittently, and it cannot be fast and accurate; 2. voice interaction easily causes fatigue and cannot support long interaction sessions; 3. voice interaction is only suitable as a supplementary interaction mode, and purely voice-driven interaction is difficult on its own; 4. eye-motion capture easily produces erroneous operations, since the human eye makes involuntary movements that cause false triggering; 5. eye-motion capture easily causes fatigue and is unsuitable for long sessions; 6. eye-motion capture offers few control patterns and is unsuitable for complex manipulation.
Summary of the invention
The object of the invention is to solve the above technical problems of the prior art by providing a virtual keyboard system for smart glasses.
The technical scheme includes a virtual keyboard system for smart glasses, comprising a smart glasses body, a hand tracking system, and a self-positioning system, the hand tracking system and self-positioning system being integrated into the smart glasses body. The hand tracking system is used to detect the hand's three-dimensional position and the spatial position and angle of the surface beneath it, and to transmit them to the smart glasses body. The self-positioning system is fixed to the smart glasses body; it acquires real-time natural head motion data, computes the spatial direction, displacement, and speed of the head's real-time natural motion, and transmits them to the smart glasses body. The smart glasses body collects the information from the hand tracking system and the self-positioning system and computes real-time relative motion data.
Preferably, the smart glasses body includes a smart glasses frame, a display module, and a computing module. The frame carries the hand tracking system, the self-positioning system, and all other modules of the system. The display module provides the projected display, including the display of the simulated keyboard and touchpad. The computing module collects the information from the hand tracking system and the self-positioning system, computes real-time relative motion data, and derives the effective operation information.
Preferably, the projected display includes a main display area, a virtual keyboard display area, and a touchpad display area.
Preferably, the hand tracking system includes a binocular camera module and a plane detection module. The binocular camera module detects depth images of the hand, obtains the hand's three-dimensional position, and transmits it to the smart glasses body. The plane detection module detects the spatial position and angle of the surface beneath the hand and transmits them to the smart glasses body.
Preferably, the binocular camera module includes two cameras and a binocular processing unit; the processing unit collects the camera images and uses a stereo algorithm to compute the positions of the fingertips.
Preferably, the plane detection module consists of at least three fixed rangefinders pointing in different directions.
Preferably, each rangefinder is a laser rangefinder or an infrared rangefinder.
Preferably, the self-positioning system includes a positioning sensor and a positioning unit. The positioning sensor is fixed to the smart glasses body and acquires real-time natural head motion data; the positioning unit reads the sensor data, computes the spatial direction, displacement, and speed of the head's real-time natural motion, and transmits them to the smart glasses body.
Preferably, the positioning sensor is a gyroscope or an accelerometer.
The beneficial effects are as follows: mapping a physical surface to the screen effectively avoids erroneous responses of the interactive system; operating the hands on a physical surface effectively relieves the fatigue caused by operating with suspended arms; the interactive interface resembles a traditional one, adapts to the operator's habits, and is convenient to use; and the introduction of the self-positioning system effectively eliminates the operating offset caused by the wearer's head movement.
Description of the drawings
Fig. 1 is the workflow diagram of the virtual keyboard system of an embodiment of the present invention.
Fig. 2 is a schematic diagram of binocular-camera stereo positioning in the virtual keyboard system of an embodiment of the present invention.
Fig. 3 is a schematic diagram of three-dimensional plane positioning in the virtual keyboard system of an embodiment of the present invention.
Fig. 4 is a schematic diagram of the display output areas of the virtual keyboard system of an embodiment of the present invention.
Detailed description of the invention
The present invention is described in further detail below with reference to the accompanying drawings and specific embodiments.
Based on smart glasses, the present invention realizes a smart-glasses virtual keyboard system by integrating a hand tracking system and a self-positioning system.
An embodiment of the present invention provides a virtual keyboard system for smart glasses which, as shown in Figure 1, includes a smart glasses body 1, a hand tracking system 2, and a self-positioning system 3. The operating environment of the virtual keyboard system can be the flat surface of any common object, provided it is non-transparent: a desk, a flat book, a wall, a windowsill, a stool, or the surface of any similar opaque object.
The hand tracking system 2 and self-positioning system 3 are integrated into the smart glasses body 1. The hand tracking system 2 detects the hand's three-dimensional position and the spatial position and angle of the surface beneath it, and transmits them to the smart glasses body 1. The self-positioning system 3 is fixed to the smart glasses body 1; it acquires real-time natural head motion data, computes the spatial direction, displacement, and speed of the head's real-time motion, and transmits them to the smart glasses body 1. The smart glasses body 1 collects the information from the hand tracking system 2 and the self-positioning system and computes real-time relative motion data.
Before use, the virtual keyboard system must first perform a pre-positioning step. During use, hand operations on the selected physical surface are mapped onto the projected virtual keyboard, producing the corresponding interaction effects.
In this embodiment, mapping the physical surface to the screen effectively avoids erroneous responses of the interactive system; operating the hands on a physical surface effectively relieves the fatigue caused by operating with suspended arms; the interactive interface resembles a traditional one, adapts to the operator's habits, and is convenient to use; and the introduction of the self-positioning system 3 effectively eliminates the operating offset caused by head movement.
The smart glasses body 1 includes a smart glasses frame, a display module, and a computing module. The frame adopts the basic framework of ordinary smart glasses and carries the hand tracking system 2, the self-positioning system 3, and all other modules of the system.
The smart glasses frame includes the physical body of ordinary smart glasses, a microprocessor, an independent power supply, a wireless connection device, and a projection display device.
The display module provides the projected display, including the display of the simulated keyboard and touchpad. The computing module collects the information from the hand tracking system 2 and the self-positioning system, computes real-time relative motion data, and derives the effective operation information.
The projected display includes a main display area, a virtual keyboard display area, and a touchpad display area. A virtual hand is displayed above the interaction areas according to the position of the actual hand.
Preferably, the hand tracking system 2 includes a binocular camera module, which provides the hand's 3D position relative to the smart glasses, and a plane detection module. The binocular camera module detects depth images of the hand, obtains the hand's three-dimensional position, and transmits it to the smart glasses body 1. The plane detection module detects the spatial position and angle of the surface beneath the hand and transmits them to the smart glasses body 1.
While the virtual keyboard system is working, the binocular camera module spatially positions and tracks the fingertips of both of the operator's hands. It includes two cameras and a binocular processing unit; the processing unit collects the camera images and uses a stereo algorithm to compute the positions of the fingertips.
To meet the needs of the virtual keyboard system, the binocular camera module positions and tracks the fingertips of both hands in real time: from the two detected images, a stereo algorithm computes the real-time fingertip positions, the basic principle of which is shown in Figure 2. When tracking determines that a fingertip has contacted the physical surface at some position, the virtual keyboard system treats it as a finger press at the corresponding virtual location and produces the corresponding input response.
Disparity: d = xl − xr = f·b/z,
where f is the focal length, b is the baseline, and z is the depth (the distance along the optical axis).
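As a small illustration of the disparity relation above, the following sketch recovers the depth z of a matched fingertip from its pixel coordinates in the two images and back-projects it to a 3D point. The camera parameters and pixel values are illustrative assumptions, not values from the patent.

```python
# Sketch of binocular fingertip depth recovery from the disparity relation
# d = xl - xr = f*b/z. All numeric parameters below are assumed for
# illustration; they are not taken from the patent.

def fingertip_depth(xl: float, xr: float, f: float, b: float) -> float:
    """Depth z (same units as baseline b) from the horizontal pixel
    coordinates of one fingertip in the left (xl) and right (xr) images."""
    d = xl - xr                     # disparity in pixels
    if d <= 0:
        raise ValueError("non-positive disparity: bad match or point at infinity")
    return f * b / d                # z = f*b/d

def back_project(xl: float, y: float, z: float, f: float, cx: float, cy: float):
    """3D point in the left-camera frame from pixel (xl, y) and depth z,
    using a simple pinhole model with principal point (cx, cy)."""
    return ((xl - cx) * z / f, (y - cy) * z / f, z)

if __name__ == "__main__":
    f, b = 700.0, 0.06              # focal length [px], baseline [m] (assumed)
    cx, cy = 320.0, 240.0           # principal point [px] (assumed)
    z = fingertip_depth(xl=350.0, xr=280.0, f=f, b=b)
    print(z)                        # ~0.6 m for a 70-pixel disparity
    print(back_project(350.0, 260.0, z, f, cx, cy))
```

Pressing detection then reduces to checking whether this 3D fingertip point has reached the detected surface.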
Because the system can track ten fingers simultaneously, it supports simultaneous ten-finger key presses. Likewise, once the hand is judged to have moved to the touchpad position, touching, sliding, double-clicking, and similar operations on the table are equivalent to the same operations performed on the virtual touchpad below the virtual keyboard.
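One way the press detection and key mapping just described could look in code is sketched below. The plane representation, the contact threshold, and the toy 3x3 key layout are illustrative assumptions, not details given in the patent.

```python
# Illustrative sketch: decide whether a tracked fingertip is "pressing" the
# physical surface, and map its in-plane position to a virtual key. The
# contact threshold, key sizes, and layout grid are assumptions.

CONTACT_MM = 5.0                          # fingertip-to-surface contact threshold

def point_plane_distance(p, n, d):
    """Distance from point p to the plane n.x + d = 0 (n assumed unit length)."""
    return abs(n[0]*p[0] + n[1]*p[1] + n[2]*p[2] + d)

def key_at(u, v, layout, key_w=18.0, key_h=18.0):
    """Map in-plane coordinates (u, v) in mm to a key label, or None."""
    col, row = int(u // key_w), int(v // key_h)
    if 0 <= row < len(layout) and 0 <= col < len(layout[row]):
        return layout[row][col]
    return None

def detect_press(p, uv, n, d, layout):
    """Return the pressed key label if fingertip p contacts the surface."""
    if point_plane_distance(p, n, d) <= CONTACT_MM:
        return key_at(uv[0], uv[1], layout)
    return None

if __name__ == "__main__":
    layout = ["QWE", "ASD", "ZXC"]        # toy keyboard patch (assumed)
    n, d = (0.0, 0.0, 1.0), 0.0           # surface modeled as the z=0 plane [mm]
    # Fingertip 2 mm above the surface at (30, 20) -> within threshold, key 'S':
    print(detect_press((30.0, 20.0, 2.0), (30.0, 20.0), n, d, layout))
    # Same position but 20 mm above the surface -> hovering, no press:
    print(detect_press((30.0, 20.0, 20.0), (30.0, 20.0), n, d, layout))
```

Running the same test per fingertip gives the ten-finger simultaneous pressing the description mentions; a touchpad region would be handled the same way with sliding and tapping gestures instead of key lookups.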
The plane detection module provides the 3D position of the physical surface in use relative to the smart glasses. It consists of at least three fixed rangefinders pointing in different directions; the three points located by these rangefinders determine the spatial position of the physical surface used as the operating environment.
Preferably, each rangefinder is a laser rangefinder or an infrared rangefinder.
The plane detection module provides the pose of the physical surface in use relative to the smart glasses. The beam emitted by each rangefinder produces one intersection point with the physical surface, so three rangefinders produce three intersection points, each with a measured range to the rangefinder group. Since each rangefinder's direction is known, a point's direction and distance determine its spatial position relative to the group; from the data of the three points, the spatial position of the physical surface relative to the rangefinder group is determined, as shown in Figure 3, yielding the surface's 3D position relative to the smart glasses.
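The three-point plane fix described above can be sketched as follows: each rangefinder's known unit direction times its measured range gives a beam hit point, and the cross product of two edge vectors gives the plane normal. The directions and ranges below are invented example values, not patent data.

```python
# Sketch of plane detection from three rangefinders in the glasses frame.
# Directions and ranges are invented example values for illustration.
import math

def sub(a, b):
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def plane_from_rangefinders(dirs, ranges):
    """Plane (unit normal n, offset d, with n.x + d = 0) from three unit
    beam directions and their measured ranges."""
    pts = [tuple(r * c for c in u) for u, r in zip(dirs, ranges)]  # beam hits
    n = cross(sub(pts[1], pts[0]), sub(pts[2], pts[0]))            # normal
    norm = math.sqrt(sum(c * c for c in n))
    n = tuple(c / norm for c in n)
    d = -sum(nc * pc for nc, pc in zip(n, pts[0]))                 # n.p0 + d = 0
    return n, d

if __name__ == "__main__":
    dirs = [(0.0, 0.0, 1.0),
            (0.6, 0.0, 0.8),
            (0.0, 0.6, 0.8)]          # unit beam directions (assumed)
    ranges = [1.0, 1.25, 1.25]        # ranges chosen so all hits lie at z = 1
    n, d = plane_from_rangefinders(dirs, ranges)
    print(n, d)                       # normal (0, 0, 1), plane z = 1
```

With the plane known, the fingertip-contact test in the press-detection step is simply the point-to-plane distance against a small threshold.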
The self-positioning system 3 detects in real time, while the virtual keyboard system is working, the disturbance to the binocular cameras produced by the natural sway of the head, avoiding the positioning offset caused by head motion. It includes a positioning sensor and a positioning unit.
The positioning sensor is fixed to the smart glasses body 1 and acquires real-time natural head motion data. The positioning unit reads the sensor data, computes the spatial direction, displacement, and speed of the head's real-time natural motion, and transmits them to the smart glasses body 1.
Preferably, the positioning sensor is a gyroscope or an accelerometer.
The working process of the embodiment of the present invention:
1. Before the virtual keyboard system starts working, first choose a flat, non-transparent physical surface (transparency would refract light and confuse the tracking system) and rest both hands naturally on it in a traditional typing posture.
2. After starting the virtual keyboard system, this default pose provides the initial positioning of the hands and the surface. Completing this step covers both the positioning of the hands relative to the smart glasses and the positioning of the physical surface in use relative to the smart glasses.
3. After initial positioning is complete, the system works normally. The simulated hands in the virtual keyboard system move strictly according to the position of the real hands: if a finger performs a tapping action, the simulated hand performs the corresponding tap on the simulated keyboard; the hands can likewise be moved to the touch operation region to simulate control of the touchpad, as shown in Figure 4. "Strictly according to the real hands" is meant relative to the natural sway of the head: head sway during operation does not affect the actions of the simulated hands in the image. The computing module obtains head motion information in real time from the self-positioning system 3 and, combining the head's real-time motion data with the hands' real-time relative motion data, deducts the head's contribution to obtain the hands' effective operation information, thereby achieving accurate interaction.
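The head-motion compensation in step 3 can be sketched in a translation-only form: the head-mounted cameras observe the hand in the glasses frame, so head motion appears as apparent hand motion, and adding back the head displacement reported by the self-positioning system recovers the hand's true motion. This simplification is an assumption for illustration; the patent does not give an exact formulation.

```python
# Sketch of head-sway compensation. The binocular cameras observe the
# fingertip in the glasses (head) frame, so head motion shows up as apparent
# hand motion. Adding the head's own displacement over the same interval
# recovers the hand's effective motion. Translation-only model: an
# illustrative simplification, not the patent's exact formulation.

def effective_hand_motion(rel_disp, head_disp):
    """True hand displacement = displacement observed in the glasses frame
    plus the head's own displacement over the same interval."""
    return tuple(r + h for r, h in zip(rel_disp, head_disp))

if __name__ == "__main__":
    # Head sways 3 mm left while the hand stays still: the cameras see the
    # hand apparently move 3 mm right, but the compensated motion is zero,
    # so no false key press is triggered.
    print(effective_hand_motion((3.0, 0.0, 0.0), (-3.0, 0.0, 0.0)))
    # Head still, finger actually taps 4 mm down toward the surface:
    print(effective_hand_motion((0.0, 0.0, -4.0), (0.0, 0.0, 0.0)))
```

This is exactly why a genuine tap survives compensation while pure head sway cancels out, addressing the false-triggering problem described in the background section.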
The embodiment of the present invention has the following effects:
1. Mapping the physical surface to the screen effectively avoids erroneous responses of the interactive system.
2. Operating the hands on a physical surface effectively relieves the fatigue caused by operating with suspended arms.
3. The interactive interface resembles a traditional one, adapts to the operator's habits, and is convenient to use.
4. The introduction of the glasses' self-positioning system effectively eliminates the operating offset caused by head movement.
5. Smart glasses can conveniently be used as an independent terminal with complete input capability, which favors the popularization of smart glasses.
The specific embodiments of the present invention described above do not limit the scope of the invention. Any corresponding changes and variations made according to the technical concept of the present invention shall fall within the protection scope of the claims of the present invention.
Claims (8)
1. A virtual keyboard system for smart glasses, characterized by including a smart glasses body, a hand tracking system, and a self-positioning system, the hand tracking system and self-positioning system being integrated into the smart glasses body; the hand tracking system is used to detect the hand's three-dimensional position and the spatial position and angle of the surface beneath it, and to transmit them to the smart glasses body; the self-positioning system is fixed to the smart glasses body and is used to acquire real-time natural head motion data, compute the spatial direction, displacement, and speed of the head's real-time natural motion, and transmit them to the smart glasses body; the smart glasses body collects the information from the hand tracking system and the self-positioning system and computes real-time relative motion data; wherein the hand tracking system includes a binocular camera module and a plane detection module, the binocular camera module being used to detect depth images of the hand, obtain the hand's three-dimensional position, and transmit it to the smart glasses body, and the plane detection module being used to detect the spatial position and angle of the surface beneath the hand and transmit them to the smart glasses body.
2. The virtual keyboard system for smart glasses of claim 1, characterized in that the smart glasses body includes a smart glasses frame, a display module, and a computing module; the frame carries the hand tracking system, the self-positioning system, and all other modules of the system; the display module provides the projected display, including the display of the simulated keyboard and touchpad; the computing module collects the information from the hand tracking system and the self-positioning system, computes real-time relative motion data, and derives the effective operation information.
3. The virtual keyboard system for smart glasses of claim 2, characterized in that the projected display includes a main display area, a virtual keyboard display area, and a touchpad display area.
4. The virtual keyboard system for smart glasses of claim 1, characterized in that the binocular camera module includes two cameras and a binocular processing unit, the processing unit collecting the camera images and using a stereo algorithm to compute the positions of the fingertips.
5. The virtual keyboard system for smart glasses of claim 1, characterized in that the plane detection module consists of at least three fixed rangefinders pointing in different directions.
6. The virtual keyboard system for smart glasses of claim 5, characterized in that each rangefinder is a laser rangefinder or an infrared rangefinder.
7. The virtual keyboard system for smart glasses of claim 1, characterized in that the self-positioning system includes a positioning sensor and a positioning unit, the positioning sensor being fixed to the smart glasses body and used to acquire real-time natural head motion data, and the positioning unit reading the sensor data, computing the spatial direction, displacement, and speed of the head's real-time natural motion, and transmitting them to the smart glasses body.
8. The virtual keyboard system for smart glasses of claim 7, characterized in that the positioning sensor is a gyroscope or an accelerometer.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201310684110.XA CN103713737B (en) | 2013-12-12 | 2013-12-12 | Virtual keyboard system used for Google glasses |
Publications (2)
Publication Number | Publication Date |
---|---|
CN103713737A CN103713737A (en) | 2014-04-09 |
CN103713737B (en) | 2017-01-11 |
Family
ID=50406776
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201310684110.XA Active CN103713737B (en) | 2013-12-12 | 2013-12-12 | Virtual keyboard system used for Google glasses |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN103713737B (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101673161A (en) * | 2009-10-15 | 2010-03-17 | 复旦大学 | Visual, operable and non-solid touch screen system |
CN102799318A (en) * | 2012-08-13 | 2012-11-28 | 深圳先进技术研究院 | Human-machine interaction method and system based on binocular stereoscopic vision |
CN102906623A (en) * | 2010-02-28 | 2013-01-30 | 奥斯特豪特集团有限公司 | Local advertising content on an interactive head-mounted eyepiece |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9069164B2 (en) * | 2011-07-12 | 2015-06-30 | Google Inc. | Methods and systems for a virtual input device |
US8217856B1 (en) * | 2011-07-27 | 2012-07-10 | Google Inc. | Head-mounted display that displays a visual representation of physical interaction with an input interface located outside of the field of view |
- 2013-12-12: Application CN201310684110.XA filed in China; granted as CN103713737B (status: Active)
Legal Events

Date | Code | Title | Description |
---|---|---|---|
| C06 | Publication | |
| PB01 | Publication | |
| C10 | Entry into substantive examination | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |