CN208722145U - Smart glasses focus tracking device and smart glasses - Google Patents

Smart glasses focus tracking device and smart glasses

Info

Publication number
CN208722145U
Authority
CN
China
Prior art keywords
drop point
intelligent glasses
image
laser
laser drop
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201821572786.4U
Other languages
Chinese (zh)
Inventor
蔡海蛟
冯歆鹏
周骥
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Kunshan Zhaoguan Electronic Technology Co Ltd
Shanghai Zhao Ming Electronic Technology Co Ltd
Original Assignee
Kunshan Zhaoguan Electronic Technology Co Ltd
Shanghai Zhao Ming Electronic Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Kunshan Zhaoguan Electronic Technology Co Ltd, Shanghai Zhao Ming Electronic Technology Co Ltd filed Critical Kunshan Zhaoguan Electronic Technology Co Ltd
Priority to CN201821572786.4U priority Critical patent/CN208722145U/en
Application granted granted Critical
Publication of CN208722145U publication Critical patent/CN208722145U/en
Priority to KR1020207034439A priority patent/KR102242719B1/en
Priority to PCT/CN2019/107669 priority patent/WO2020063614A1/en
Priority to EP19199836.8A priority patent/EP3640840B1/en
Priority to JP2019175346A priority patent/JP6734602B2/en
Priority to US16/669,919 priority patent/US10860165B2/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • User Interface Of Digital Computer (AREA)

Abstract

The utility model provides a smart glasses focus tracking device and smart glasses, relating to the field of smart devices. The device captures an image, detects the laser spot in the image, recognizes the object at the laser spot in the image, and then announces the object at the laser spot to the user; that is, it announces to the user the object found at the detected laser spot instead of announcing every object in the image, which improves the announcement efficiency of the smart glasses and the user experience.

Description

Smart glasses focus tracking device and smart glasses
Technical field
The utility model belongs to the field of smart devices, and in particular relates to a smart glasses focus tracking device and smart glasses.
Background art
At present, there is a type of smart glasses designed for visually impaired users. To make life easier for visually impaired users, such glasses can capture image information; after the user puts the smart glasses on, the glasses announce the content of the current image to the user, providing convenience in the daily life of visually impaired users.
However, the inventors of the utility model have found that when using such smart glasses the user cannot actively choose the center of interaction. The approach usually taken by the smart glasses is to announce all the information in the image at once; they cannot accurately lock onto the text or the range of objects to be recognized in a single pass, and the point to be recognized has to be indicated by an external physical pointer (such as a finger), which is similar to giving the device an external instruction that it must receive before it can perform its function.
As a result, current smart glasses announce content inefficiently, the user cannot perceive where objects lie in the field of view, and the user experience is poor.
Summary of the utility model
In view of the above drawbacks or deficiencies of the prior art, it is desirable to provide a smart glasses focus tracking device and smart glasses that improve the announcement efficiency of the smart glasses and thereby improve the user experience.
According to a first aspect of the utility model, a smart glasses focus tracking device is provided, comprising:
an acquisition unit, configured to capture an image and detect the laser spot in the image;
a recognition unit, configured to recognize the object at the laser spot in the image;
an announcement unit, configured to announce the object at the laser spot to the user.
Further, in the device:
the recognition unit may be further configured to recognize objects within a set region around the laser spot in the image;
the announcement unit may be further configured to announce to the user the objects within the set region around the laser spot.
Further, in the device:
the announcement unit is further configured to: when there is no object at the laser spot or within the set region around it in the image, prompt the user to turn his or her head, or prompt the user that there is no object at the laser spot or within the set region around it in the image.
Further, the announcement unit is configured to: when there is no object at the laser spot or within the set region around it in the image, determine the direction of an object present in the image and prompt the user to turn his or her head toward that direction.
Further, in the device:
the announcement unit is further configured to: while the user turns his or her head, vary the voice prompt according to how close the object is to the laser spot or to the set region around it.
Preferably, the laser spot is set in the central region of the image.
Further, the acquisition unit may be further configured to:
capture the image with a single camera and detect the infrared laser spot in the image; or
capture the image with a first camera, detect the laser spot position with a second camera, and determine the position of the laser spot in the captured image according to a preset correspondence between the first camera and the second camera.
Preferably, before capturing the image and detecting the laser spot in the image, the acquisition unit is further configured to:
determine that the movement speed of the smart glasses is below a set value.
According to a second aspect, embodiments of the utility model further provide smart glasses, comprising:
a laser emitter for emitting a laser beam;
a camera device for capturing an image and detecting the laser spot in the image;
a processor for recognizing the object at the laser spot in the image;
a sound playback device for announcing the object at the laser spot to the user;
the processor being connected to the camera device and the sound playback device.
Further, the camera device specifically includes:
a first camera for capturing images;
a second camera for detecting the laser spot;
the processor determines, according to a preset correspondence between the first camera and the second camera, the position of the laser spot detected by the second camera in the image captured by the first camera.
Further, the direction in which the laser emitter emits the laser beam points toward the central region of the image captured by the camera device.
Preferably, the laser emitter is specifically an infrared emitter.
Further, the sound playback device is specifically an earphone or a loudspeaker.
Further, the smart glasses further include:
an inertial sensor assembly for determining the motion state of the smart glasses, the inertial sensor assembly being connected to the processor.
Further, the inertial sensor assembly includes one of the following, or a combination thereof:
a speed sensor for determining the movement speed of the smart glasses;
an acceleration sensor for determining the movement speed of the smart glasses;
a gyroscope for determining the angle of the smart glasses relative to the axis pointing vertically toward the center of the earth.
Embodiments of the utility model provide a smart glasses focus tracking device and smart glasses. The device captures an image, detects the laser spot in the image, recognizes the object at the laser spot in the image, and then announces the object at the laser spot to the user, i.e. it announces to the user the object found at the detected laser spot rather than announcing every object in the image, which improves the announcement efficiency of the smart glasses and the user experience.
It should be appreciated that the above is merely an overview of the technical solution of the utility model, provided so that the technical means of the utility model can be understood more clearly and implemented in accordance with the contents of the specification. Specific embodiments of the utility model are set forth below so that the above and other objects, features and advantages of the utility model can be understood more clearly.
Brief description of the drawings
By reading the following detailed description of exemplary embodiments, those of ordinary skill in the art will appreciate the advantages and benefits described herein, as well as others. The drawings are only for the purpose of illustrating exemplary embodiments and are not to be considered limiting of the utility model. Throughout the drawings, the same reference numerals denote the same components. In the drawings:
Fig. 1 is a flowchart of the smart glasses focus tracking method provided by an embodiment of the utility model;
Fig. 2 is a schematic diagram of the laser spot position in an image and the set region around it, provided by an embodiment of the utility model;
Fig. 3 is a flowchart of the smart glasses focus tracking method in a specific embodiment of the utility model;
Fig. 4 is a schematic structural diagram of the smart glasses focus tracking device provided by an embodiment of the utility model;
Fig. 5 is a schematic structural diagram of the smart glasses provided by an embodiment of the utility model;
Fig. 6 is a schematic structural diagram of the smart glasses in a specific embodiment of the utility model;
Fig. 7 is a schematic structural diagram of the smart glasses in another specific embodiment of the utility model.
Detailed description of embodiments
Exemplary embodiments of the present disclosure are described in more detail below with reference to the accompanying drawings. Although exemplary embodiments of the disclosure are shown in the drawings, it should be understood that the disclosure may be embodied in various forms and should not be limited by the embodiments set forth herein. Rather, these embodiments are provided so that the disclosure will be understood more thoroughly and its scope will be conveyed fully to those skilled in the art.
In this disclosure, it should be understood that terms such as "comprising" or "having" are intended to indicate the presence of the features, numbers, steps, actions, components, parts or combinations thereof disclosed in this specification, and are not intended to exclude the possibility that one or more other features, numbers, steps, actions, components, parts or combinations thereof are present.
It should also be noted that, provided there is no conflict, the embodiments of the utility model and the features of the embodiments may be combined with each other. The utility model is described in detail below with reference to the accompanying drawings and embodiments.
Fig. 1 shows the smart glasses focus tracking method provided by an embodiment of the utility model. The method comprises:
Step S101: capture an image and detect the laser spot in the image;
Step S102: recognize the object at the laser spot in the image;
Step S103: announce the object at the laser spot to the user.
With this smart glasses focus tracking method, the object found at the detected laser spot is announced to the user instead of announcing every object in the image, which improves the announcement efficiency of the smart glasses and the user experience.
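For illustration only, the following is a minimal Python sketch of steps S101 to S103, assuming an attached webcam, a visible red laser and OpenCV; the helper names (detect_laser_spot, recognize_object, announce) and the HSV thresholds are placeholders and are not part of the utility model.

```python
# Minimal sketch of steps S101-S103 (illustrative; not the utility model's implementation).
import cv2
import numpy as np

def detect_laser_spot(frame_bgr):
    """Return the (x, y) centroid of bright red pixels as a stand-in for the laser spot."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (0, 120, 180), (10, 255, 255))  # illustrative bright-red range
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return None
    return int(xs.mean()), int(ys.mean())

def recognize_object(frame_bgr, point):
    """Placeholder: a real device would run an object detector or OCR here."""
    return "object"

def announce(text):
    print(f"[TTS] {text}")  # stand-in for the sound playback device

cap = cv2.VideoCapture(0)
ok, frame = cap.read()                      # S101: capture an image
if ok:
    spot = detect_laser_spot(frame)         # S101: detect the laser spot in the image
    if spot is not None:
        announce(recognize_object(frame, spot))   # S102 + S103
cap.release()
```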
The object in step S102 may be any object, or may be a predefined, specified object.
For small or distant objects, the laser spot may not land exactly on the object. In that case, a region can additionally be set around the laser spot, and the objects within this region are all recognized and announced directly, further improving the user experience (a sketch of this region test follows the list below).
In this case, the method further comprises:
recognizing the objects within the set region around the laser spot in the image;
announcing to the user the objects within the set region around the laser spot.
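A sketch of such a region test, assuming axis-aligned bounding boxes from an upstream detector; the region size (one third of the image area here, following the 1/3 to 1/2 range mentioned below for Fig. 2) and the helper names are assumptions.

```python
# Illustrative region test around the laser spot (not the utility model's implementation).
import math

def region_around_spot(spot, image_size, area_fraction=1/3):
    """Square region centred on the laser spot covering `area_fraction` of the image area."""
    w, h = image_size
    side = math.sqrt(area_fraction * w * h)
    x, y = spot
    return (max(0, x - side / 2), max(0, y - side / 2),
            min(w, x + side / 2), min(h, y + side / 2))

def labels_in_region(detections, region):
    """Keep detections whose boxes overlap the region; detections = [(label, (x1, y1, x2, y2)), ...]."""
    rx1, ry1, rx2, ry2 = region
    return [label for label, (x1, y1, x2, y2) in detections
            if x1 < rx2 and x2 > rx1 and y1 < ry2 and y2 > ry1]

region = region_around_spot(spot=(640, 360), image_size=(1280, 720))
print(labels_in_region([("cup", (600, 300, 700, 420)), ("door", (0, 0, 100, 700))], region))  # -> ['cup']
```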
Further, a visually impaired user may find it difficult to aim the focus of the smart glasses precisely at the target object. In that case, when there is no object at the laser spot or within the set region around it in the image, the user can be prompted to turn his or her head, or can be told that there is no object at the laser spot or within the set region around it in the image.
In general, when there is no object at the laser spot or within the set region around it in the image, the user can be prompted to turn his or her head, or a voice prompt can tell the user that there is no object at the laser spot or within the set region around it, after which the user may turn his or her head freely to search for an object. Likewise, when no object is recognized anywhere in the image, the user can be prompted to turn his or her head, or a voice prompt can tell the user that there is no object in the image region, after which the user may turn his or her head freely to search for an object.
Further, when there is no object at the laser spot or within the set region around it in the image, the direction of an object elsewhere in the image can be determined and the user can be prompted to turn his or her head toward that direction. For example, if there is text or a QR code to the left of the laser spot in the image, the user can be told that there is text or a QR code to the left of the laser spot, so that the user turns his or her head to the left.
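A sketch of how such a directional prompt could be derived from the offset between the object centre and the laser spot; the dead zone and the wording are illustrative assumptions.

```python
# Illustrative directional prompt (not the utility model's implementation).
def turn_hint(spot, obj_center, deadzone=40):
    """Return a rough turn-your-head hint from the pixel offset between object and laser spot."""
    dx = obj_center[0] - spot[0]
    dy = obj_center[1] - spot[1]
    if abs(dx) <= deadzone and abs(dy) <= deadzone:
        return "the target is at the focus"
    if abs(dx) >= abs(dy):
        return "turn your head left" if dx < 0 else "turn your head right"
    return "turn your head up" if dy < 0 else "turn your head down"

print(turn_hint(spot=(640, 360), obj_center=(320, 380)))  # -> "turn your head left"
```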
While the user turns his or her head, the voice prompt can be varied according to how close the object is to the laser spot or to the set region around it, making it easier for a visually impaired user to judge how far to keep turning and helping the user position the focus more accurately. Here the user may be turning his or her head toward the direction of the prompted object, or turning freely to search for an object.
For example, the closer the object in the image gets to the laser spot or to the set region around it, the more urgent the voice prompt can become, helping the user judge whether the head is turning in the right direction and by a suitable amount. A further, distinct voice prompt can remind the user when the object has reached the laser spot or the set region around it.
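One way to realise a prompt that becomes more urgent as the object approaches the focus is to shorten the interval between beeps as the pixel distance shrinks; the mapping below is an illustrative assumption, not a prescription of the utility model.

```python
# Illustrative proximity-to-urgency mapping for the voice prompt.
def beep_interval(spot, obj_center, min_interval=0.1, max_interval=1.0, full_scale=500.0):
    """Seconds between beeps: long when the object is far from the laser spot, short when close."""
    dist = ((obj_center[0] - spot[0]) ** 2 + (obj_center[1] - spot[1]) ** 2) ** 0.5
    frac = min(dist / full_scale, 1.0)
    return min_interval + frac * (max_interval - min_interval)

print(round(beep_interval((640, 360), (660, 370)), 2))  # object near the spot -> fast beeps (~0.14 s)
print(round(beep_interval((640, 360), (100, 360)), 2))  # object far from the spot -> slow beeps (1.0 s)
```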
In line with the habits of most users, the laser spot is preferably set in the central region of the image. Because the angle between the camera and the laser emitter is relatively fixed, the coordinates of the laser spot in the image may not be exactly the same for near and far objects; keeping the laser spot as close as possible to the central region of the image gives the user a better experience.
As shown in Fig. 2, the laser spot may lie at the center of the image, and the set region around the laser spot may be a circular or rectangular region surrounding the laser spot that accounts for roughly 1/3 to 1/2 of the image area. Those skilled in the art can adjust the initial setting of the laser spot and of the range of the set region around it according to the actual situation, and the user can also adjust the laser spot and the range of the set region around it according to his or her own habits.
When the laser beam uses visible light, a red dot visible to the naked eye appears at the laser spot. In that case, the position of the red dot can be identified directly in the captured image to determine the laser spot position, so capturing the image and detecting the laser spot in step S101 can be completed with a single camera. However, visible light may disturb other people in the environment, so light that is invisible to the naked eye is preferable. In that case, the image can be captured with a first camera, the laser spot position can be detected with a second camera, and the position of the laser spot in the captured image can be determined according to the preset correspondence between the first camera and the second camera.
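A sketch of the two-camera case: the spot is located in the second camera's frame and mapped into the first camera's image through a correspondence fixed in advance. Here the correspondence is modelled as a planar homography estimated once from matched calibration points; the point coordinates are placeholders, and a single homography is only an approximation when scene depth varies.

```python
# Illustrative two-camera mapping of the laser spot (placeholder calibration data).
import cv2
import numpy as np

# Preset correspondence between the two cameras, estimated once from matched points.
pts_cam2 = np.array([[100, 80], [520, 90], [510, 400], [110, 390]], dtype=np.float32)
pts_cam1 = np.array([[210, 150], [1050, 160], [1040, 780], [220, 770]], dtype=np.float32)
H, _ = cv2.findHomography(pts_cam2, pts_cam1)

def spot_in_main_image(spot_cam2):
    """Map a laser-spot pixel detected by the second camera into the first camera's image."""
    p = np.array([[spot_cam2]], dtype=np.float32)        # shape (1, 1, 2) for perspectiveTransform
    x, y = cv2.perspectiveTransform(p, H)[0, 0]
    return float(x), float(y)

print(spot_in_main_image((320, 240)))   # approximate spot position in the first camera's frame
```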
Preferably, before step S101, it can further be determined that the movement speed of the smart glasses is below a set value, so that no image is captured while the user is turning his or her head, which would otherwise disturb the user.
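A sketch of such a gate, with the sensor read-out and the threshold standing in for the inertial sensor assembly described below; both are assumptions for illustration.

```python
# Illustrative motion gate before step S101.
import time

SPEED_THRESHOLD = 0.2   # assumed threshold, in whatever units the inertial sensors report

def read_head_speed():
    """Placeholder for a reading from the glasses' inertial sensor assembly."""
    return 0.05

def wait_until_steady(poll_interval=0.05, timeout=2.0):
    """Block until the head movement speed drops below the threshold, or give up after `timeout`."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if read_head_speed() < SPEED_THRESHOLD:
            return True
        time.sleep(poll_interval)
    return False

if wait_until_steady():
    pass  # proceed with step S101: capture the image and detect the laser spot
```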
In a preferred embodiment, the smart glasses focus tracking method provided by an embodiment of the utility model, as shown in Fig. 3, comprises the following steps (a sketch of this flow is given after the step list):
Step S301: capture an image with the first camera, and detect the laser spot position with the second camera;
Step S302: determine the position of the laser spot in the captured image according to the preset correspondence between the first camera and the second camera;
Step S303: determine whether there is an object at the laser spot; if so, go to step S304, otherwise go to step S306;
Step S304: recognize the object at the laser spot in the image;
Step S305: announce the object at the laser spot to the user;
Step S306: determine whether there is an object anywhere in the image; if so, go to step S307, otherwise go to step S308;
Step S307: determine the positional relationship between the object in the image and the laser spot, and prompt the user to turn his or her head toward the direction of the object;
Step S308: prompt the user that there is no object in the image, or prompt the user to turn his or her head.
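Tying the pieces together, a sketch of one pass of the S303 to S308 decision flow; the detector output, the spot coordinates and the speech callback are placeholders of the same kind as in the earlier sketches.

```python
# Illustrative decision flow for steps S303-S308 (not the utility model's implementation).
def focus_tracking_step(detections, spot, speak):
    """One pass of the Fig. 3 flow; detections = [(label, (x1, y1, x2, y2)), ...]."""
    def contains(box, pt):
        x1, y1, x2, y2 = box
        return x1 <= pt[0] <= x2 and y1 <= pt[1] <= y2

    at_spot = [label for label, box in detections if contains(box, spot)]    # S303
    if at_spot:
        speak(", ".join(at_spot))                                            # S304 + S305
        return
    if detections:                                                           # S306
        label, (x1, _, x2, _) = detections[0]
        side = "left" if (x1 + x2) / 2 < spot[0] else "right"
        speak(f"{label} to the {side}, turn your head that way")             # S307
    else:
        speak("no object in view, try turning your head")                    # S308

focus_tracking_step([("sign", (100, 200, 300, 400))], spot=(640, 360), speak=print)
```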
In step S301, images are typically captured while the smart glasses are in a relatively steady state; if the smart glasses are moving at a relatively high speed, no image is captured for the moment. Whether the user is turning his or her head can be judged from the movement speed and acceleration of the smart glasses.
In step S303, it is possible to check only whether there is an object at the laser spot, or also whether there is an object within the set region around the laser spot.
In step S308, after the user is told that there is no object in the image, the user may turn his or her head freely according to the prompt to search for an object.
The prompts of step S307 and step S308 can also be enabled or disabled by the user. If the user does not enable the prompts, then when no announcement arrives the user can conclude that there is no object in the image, and may turn his or her head freely to search for an object.
With these smart glasses, when the user wants to learn about the environment, he or she can obtain information about the objects in a particular direction by turning the head. For example, when a visually impaired user enters a hall and wants to understand its layout and what is in it, he or she can obtain information about the objects in particular directions by turning the head. For specific information such as text or QR codes: if there is text at the laser spot, it is announced directly; if the text is to the left of the laser spot, the user is told that there is text on the left, the user can turn his or her head to the left, and the text is announced once it reaches the laser spot.
The laser spot position in the image may or may not be marked, depending on user settings. For a wearable device used by a sighted user (such as a VR device), a cross mark can be drawn at the set focus position of the image, as shown in Fig. 2, to indicate that this point is the laser spot, i.e. the current image focus, making it easy for the user to judge and adjust; for a visually impaired user, no mark need be drawn.
Likewise, the set region around the laser spot position may or may not be marked, depending on user settings. For a wearable device used by a sighted user (such as a VR device), the set region around the laser spot position can be outlined, for example with a red frame, to indicate that this region is the visual focus region of the image, making it easy for the user to judge and adjust; for a visually impaired user, no mark need be drawn.
When there is characteristic information (such as text or a QR code) at the laser spot position of the image, it is announced directly, and the announcement range extends from that position into a surrounding region of 1/3 to 1/2 of the image area (the proportion is adjustable).
Correspondingly, an embodiment of the utility model provides a smart glasses focus tracking device, as shown in Fig. 4, comprising:
an acquisition unit 401, configured to capture an image and detect the laser spot in the image;
a recognition unit 402, configured to recognize the object at the laser spot in the image;
an announcement unit 403, configured to announce the object at the laser spot to the user.
Further, the recognition unit 402 is further configured to:
recognize the objects within the set region around the laser spot in the image;
and the announcement unit 403 is further configured to:
announce to the user the objects within the set region around the laser spot.
Further, the announcement unit 403 is further configured to: when there is no object at the laser spot or within the set region around it in the image, prompt the user to turn his or her head.
Further, prompting the user to turn his or her head when there is no object at the laser spot or within the set region around it in the image specifically includes:
when there is no object at the laser spot or within the set region around it in the image, determining the direction of an object in the image and prompting the user to turn his or her head toward that direction.
Further, the announcement unit 403 is further configured to:
when the user turns his or her head toward the direction of the prompted object, vary the voice prompt according to how close the object is to the laser spot or to the set region around it.
Preferably, the laser spot is set in the central region of the image.
Further, the acquisition unit 401 is specifically configured to:
capture the image with a single camera and detect the infrared laser spot in the image; or
capture the image with a first camera, detect the laser spot position with a second camera, and determine the position of the laser spot in the captured image according to the preset correspondence between the first camera and the second camera.
Preferably, the acquisition unit 401 is further configured to:
before capturing the image and detecting the laser spot in the image, determine that the movement speed of the smart glasses is below a set value.
Correspondingly, an embodiment of the utility model provides smart glasses, as shown in Fig. 5, comprising:
a laser emitter 501 for emitting a laser beam;
a camera device 502 for capturing an image and detecting the laser spot in the image;
a processor 503 for recognizing the object at the laser spot in the image;
a sound playback device 504 for announcing the object at the laser spot to the user;
the processor 503 being connected to the camera device 502 and the sound playback device 504.
The laser spot detected by the camera device 502 is the landing point of the laser emitted by the laser emitter 501.
Further, as shown in Fig. 6, the camera device 502 specifically includes:
a first camera 5021 for capturing images;
a second camera 5022 for detecting the laser spot;
the processor 503 determines, according to the preset correspondence between the first camera 5021 and the second camera 5022, the position of the laser spot detected by the second camera 5022 in the image captured by the first camera 5021.
Further, the direction in which the laser emitter 501 emits the laser beam points toward the central region of the image captured by the camera device 502.
Further, the laser emitter 501 is specifically an infrared emitter.
Further, the sound playback device 504 is specifically an earphone or a loudspeaker.
Further, as shown in Fig. 7, the smart glasses further include:
an inertial sensor assembly 505 for determining the motion state of the smart glasses, the inertial sensor assembly 505 being connected to the processor 503.
Further, the inertial sensor assembly 505 may include one of the following, or a combination thereof:
a speed sensor for determining the movement speed of the smart glasses;
an acceleration sensor for determining the movement speed of the smart glasses;
a gyroscope for determining the angle of the smart glasses relative to the axis pointing vertically toward the center of the earth.
With the smart glasses focus tracking method, the device, the smart glasses and the storage medium provided by the embodiments of the utility model, the object at the laser spot in the field of view can be announced to the user instead of announcing every object in the field of view, and the direction of an object can further be prompted to the user, making it easier for the user to track objects and improving the user experience.
The flowcharts and block diagrams in the drawings illustrate possible architectures, functions and operations of methods, devices and computer-readable storage media according to various embodiments of the disclosure. It should be noted that the steps represented by the boxes in a flowchart need not be performed in the order indicated by their labels: they may sometimes be performed substantially in parallel, and sometimes in the reverse order, depending on the functions involved. It should also be noted that each box in the block diagrams and/or flowcharts, and combinations of such boxes, can be implemented by hardware that performs the specified functions or operations, or by a combination of hardware and computer instructions.
The units or modules described in the embodiments of the disclosure may be implemented in software or in hardware.
From the above description of the embodiments, those skilled in the art can clearly understand that each embodiment can be implemented by software plus a necessary general-purpose hardware platform, or of course by hardware. Based on this understanding, the above technical solutions, or the part of them that contributes to the prior art, can in essence be embodied in the form of a software product. The computer software product can be stored in a computer-readable storage medium, such as a ROM/RAM, a magnetic disk or an optical disc, and includes a number of instructions for causing a computer device (which may be a personal computer, a server, a network device or the like) to execute the methods described in the embodiments or in certain parts of the embodiments.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the utility model, not to limit them. Although the utility model has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that the technical solutions described in the foregoing embodiments can still be modified, or some of their technical features can be replaced by equivalents, and that such modifications or replacements do not make the essence of the corresponding technical solutions depart from the spirit and scope of the technical solutions of the embodiments of the utility model.

Claims (15)

1. A smart glasses focus tracking device, characterized by comprising:
an acquisition unit, configured to capture an image and detect the laser spot in the image;
a recognition unit, configured to recognize the object at the laser spot in the image;
an announcement unit, configured to announce the object at the laser spot to the user.
2. The smart glasses focus tracking device according to claim 1, characterized in that:
the recognition unit is further configured to recognize objects within a set region around the laser spot in the image;
the announcement unit is further configured to announce to the user the objects within the set region around the laser spot.
3. The smart glasses focus tracking device according to claim 2, characterized in that:
the announcement unit is further configured to: when there is no object at the laser spot or within the set region around it in the image, prompt the user to turn his or her head, or prompt the user that there is no object at the laser spot or within the set region around it in the image.
4. The smart glasses focus tracking device according to claim 3, characterized in that the announcement unit is configured to:
when there is no object at the laser spot or within the set region around it in the image, determine the direction of an object in the image and prompt the user to turn his or her head toward that direction.
5. The smart glasses focus tracking device according to claim 3, characterized in that:
the announcement unit is further configured to: while the user turns his or her head, vary the voice prompt according to how close the object is to the laser spot or to the set region around it.
6. The smart glasses focus tracking device according to claim 1, characterized in that the laser spot is set in the central region of the image.
7. The smart glasses focus tracking device according to claim 1, characterized in that the acquisition unit is further configured to:
capture the image with a single camera and detect the infrared laser spot in the image; or
capture the image with a first camera, detect the laser spot position with a second camera, and determine the position of the laser spot in the captured image according to a preset correspondence between the first camera and the second camera.
8. The smart glasses focus tracking device according to claim 1, characterized in that before capturing the image and detecting the laser spot in the image, the acquisition unit is further configured to:
determine that the movement speed of the smart glasses is below a set value.
9. Smart glasses, characterized by comprising:
a laser emitter for emitting a laser beam;
a camera device for capturing an image and detecting the laser spot in the image;
a processor for recognizing the object at the laser spot in the image;
a sound playback device for announcing the object at the laser spot to the user;
the processor being connected to the camera device and the sound playback device.
10. The smart glasses according to claim 9, characterized in that the camera device specifically includes:
a first camera for capturing images;
a second camera for detecting the laser spot;
the processor determining, according to a preset correspondence between the first camera and the second camera, the position of the laser spot detected by the second camera in the image captured by the first camera.
11. The smart glasses according to claim 9, characterized in that the direction in which the laser emitter emits the laser beam points toward the central region of the image captured by the camera device.
12. The smart glasses according to claim 9, characterized in that the laser emitter is specifically an infrared emitter.
13. The smart glasses according to claim 9, characterized in that the sound playback device is specifically an earphone or a loudspeaker.
14. The smart glasses according to claim 9, characterized by further comprising:
an inertial sensor assembly for determining the motion state of the smart glasses, the inertial sensor assembly being connected to the processor.
15. The smart glasses according to claim 14, characterized in that the inertial sensor assembly includes one of the following, or a combination thereof:
a speed sensor for determining the movement speed of the smart glasses;
an acceleration sensor for determining the movement speed of the smart glasses;
a gyroscope for determining the angle of the smart glasses relative to the axis pointing vertically toward the center of the earth.
CN201821572786.4U 2018-09-26 2018-09-26 Smart glasses focus tracking device and smart glasses Active CN208722145U (en)

Priority Applications (6)

Application Number Priority Date Filing Date Title
CN201821572786.4U CN208722145U (en) 2018-09-26 2018-09-26 Smart glasses focus tracking device and smart glasses
KR1020207034439A KR102242719B1 (en) 2018-09-26 2019-09-25 Smart glasses tracking method and device, and smart glasses and storage media
PCT/CN2019/107669 WO2020063614A1 (en) 2018-09-26 2019-09-25 Smart glasses tracking method and apparatus, and smart glasses and storage medium
EP19199836.8A EP3640840B1 (en) 2018-09-26 2019-09-26 Tracking method and apparatus for smart glasses, smart glasses and storage medium
JP2019175346A JP6734602B2 (en) 2018-09-26 2019-09-26 Tracking method and tracking device for smart glasses, smart glasses, and storage media
US16/669,919 US10860165B2 (en) 2018-09-26 2019-10-31 Tracking method and apparatus for smart glasses, smart glasses and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201821572786.4U CN208722145U (en) 2018-09-26 2018-09-26 Smart glasses focus tracking device and smart glasses

Publications (1)

Publication Number Publication Date
CN208722145U true CN208722145U (en) 2019-04-09

Family

ID=65982855

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201821572786.4U Active CN208722145U (en) 2018-09-26 2018-09-26 A kind of intelligent glasses Focus tracking device and intelligent glasses

Country Status (1)

Country Link
CN (1) CN208722145U (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020063614A1 (en) * 2018-09-26 2020-04-02 上海肇观电子科技有限公司 Smart glasses tracking method and apparatus, and smart glasses and storage medium
US10860165B2 (en) 2018-09-26 2020-12-08 NextVPU (Shanghai) Co., Ltd. Tracking method and apparatus for smart glasses, smart glasses and storage medium
CN111176439A (en) * 2019-11-19 2020-05-19 广东小天才科技有限公司 Reading control method based on visual tracking, intelligent glasses and system
CN115268811A (en) * 2022-06-24 2022-11-01 安徽宝信信息科技有限公司 Interactive display device for screen
CN115268811B (en) * 2022-06-24 2023-01-31 安徽宝信信息科技有限公司 Interactive display device for screen

Similar Documents

Publication Publication Date Title
CN208722145U Smart glasses focus tracking device and smart glasses
CN208689267U Smart glasses focus tracking device and smart glasses
US10395116B2 (en) Dynamically created and updated indoor positioning map
JP5024067B2 (en) Face authentication system, method and program
CN104094590B (en) Method and apparatus for unattended image capture
Schoop et al. Hindsight: enhancing spatial awareness by sonifying detected objects in real-time 360-degree video
US20050208457A1 (en) Digital object recognition audio-assistant for the visually impaired
Manduchi et al. The last meter: blind visual guidance to a target
US20220309836A1 (en) Ai-based face recognition method and apparatus, device, and medium
JP2013509654A (en) Sensor-based mobile search, related methods and systems
US10810541B2 (en) Methods for pick and put location verification
CN104170368B (en) Method and apparatus about picture material
CN109478227A (en) Calculate the iris in equipment or the identification of other physical feelings
US11397320B2 (en) Information processing apparatus, information processing system, and non-transitory computer readable medium
CN105975550A (en) Examination question search method and device of intelligent device
JP6734602B2 (en) Tracking method and tracking device for smart glasses, smart glasses, and storage media
CN111598065A (en) Depth image acquisition method, living body identification method, apparatus, circuit, and medium
CN110334736A (en) Image-recognizing method, device, electronic equipment and medium
CN109002796A (en) A kind of image-pickup method, device and system and electronic equipment
CN109543563B (en) Safety prompting method and device, storage medium and electronic equipment
KR20160127424A (en) System and Method for Recognizing Blocks using Optical Structure
KR20210108302A (en) Systems and Methods for Pairing Devices Using Visual Recognition
US10860165B2 (en) Tracking method and apparatus for smart glasses, smart glasses and storage medium
CN110955043B (en) Intelligent glasses focus tracking method and device, intelligent glasses and storage medium
CN111563514B (en) Three-dimensional character display method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
GR01 Patent grant