CN110955325A - Intelligent glasses focus tracking method and device, intelligent glasses and storage medium - Google Patents


Info

Publication number
CN110955325A
CN110955325A
Authority
CN
China
Prior art keywords
laser
image
point
camera
user
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811124723.7A
Other languages
Chinese (zh)
Inventor
蔡海蛟 (Cai Haijiao)
冯歆鹏 (Feng Xinpeng)
周骥 (Zhou Ji)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Kunshan Zhaoguan Electronic Technology Co ltd
NextVPU Shanghai Co Ltd
Original Assignee
Kunshan Zhaoguan Electronic Technology Co ltd
NextVPU Shanghai Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Kunshan Zhaoguan Electronic Technology Co ltd, NextVPU Shanghai Co Ltd filed Critical Kunshan Zhaoguan Electronic Technology Co ltd
Priority to CN201811124723.7A priority Critical patent/CN110955325A/en
Priority to PCT/CN2019/107669 priority patent/WO2020063614A1/en
Priority to KR1020207034439A priority patent/KR102242719B1/en
Priority to JP2019175346A priority patent/JP6734602B2/en
Priority to EP19199836.8A priority patent/EP3640840B1/en
Priority to US16/669,919 priority patent/US10860165B2/en
Publication of CN110955325A publication Critical patent/CN110955325A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 3/012: Head tracking input arrangements
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00: Scenes; Scene-specific elements
    • G06V 20/20: Scenes; Scene-specific elements in augmented reality scenes
    • G: PHYSICS
    • G09: EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B: EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B 21/00: Teaching, or communicating with, the blind, deaf or mute
    • G09B 21/001: Teaching or communicating with blind persons
    • G09B 21/006: Teaching or communicating with blind persons using audible presentation of the information

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • General Health & Medical Sciences (AREA)
  • Business, Economics & Management (AREA)
  • Educational Administration (AREA)
  • Educational Technology (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention provides a method and a device for tracking a focus of intelligent glasses, the intelligent glasses and a storage medium, and relates to the field of intelligent equipment.

Description

Intelligent glasses focus tracking method and device, intelligent glasses and storage medium
Technical Field
The invention belongs to the field of intelligent equipment, and particularly relates to a method and a device for tracking a focus of intelligent glasses, the intelligent glasses and a storage medium.
Background
At present, there are smart glasses designed for visually impaired users. The smart glasses can collect image information, and after the user puts them on, they can broadcast the content of the current image to the user, which brings convenience to the life of visually impaired users.
However, the inventors of the present invention found that when using smart glasses, the user cannot actively select the interaction focus of the glasses. The smart glasses therefore usually broadcast all of the information in the image at once, and cannot immediately lock onto the text or object to be recognized; the device can only complete its function according to an instruction in which an external physical pointer (such as a finger) indicates the point to be recognized, which amounts to external intervention in the device.
Consequently, current smart glasses broadcast inefficiently, the user cannot perceive where objects lie in the field of view, and the user experience is poor.
Disclosure of Invention
In view of the above defects or shortcomings in the prior art, it is desirable to provide a smart glasses focus tracking method and apparatus, smart glasses, and a storage medium, so as to improve the broadcasting efficiency of the smart glasses and the user experience.
According to a first aspect of the present invention, there is provided a smart glasses focus tracking method, comprising:
collecting an image and detecting a laser drop point in the image;
identifying an object at a laser landing point in the image;
and broadcasting the object at the laser landing point to a user.
Further, the method also includes:
identifying objects in a set area around a laser falling point in the image;
and broadcasting the objects in the set area around the laser drop point to a user.
Still further, the method further comprises:
when no object exists at the laser drop point or in the surrounding set area in the image, prompting the user to turn the head, or prompting the user that no object exists at the laser drop point or in the surrounding set area in the image.
Further, when there is no object in the laser falling point or the set area around the laser falling point in the image, the method prompts the user to rotate the head, and specifically includes:
and when no object exists at the laser falling point or in the set area around the laser falling point in the image, determining the position of the object in the image, and prompting the user to rotate the head to the position of the object.
Still further, the method further comprises:
when the user rotates the head, the prompt sound is changed according to the degree of the object approaching the laser landing point or the set area around the laser landing point.
Preferably, the laser landing point is disposed in a central region of the image.
Further, the acquiring an image and detecting a laser drop point in the image specifically includes:
collecting an image through a camera, and detecting an infrared laser spot in the image; or
The method comprises the steps of collecting an image through a first camera, detecting the position of a laser drop point through a second camera, and determining the position of the laser drop point in the collected image according to the preset corresponding relation between the first camera and the second camera.
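As a rough illustration (not part of the patent), the preset correspondence between the first and second cameras could be expressed as a calibrated 3x3 homography that maps a pixel seen by the detection camera into the scene camera's image. The homography form is an assumption; the patent does not specify how the correspondence is stored.

```python
import numpy as np

def map_point(homography, point):
    """Map a laser drop point from the second (detection) camera into the
    first (scene) camera's image using a pre-calibrated 3x3 homography."""
    x, y = point
    v = homography @ np.array([x, y, 1.0])
    # Convert from homogeneous back to pixel coordinates.
    return (v[0] / v[2], v[1] / v[2])
```

With an identity homography the point is unchanged; a real calibration would be obtained offline for the fixed camera pair.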
Preferably, before the acquiring the image and detecting the laser drop point in the image, the method further includes:
and determining that the movement speed of the intelligent glasses is less than a set value.
In a second aspect, an embodiment of the present invention further provides an intelligent glasses focus tracking apparatus, including:
the acquisition unit is used for acquiring an image and detecting a laser drop point in the image;
the identification unit is used for identifying an object at a laser falling point in the image;
and the broadcasting unit is used for broadcasting the object at the laser falling point to a user.
In a third aspect, an embodiment of the present invention further provides a pair of smart glasses, including:
a laser emitter for emitting laser rays;
the camera device is used for collecting images and detecting laser falling points in the images;
a processor for identifying an object in the image at which the laser falls;
the voice broadcasting device is used for broadcasting the object at the laser drop point to a user;
the processor is connected with the camera device and the voice broadcasting device.
Further, the imaging apparatus specifically includes:
the first camera is used for collecting images;
the second camera is used for detecting a laser drop point;
and the processor determines the position of the laser falling point detected by the second camera in the image acquired by the first camera according to the preset corresponding relation between the first camera and the second camera.
Further, the direction of the laser ray emitted by the laser emitter points to the central area of the image acquired by the camera device.
Preferably, the laser emitter is specifically an infrared emitter.
Further, the voice broadcast device is specifically an earphone or a loudspeaker.
Further, this smart glasses still includes:
and the inertial sensor component is used for judging the motion state of the intelligent glasses and is connected with the processor.
Still further, the inertial sensor assembly includes one or a combination of:
a speed sensor for determining the movement speed of the smart glasses;
an acceleration sensor for determining the movement acceleration of the smart glasses;
and a gyroscope for determining the angle between the orientation axis of the smart glasses and the vertical direction toward the earth's center.
In a fourth aspect, the present invention also provides a computer-readable storage medium, on which a computer program is stored, the computer program being configured to implement the method according to the first aspect.
The embodiment of the invention provides a method and a device for tracking a focus of intelligent glasses, the intelligent glasses and a storage medium.
It should be understood that the above description is only an overview of the technical solutions of the present invention, so as to clearly understand the technical means of the present invention, and thus can be implemented according to the content of the description. In order to make the aforementioned and other objects, features and advantages of the present invention comprehensible, embodiments of the present invention are described below.
Drawings
The advantages and benefits described herein, as well as other advantages and benefits, will be apparent to those of ordinary skill in the art upon reading the following detailed description of the exemplary embodiments. The drawings are only for purposes of illustrating exemplary embodiments and are not to be construed as limiting the invention. Also, like reference numerals are used to refer to like elements throughout. In the drawings:
fig. 1 is a flowchart of a method for tracking a focus of smart glasses according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a laser spot position and a set area around the laser spot position in an image according to an embodiment of the present invention;
fig. 3 is a flowchart of a method for tracking a focus of smart glasses according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of an intelligent glasses focus tracking device according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of smart glasses according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of smart glasses in an embodiment according to the present invention;
fig. 7 is a schematic structural diagram of smart glasses in another embodiment according to an embodiment of the present invention.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
In the present invention, it is to be understood that terms such as "including" or "having," or the like, are intended to indicate the presence of the disclosed features, numbers, steps, behaviors, components, parts, or combinations thereof, and are not intended to preclude the possibility of the presence of one or more other features, numbers, steps, behaviors, components, parts, or combinations thereof.
It should be noted that the embodiments and features of the embodiments may be combined with each other without conflict. The present invention will be described in detail below with reference to the embodiments with reference to the attached drawings.
In fig. 1, a method for tracking a focus of smart glasses according to an embodiment of the present invention is shown, where the method includes:
s101, collecting an image and detecting a laser drop point in the image;
s102, identifying an object at a laser falling point in the image;
and step S103, broadcasting the object at the laser landing point to a user.
Through this smart glasses focus tracking method, the object detected at the laser drop point is broadcast to the user instead of every object in the image, which improves the broadcasting efficiency of the smart glasses and the user experience.
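The three steps S101 to S103 can be sketched as a single loop. This is an illustrative sketch only; `capture_image`, `detect_laser_point`, `identify_object`, and `announce` are hypothetical placeholders for the camera, the drop point detector, the recognizer, and the voice output, none of which are named in the patent.

```python
def track_focus(capture_image, detect_laser_point, identify_object, announce):
    """Capture a frame, locate the laser drop point, identify what is
    there, and broadcast the result to the user."""
    image = capture_image()
    point = detect_laser_point(image)      # (x, y) pixel coordinates
    label = identify_object(image, point)  # e.g. "door", "text", or None
    if label is not None:
        announce(f"{label} at focus")      # step S103: broadcast to user
    return label
```

In a real device this loop would run whenever the glasses are stable enough to capture a frame.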
The object in step S102 may be any object or a specified object defined in advance.
Because the laser drop point may not land accurately on a small or distant object, an area can additionally be set around the laser drop point, and the objects in that area identified and broadcast directly, thereby further improving the user experience.
At this time, the method further includes:
identifying objects in a set area around a laser falling point in the image;
and broadcasting objects in the set area around the laser drop point to the user.
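A minimal sketch of the surrounding-area test, assuming a circular set area (the patent also allows a rectangular one); the radius is a configuration value, not something the patent fixes:

```python
def in_focus_region(obj_center, laser_point, radius):
    """True if the object's center lies within a circular set area of the
    given radius (in pixels) around the laser drop point."""
    dx = obj_center[0] - laser_point[0]
    dy = obj_center[1] - laser_point[1]
    # Compare squared distances to avoid a square root.
    return dx * dx + dy * dy <= radius * radius
```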
Further, it may be difficult for a visually impaired user to align the focus of the smart glasses with the target object. In that case, when there is no object at the laser drop point or in the set area around it in the image, the user can be prompted to turn the head, or be told that there is no object at the laser drop point or in the set area around it.
Generally, when no object exists at the laser drop point or in the surrounding set area in the image, the user can be prompted to turn the head, or a voice prompt that no object exists at the laser drop point or in the surrounding set area can be broadcast, after which the user can turn the head freely to search for an object. Alternatively, the user can be prompted to turn the head when no object is recognized anywhere in the image, or a voice prompt that no object exists in the image area can be broadcast, after which the user can likewise turn the head freely to search for an object.
Furthermore, when no object exists at the laser falling point or in the set area around the laser falling point in the image, the position of the object in the image is determined, and the user is prompted to rotate the head to the position of the object. For example, if there is a text or a two-dimensional code on the left side of the laser drop point in the image, it may be prompted that there is a text or a two-dimensional code on the left side of the laser drop point for the user to turn the head to the left.
When the user turns the head, the prompt sound can change according to how close the object is to the laser drop point or to the set area around it, which helps a visually impaired user judge how far to turn and position the focus more accurately. The user may turn the head toward the prompted direction of the object, or turn the head freely to search for an object.
For example, the closer the object is to the laser drop point or the set area around it in the image, the more rapid the prompt sound may become, making it easy for the user to judge whether the head is turning in the right direction and by the right amount. A distinct voice prompt can also be used to tell the user when the object has reached the laser drop point or the surrounding set area.
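One plausible way to realize "the closer, the more rapid" is to map the object's pixel distance from the drop point to a beep interval. This is a sketch under assumed values; the distance scale and the interval bounds are illustrative, not from the patent.

```python
def beep_interval_ms(distance_px, max_distance_px=400.0,
                     slowest_ms=800, fastest_ms=100):
    """Map the object's distance from the laser drop point (in pixels) to
    a beep interval: the closer the object, the more rapid the beeps."""
    ratio = min(distance_px / max_distance_px, 1.0)  # clamp far objects
    return fastest_ms + (slowest_ms - fastest_ms) * ratio
```

A playback loop would re-measure the distance each frame and sleep for the returned interval between beeps.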
According to the usage habits of most users, the laser drop point is preferably arranged in the central area of the image. Because the angles of the camera and the laser emitter are relatively fixed, the coordinates of the drop point in the image may differ slightly between near and far objects; as long as the drop point stays as close as possible to the central area of the image, the user will have a good experience.
As shown in fig. 2, the laser drop point may be located at the center of the image, and the area around it may be a circular or rectangular region covering roughly 1/3 to 1/2 of the image area. Those skilled in the art can adjust the initial settings of the drop point and the surrounding region according to the actual situation, and the user can also adjust them according to personal usage habits.
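Assuming a rectangular set area sized as a fraction of the whole image (the patent suggests 1/3 to 1/2), the region could be computed as follows; the clamping to the image bounds is an added assumption for drop points near the edge:

```python
import math

def focus_rect(image_w, image_h, center, area_fraction=1 / 3):
    """Rectangle centered on the laser drop point whose area is the given
    fraction of the whole image, clamped to the image bounds."""
    scale = math.sqrt(area_fraction)       # side scale giving that area
    w, h = image_w * scale, image_h * scale
    cx, cy = center
    left = max(0.0, cx - w / 2)
    top = max(0.0, cy - h / 2)
    right = min(float(image_w), cx + w / 2)
    bottom = min(float(image_h), cy + h / 2)
    return left, top, right, bottom
```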
When a visible laser is used, a visible red spot appears at the drop point. The position of the spot can then be identified directly in the captured image to determine the drop point, so the image capture and spot detection of step S101 can be completed with a single camera. However, visible light may disturb other people in the environment, so light invisible to the naked eye, such as infrared, is preferable. In that case, the image can be captured by a first camera while a second camera detects the position of the laser drop point, and the drop point's position in the captured image is determined according to the preset correspondence between the first camera and the second camera.
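For the single-camera case with a visible laser spot, a crude detector might simply pick the pixel where the red channel most exceeds the other channels. This is an illustrative sketch only; a real detector would need thresholding and robustness against red objects in the scene.

```python
import numpy as np

def find_red_spot(rgb):
    """Locate the visible laser spot in an H x W x 3 RGB array as the
    pixel with the highest 'redness' (red minus mean of green and blue)."""
    r = rgb[..., 0].astype(np.int32)
    g = rgb[..., 1].astype(np.int32)
    b = rgb[..., 2].astype(np.int32)
    redness = r - (g + b) // 2
    y, x = np.unravel_index(np.argmax(redness), redness.shape)
    return int(x), int(y)  # (x, y) pixel coordinates
```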
Preferably, before step S101, it may be further determined that the movement speed of the smart glasses is less than a set value, so as to avoid interference to the user caused by capturing images during the process of rotating the head of the user.
In a preferred embodiment, the intelligent glasses focus tracking method provided in the embodiment of the present invention, as shown in fig. 3, includes:
s301, acquiring an image through a first camera, and detecting a laser drop point position through a second camera;
step S302, determining the position of a laser drop point in the acquired image according to the preset corresponding relation between the first camera and the second camera;
step S303, judging whether an object exists at the laser landing point, if so, executing step S304, otherwise, executing step S306;
s304, identifying an object at a laser landing point in the image;
step S305, broadcasting the object at the laser landing point to a user;
step S306, judging whether an object exists in the whole image area, if so, executing step S307, otherwise, executing step S308;
s307, determining the position relation between the object in the image and the laser drop point, and prompting a user to rotate the head to the position of the object;
and step S308, prompting the user that no object exists in the image or prompting the user to rotate the head.
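The decision flow of steps S303 to S308 can be sketched as follows; the callables are hypothetical stand-ins for the detection and voice components, not names from the patent:

```python
def focus_flow(detect_object_at, detect_object_anywhere, announce, prompt):
    """Decision flow of fig. 3: broadcast the object at the drop point if
    present; otherwise direct the user toward an object elsewhere in the
    frame, or report that the frame is empty."""
    obj = detect_object_at()                  # steps S303/S304
    if obj is not None:
        announce(obj)                         # step S305
        return "announced"
    elsewhere = detect_object_anywhere()      # step S306
    if elsewhere is not None:
        prompt(f"turn toward {elsewhere}")    # step S307
        return "prompted"
    prompt("no object in view")               # step S308
    return "empty"
```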
In step S301, the smart glasses should generally be in a relatively stable state when the camera captures an image; if the glasses are moving at high speed, no image is captured. Whether the user is turning the head can be determined from the movement speed and acceleration of the smart glasses.
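The stability gate described above might be sketched as follows; the threshold value and the idea of checking a short window of recent sensor samples are illustrative assumptions:

```python
def glasses_stable(speeds, threshold=0.15):
    """Return True when the recent motion-speed samples (e.g. from the
    inertial sensor, in m/s) are all below the set value, indicating the
    wearer's head is steady enough to capture an image."""
    return all(s < threshold for s in speeds)
```

Capture would then be skipped while `glasses_stable(...)` is False, avoiding broadcasts triggered mid-turn.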
In step S303, the check may cover only whether an object is present at the laser drop point, or whether an object is present at the laser drop point and in the set area around it.
In step S308, after the user is prompted that no object exists in the image, the user can optionally rotate the head to search for the object according to the prompt.
The prompt function of steps S307 and S308 can be turned on or off by the user. If it is turned off and no broadcast arrives, the user can conclude that there is no object in the image and turn the head freely to search for one.
With these smart glasses, a user who wants to learn about the environment can obtain object information in a specific direction by turning the head. For example, when a visually impaired user enters a hall and wants to know its layout, turning the head reveals what lies in each direction. For special information such as text or a two-dimensional code: if text is at the laser drop point, it is broadcast directly; if the text is to the left of the drop point, the glasses prompt that there is text on the left, the user can turn the head to the left, and the text is broadcast once it reaches the drop point.
The position of the laser drop point may be marked in the image or not, according to the user's settings. For a wearable device (such as VR) used by a sighted user, a cross can be drawn at the set focus position of the image, as shown in fig. 2, to indicate that the point is the laser drop point, i.e. the current image focus, which facilitates judgment and adjustment; for a visually impaired user, the mark can be omitted.
Similarly, the set area around the laser drop point may be marked or not, according to the user's settings. For a wearable device (such as VR) used by a sighted user, the set area can be outlined with a distinctive frame, for example a red frame, to indicate the visual focus area of the image, which facilitates judgment and adjustment; for a visually impaired user, the mark can be omitted.
When feature information (such as text or a two-dimensional code) is present at the laser drop point in the image, it can be broadcast directly, and the broadcast range diffuses outward from that position to 1/3 to 1/2 of the image area (the proportion is adjustable).
An embodiment of the present invention further provides an intelligent glasses focus tracking apparatus, as shown in fig. 4, including:
the acquisition unit 401 is used for acquiring an image and detecting a laser drop point in the image;
an identifying unit 402 for identifying an object at a laser landing point in the image;
and the broadcasting unit 403 is used for broadcasting the object at the laser landing point to the user.
Further, the identifying unit 402 is further configured to:
identifying objects in a set area around a laser falling point in the image;
broadcast unit 403 is also used to:
and broadcasting objects in the set area around the laser drop point to the user.
Further, the broadcasting unit 403 is also configured to: when no object exists at the laser falling point or in the set area around the laser falling point in the image, the user is prompted to rotate the head.
Further, when the broadcasting unit 403 prompts the user to turn the head because no object exists at the laser drop point or in the surrounding set area in the image, this specifically includes:
and when no object exists at the laser falling point or in the set area around the laser falling point in the image, determining the position of the object in the image, and prompting the user to rotate the head to the position of the object.
Further, the broadcasting unit 403 is also configured to:
when the user rotates the head according to the prompted orientation of the object, the prompting sound is changed according to the degree of the object approaching the laser falling point or the set area around the laser falling point.
Preferably, the laser landing point is disposed in a central region of the image.
Further, the acquisition unit 401 is specifically configured to:
collecting an image through a camera, and detecting an infrared laser spot in the image; or
The method comprises the steps of collecting an image through a first camera, detecting the position of a laser drop point through a second camera, and determining the position of the laser drop point in the collected image according to the preset corresponding relation between the first camera and the second camera.
Preferably, the acquisition unit 401 is further configured to:
before the image is collected and the laser falling point in the image is detected, the movement speed of the intelligent glasses is determined to be smaller than a set value.
An embodiment of the present invention further provides a pair of smart glasses, as shown in fig. 5, including:
a laser transmitter 501 for transmitting laser rays;
a camera 502 for collecting images and detecting laser drop points in the images;
a processor 503 for identifying an object in the image at which the laser falls;
a voice broadcasting device 504 for broadcasting the object at the laser drop point to the user;
the processor 503 is connected to the image pickup device 502 and the voice broadcast device 504.
The laser landing point detected by the camera 502 is the landing point of the laser emitted by the laser emitter 501.
Further, as shown in fig. 6, the image pickup device 502 specifically includes:
the first camera 5021 is used for collecting images;
the second camera 5022 is used for detecting a laser drop point;
the processor 503 determines the position of the laser landing point detected by the second camera 5022 in the image acquired by the first camera 5021 according to the preset corresponding relationship between the first camera 5021 and the second camera 5022.
Further, the laser emitter 501 emits a laser beam in a direction toward the center area of the image captured by the camera 502.
Further, the laser emitter 501 is specifically an infrared emitter.
Further, the voice broadcasting device 504 is specifically an earphone or a speaker.
Further, as shown in fig. 7, the smart glasses further include:
an inertial sensor component 505 for determining the motion state of the smart glasses, the inertial sensor component 505 being connected to the processor 503.
Further, the inertial sensor assembly 505 may include one or a combination of:
a speed sensor for determining the movement speed of the smart glasses;
an acceleration sensor for determining the movement acceleration of the smart glasses;
and a gyroscope for determining the angle between the orientation axis of the smart glasses and the vertical direction toward the earth's center.
Through the smart glasses focus tracking method and apparatus, the smart glasses, and the storage medium described above, the object at the laser drop point in the field of view can be broadcast to the user, avoiding broadcasting every object in the field of view; the user can further be prompted with the direction of an object, making it easy to track, which improves the user experience.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of methods, apparatus, and computer-readable storage media according to various embodiments of the present disclosure. It should be noted that the steps represented by each block in the flow chart are not necessarily performed in the order shown by the reference numerals, and may sometimes be performed substantially in parallel, or may sometimes be performed in the reverse order, depending on the functions involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by hardware for performing the specified functions or acts, or combinations of hardware and computer instructions.
The units or modules described in the embodiments of the present disclosure may be implemented by software or hardware.
Through the above description of the embodiments, those skilled in the art will clearly understand that the embodiments may be implemented by software plus a necessary general hardware platform, and may also be implemented by hardware. Based on the understanding, the above technical solutions may be essentially or partially implemented in the form of software products, which may be stored in a computer readable storage medium, such as ROM/RAM, magnetic disk, optical disk, etc., and include instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the methods described in the embodiments or some parts of the embodiments.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (17)

1. A method for intelligent glasses focus tracking, comprising:
collecting an image and detecting a laser drop point in the image;
identifying an object at a laser landing point in the image;
and broadcasting the object at the laser landing point to a user.
2. The method of claim 1, further comprising:
identifying objects in a set area around a laser falling point in the image;
and broadcasting the objects in the set area around the laser drop point to a user.
3. The method of claim 2, further comprising:
when no object exists at the laser drop point or in the surrounding set area in the image, prompting the user to turn the head, or prompting the user that no object exists at the laser drop point or in the surrounding set area in the image.
4. The method according to claim 3, wherein prompting the user to turn his or her head when no object exists at the laser landing point or within the set area around it in the image specifically comprises:
when no object exists at the laser landing point or within the set area around it in the image, determining the position of an object in the image and prompting the user to turn his or her head toward the position of the object.
5. The method of claim 3, further comprising:
while the user turns his or her head, changing a prompt sound according to how close the object is to the laser landing point or to the set area around it.
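The varying prompt sound of claim 5 can be sketched as a beep whose repetition interval shrinks as the object approaches the laser landing point. The 300-pixel range and the 0.1–1.0 s interval bounds are illustrative assumptions, not values from the patent.

```python
import math

def beep_interval(obj_xy, laser_xy, max_dist=300.0):
    """Seconds between beeps: 1.0 s when far from the landing point,
    shrinking linearly to 0.1 s when the object is on target."""
    d = math.hypot(obj_xy[0] - laser_xy[0], obj_xy[1] - laser_xy[1])
    frac = min(d, max_dist) / max_dist  # 0.0 on target .. 1.0 at/after max_dist
    return 0.1 + 0.9 * frac
```

Faster beeps as the interval shortens give the user continuous feedback that the head turn is converging on the object.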
6. The method of claim 1, wherein the laser landing point is disposed in a central region of the image.
7. The method of claim 1, wherein collecting the image and detecting the laser landing point in the image specifically comprises:
collecting the image through a camera and detecting an infrared laser spot in the image; or
collecting the image through a first camera, detecting the position of the laser landing point through a second camera, and determining the position of the laser landing point in the collected image according to a preset correspondence between the first camera and the second camera.
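The "preset correspondence between the first camera and the second camera" in claim 7 is commonly realized as a planar homography obtained by calibration. A minimal sketch of applying such a 3x3 mapping to the point detected by the second camera follows; the translation-only matrix is a made-up calibration for illustration, not a value from the patent.

```python
def map_point(H, pt):
    """Apply a 3x3 homography H to a 2D point (projective mapping)."""
    x, y = pt
    xh = H[0][0] * x + H[0][1] * y + H[0][2]
    yh = H[1][0] * x + H[1][1] * y + H[1][2]
    w  = H[2][0] * x + H[2][1] * y + H[2][2]
    return (xh / w, yh / w)

# Hypothetical calibration: the second camera's frame is offset from the
# first camera's frame by (+5, -3) pixels, with no rotation or scale.
H = [[1.0, 0.0,  5.0],
     [0.0, 1.0, -3.0],
     [0.0, 0.0,  1.0]]

mapped = map_point(H, (10.0, 10.0))
```

In practice such a matrix would be estimated once from matched point pairs (e.g. with OpenCV's `cv2.findHomography`) and stored as the preset correspondence.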
8. The method of claim 1, wherein, before collecting the image and detecting the laser landing point in the image, the method further comprises:
determining that the movement speed of the smart glasses is less than a set value.
9. A focus tracking apparatus for smart glasses, comprising:
a collecting unit configured to collect an image and detect a laser landing point in the image;
an identifying unit configured to identify an object at the laser landing point in the image;
and a broadcasting unit configured to broadcast, to a user, the object at the laser landing point.
10. Smart glasses, comprising:
a laser emitter for emitting a laser ray;
a camera device for collecting an image and detecting a laser landing point in the image;
a processor for identifying an object at the laser landing point in the image;
and a voice broadcasting device for broadcasting, to a user, the object at the laser landing point;
wherein the processor is connected to the camera device and to the voice broadcasting device.
11. The smart glasses of claim 10, wherein the camera device specifically comprises:
a first camera for collecting the image;
and a second camera for detecting the laser landing point;
wherein the processor determines, according to a preset correspondence between the first camera and the second camera, the position in the image collected by the first camera of the laser landing point detected by the second camera.
12. The smart glasses of claim 10, wherein the laser emitter emits the laser ray toward a central region of the image collected by the camera device.
13. The smart glasses of claim 10, wherein the laser emitter is specifically an infrared emitter.
14. The smart glasses of claim 10, wherein the voice broadcasting device is an earphone or a loudspeaker.
15. The smart glasses of claim 10, further comprising:
an inertial sensor assembly for determining a motion state of the smart glasses, the inertial sensor assembly being connected to the processor.
16. The smart glasses of claim 15, wherein the inertial sensor assembly comprises one of, or a combination of:
a speed sensor for determining the movement speed of the smart glasses;
an acceleration sensor for determining the movement acceleration of the smart glasses;
and a gyroscope for determining information on the angle between the orientation axis of the smart glasses and the vertical direction toward the earth's center.
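Claims 8, 15 and 16 together gate image capture on the glasses being near-still. A sketch: integrate accelerometer samples to a rough speed estimate and compare it against a set value. The sample interval, the 0.3 m/s threshold, and the simple rectangular integration are illustrative assumptions.

```python
def estimate_speed(accel_samples, dt):
    """Crude speed estimate: rectangular integration of acceleration
    (m/s^2) sampled every dt seconds, starting from rest."""
    v = 0.0
    for a in accel_samples:
        v += a * dt
    return v

def should_capture(speed, threshold=0.3):
    """Capture an image only when the estimated speed (m/s) is below
    the set value, per the precondition of claim 8."""
    return speed < threshold
```

A dedicated speed sensor (claim 16) would replace the integration step and feed `should_capture` directly.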
17. A computer-readable storage medium having a computer program stored thereon, wherein the computer program is used to implement the method according to any one of claims 1 to 8.
CN201811124723.7A 2018-09-26 2018-09-26 Intelligent glasses focus tracking method and device, intelligent glasses and storage medium Pending CN110955325A (en)

Priority Applications (6)

Application Number Priority Date Filing Date Title
CN201811124723.7A CN110955325A (en) 2018-09-26 2018-09-26 Intelligent glasses focus tracking method and device, intelligent glasses and storage medium
PCT/CN2019/107669 WO2020063614A1 (en) 2018-09-26 2019-09-25 Smart glasses tracking method and apparatus, and smart glasses and storage medium
KR1020207034439A KR102242719B1 (en) 2018-09-26 2019-09-25 Smart glasses tracking method and device, and smart glasses and storage media
JP2019175346A JP6734602B2 (en) 2018-09-26 2019-09-26 Tracking method and tracking device for smart glasses, smart glasses, and storage media
EP19199836.8A EP3640840B1 (en) 2018-09-26 2019-09-26 Tracking method and apparatus for smart glasses, smart glasses and storage medium
US16/669,919 US10860165B2 (en) 2018-09-26 2019-10-31 Tracking method and apparatus for smart glasses, smart glasses and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811124723.7A CN110955325A (en) 2018-09-26 2018-09-26 Intelligent glasses focus tracking method and device, intelligent glasses and storage medium

Publications (1)

Publication Number Publication Date
CN110955325A true CN110955325A (en) 2020-04-03

Family

ID=69964719

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811124723.7A Pending CN110955325A (en) 2018-09-26 2018-09-26 Intelligent glasses focus tracking method and device, intelligent glasses and storage medium

Country Status (1)

Country Link
CN (1) CN110955325A (en)

Similar Documents

Publication Publication Date Title
JP6303297B2 (en) Terminal device, gaze detection program, and gaze detection method
KR20200011405A (en) Systems and Methods for Driver Monitoring
US20210264210A1 (en) Learning data collection device, learning data collection system, and learning data collection method
CN208722145U (en) A kind of intelligent glasses Focus tracking device and intelligent glasses
WO2020063614A1 (en) Smart glasses tracking method and apparatus, and smart glasses and storage medium
CN103765374A (en) Interactive screen viewing
CN107005655A (en) Image processing method
CN112666705A (en) Eye movement tracking device and eye movement tracking method
CN111583343B (en) Visual positioning method, related device, equipment and storage medium
CN108351689B (en) Method and system for displaying a holographic image of an object in a predefined area
CN208689267U (en) A kind of intelligent glasses Focus tracking device and intelligent glasses
JP6221292B2 (en) Concentration determination program, concentration determination device, and concentration determination method
WO2019021601A1 (en) Information processing device, information processing method, and program
CN112255239B (en) Pollution position detection method, device, equipment and computer readable storage medium
KR101395388B1 (en) Apparatus and method for providing augmented reality
US11817900B2 (en) Visible light communication detecting and/or decoding
CN110955325A (en) Intelligent glasses focus tracking method and device, intelligent glasses and storage medium
CN110955043A (en) Intelligent glasses focus tracking method and device, intelligent glasses and storage medium
CN105572869A (en) Intelligent glasses and control method of the intelligent glasses
EP3139586B1 (en) Image shooting processing method and device
CN107958478B (en) Rendering method of object in virtual reality scene and virtual reality head-mounted equipment
US11335177B2 (en) Detection of objects based on change in wireless signal
CN110758237A (en) Electronic device and driving safety reminding method
WO2019239459A1 (en) Driving support device, driving support system, and driving support method
CN106445090B (en) Method and device for controlling cursor and input equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination