CN109255314B - Information prompting method and device, intelligent glasses and storage medium - Google Patents


Info

Publication number
CN109255314B
CN109255314B CN201811001748.8A
Authority
CN
China
Prior art keywords
target object
information
level
image data
user
Prior art date
Legal status
Active
Application number
CN201811001748.8A
Other languages
Chinese (zh)
Other versions
CN109255314A (en)
Inventor
魏苏龙
林肇堃
麦绮兰
Current Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201811001748.8A priority Critical patent/CN109255314B/en
Publication of CN109255314A publication Critical patent/CN109255314A/en
Application granted granted Critical
Publication of CN109255314B publication Critical patent/CN109255314B/en


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/20Scenes; Scene-specific elements in augmented reality scenes
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/017Head mounted
    • G02B2027/0178Eyeglass type

Abstract

Embodiments of the present application disclose an information prompting method and device, smart glasses, and a storage medium. The method comprises: acquiring, when an information-prompt start instruction is detected, image data collected in real time by a camera, where the camera is integrated on the smart glasses, the smart glasses are worn on the user's head, and the camera captures images within the user's line of sight; identifying the image data to determine a target object and a corresponding information prompt level; and generating and issuing prompt information according to the information prompt level and the target object. The scheme improves the usability of the smart glasses and the efficiency with which the user obtains information.

Description

Information prompting method and device, intelligent glasses and storage medium
Technical Field
Embodiments of the present application relate to the field of wearable devices, and in particular to an information prompting method and device, smart glasses, and a storage medium.
Background
With the development of computing devices and the advancement of internet technology, users interact with smart devices ever more frequently, for example watching films and television shows on a smartphone, watching television programs on a smart television, and checking text messages and physical-sign parameters on a smart watch.
Smart glasses are among the smart devices popular with users, and their functions are increasingly powerful, bringing convenience to users' daily lives. In the prior art, however, the information prompting function of smart glasses is imperfect and needs improvement.
Disclosure of Invention
The present application provides an information prompting method and device, smart glasses, and a storage medium, which improve the usability of the smart glasses and the efficiency with which a user obtains information.
In a first aspect, an embodiment of the present application provides an information prompting method, including:
when an information-prompt start instruction is detected, acquiring image data collected in real time by a camera, wherein the camera is integrated on smart glasses, the smart glasses are worn on a user's head, and the camera is used for capturing images within the user's line of sight;
identifying the image data to determine a target object and a corresponding information prompt level;
and generating and issuing prompt information according to the information prompt level and the target object.
In a second aspect, an embodiment of the present application further provides an information prompting apparatus, including:
the data acquisition module is used for acquiring, when an information-prompt start instruction is detected, image data collected in real time by a camera, wherein the camera is integrated on smart glasses, the smart glasses are worn on a user's head, and the camera is used for capturing images within the user's line of sight;
the data identification module is used for identifying the image data to determine a target object and a corresponding information prompt level;
and the information prompting module is used for generating and issuing prompt information according to the information prompt level and the target object.
In a third aspect, an embodiment of the present application further provides a pair of smart glasses, including a processor, a memory, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the computer program, implements the information prompting method according to the embodiments of the present application.
In a fourth aspect, the present application further provides a storage medium containing instructions executable by smart glasses, which, when executed by a processor of the smart glasses, perform the information prompting method according to the present application.
In this scheme, when an information-prompt start instruction is detected, image data collected by the camera is acquired in real time; the camera is integrated on the smart glasses, the smart glasses are worn on the user's head, and the camera captures images within the user's line of sight. The image data is identified to determine a target object and a corresponding information prompt level, and prompt information is generated and issued according to the information prompt level and the target object.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings in which:
fig. 1 is a flowchart of an information prompting method provided in an embodiment of the present application;
fig. 2 is a flowchart of another information prompting method provided in the embodiment of the present application;
fig. 3 is a flowchart of another information prompting method provided in the embodiment of the present application;
fig. 4 is a flowchart of another information prompting method provided in the embodiment of the present application;
fig. 5 is a flowchart of another information prompting method provided in the embodiment of the present application;
fig. 6 is a flowchart of another information prompting method provided in the embodiment of the present application;
fig. 7 is a block diagram illustrating a structure of an information presentation apparatus according to an embodiment of the present application;
fig. 8 is a schematic structural diagram of smart glasses provided in an embodiment of the present application;
fig. 9 is a schematic physical diagram of smart glasses provided in an embodiment of the present application.
Detailed Description
The present application will be described in further detail below with reference to the drawings and embodiments. It should be understood that the specific embodiments described herein are intended to illustrate, not limit, the application. It should further be noted that, for convenience of description, the drawings show only some, not all, of the structures related to the present application.
Fig. 1 is a flowchart of an information prompting method provided in an embodiment of the present application. The method is applicable to giving information prompts and may be executed by the smart glasses provided in an embodiment of the present application; the information prompting device of the smart glasses may be implemented in software and/or hardware. As shown in fig. 1, the specific scheme provided by this embodiment is as follows:
step S101, when an information prompt opening instruction is detected, image data collected by a camera is obtained in real time, the camera is integrated on intelligent glasses, the intelligent glasses are worn on the head of a user, and the camera is used for collecting images in the sight range of the user.
The camera may be integrated into the frame of the smart glasses so that, while the glasses are worn, it captures images of the area in front of them, i.e., within the user's field of view. The information-prompt start instruction is an instruction that triggers acquisition of the image data collected by the camera; it may be triggered by a preset voice command (for example, the user saying "start prompt function"), a gesture command (for example, a specific head movement such as nodding 2 or 3 times in succession), a touch command (for example, a double tap on a touch panel integrated on the smart glasses), and so on. The user may wear the smart glasses in daily life or in a specific scene in which prompt information is needed.
The smart glasses may be a wearable smart device with an independent system, integrating a memory and a processor and providing data receiving, processing, and output functions, as well as hardware such as a display screen and sensors. For example, after a user wears the smart glasses at home and triggers the information-prompt start instruction, prompt information is delivered automatically while the glasses are worn.
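By way of illustration only, the instruction detection and real-time acquisition described for step S101 could be sketched in Python as follows. The command strings and the `camera_read` callable are hypothetical stand-ins, since the patent does not specify the device interfaces:

```python
# Hypothetical start instructions: voice, gesture, and touch triggers (sketch only).
START_COMMANDS = {"voice:start prompt function", "gesture:nod x2", "touch:double tap"}

def should_start_prompting(event: str) -> bool:
    """Return True when a detected event matches a configured start instruction."""
    return event in START_COMMANDS

def acquire_frames(camera_read, max_frames=3):
    """Collect frames from the camera until it stops returning images.

    `camera_read` stands in for the glasses' frame-grabbing call; a real
    device would loop continuously rather than stop after `max_frames`.
    """
    frames = []
    for _ in range(max_frames):
        frame = camera_read()
        if frame is None:
            break
        frames.append(frame)
    return frames
```

A production implementation would run the acquisition loop on its own thread and feed each frame to the recognition step.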
Step S102: the image data is identified to determine a target object and a corresponding information prompt level.
In one embodiment, since the smart glasses are worn over the user's eyes, the image data acquired by the camera in real time corresponds to what the user observes; for example, when the user wears the device at home during daily activities (watching TV, cleaning, cooking, etc.), the images within the user's line of sight can be captured. The image data is identified to determine a target object, i.e., an article contained in the current image data recognized by an image recognition algorithm, such as household articles like a television, a refrigerator, or a kettle; the specific algorithm used may be any image recognition algorithm in the prior art. Different target objects correspond to different information prompt levels. In one embodiment, the target object may be unhealthy food or an object posing a potential hazard (e.g., an electric kettle or kitchen ware), and accordingly the information prompt level may be divided into three levels according to the hazard level or health impact of the target object, with the first level being the highest hazard level or the highest health level.
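A minimal sketch of step S102's mapping from recognized objects to prompt levels, assuming the three-tier scheme described above (level 1 = highest hazard). The object names, their levels, and the recognizer's label output are illustrative assumptions:

```python
# Assumed object-to-level table; in practice this would be configured or learned.
OBJECT_LEVELS = {"electric kettle": 1, "knife": 1, "refrigerator": 3, "television": 3}

def identify(image_labels):
    """Map labels produced by an image recognition algorithm to
    (target object, information prompt level) pairs.

    Objects absent from the table yield no prompt and are skipped.
    """
    return [(obj, OBJECT_LEVELS[obj]) for obj in image_labels if obj in OBJECT_LEVELS]
```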
In one embodiment, identifying the image data to determine the target object and the corresponding information prompt level comprises: acquiring preset age data, determining whether the age data meets a preset condition, and, if so, identifying the image data to determine the target object and the corresponding information prompt level. The age data may be set by the user, for example the age of a child at home, such as 2, 3, 5, or 8 years. The preset condition may be that the age data is less than or equal to 10 years; that is, the image data is identified to obtain the target object and the corresponding information prompt level only when the condition is met.
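The age precondition can be expressed directly. The threshold of 10 years follows the example condition above; treating missing age data as a non-match is an added assumption:

```python
AGE_THRESHOLD = 10  # years; "less than or equal to 10" per the example condition

def should_identify(age_years):
    """Gate image identification on the preset age data."""
    return age_years is not None and age_years <= AGE_THRESHOLD
```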
In one embodiment, identifying the image data to obtain the information prompt level of the target object comprises: identifying the image data to obtain the target object and its position, where the position is the position of the target object relative to the ground, and determining the information prompt level according to the target object and the position. Illustratively, if the target object is an electric kettle and the images show it placed on the floor, the information prompt level is determined to be the first level; the information prompt level may be set to three levels according to the degree of potential hazard, the first level being high, the second medium, and the third low. In another embodiment, identifying the image data to obtain the information prompt level of the target object comprises: identifying the image data to obtain the shape of the target object, and determining the corresponding information prompt level according to that shape. Illustratively, when the target object is identified as a knife, the information prompt level is obtained from its determined shape; if the shape is pointed, the level is the first level, the three levels again corresponding to high, medium, and low potential hazard.
In one embodiment, identifying the image data to determine the target object and the corresponding information prompt level comprises: identifying the image data to determine the target object, and querying a preset database to determine the information prompt level corresponding to it, where the preset database records the information prompt levels corresponding to different target objects. In this embodiment, the preset database directly records different target objects and their corresponding information prompt levels; illustratively, the level corresponding to the target object "knife" is the first level. The information prompt level may be set to three levels according to the degree of potential hazard, the first level being high, the second medium, and the third low.
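The database query amounts to a keyed lookup. A sketch with an illustrative table follows; entries other than the first-level "knife" example are assumptions:

```python
# Illustrative preset database: target object -> information prompt level.
PRESET_DB = {"knife": 1, "scissors": 1, "stool": 2, "cushion": 3}

def lookup_level(target_object):
    """Return the recorded prompt level, or None when the object is
    not in the database (in which case no prompt is generated)."""
    return PRESET_DB.get(target_object)
```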
In one embodiment, before the preset database is queried to determine the information prompt level corresponding to the target object, the method further includes: acquiring health data input by the user, and generating, according to that health data, a preset database containing target objects and corresponding information prompt levels. The preset database may thus be generated from different health data input by the user; that is, the target objects and prompt levels recorded in the preset database are associated with the user's health data. Illustratively, if the health data input by the user is "pollen allergy", the target objects recorded in the preset database may be pollen allergens and related articles, and the prompt level may be divided into three levels according to the degree to which the object affects the allergy: first-level target objects have a high degree of influence, second-level a medium degree, and third-level a low degree.
Step S103: prompt information is generated and issued according to the information prompt level and the target object.
In one embodiment, different information prompt levels may correspond to different prompt modes, for example vibration, voice, and on-screen display. Vibration is produced by a vibration motor integrated in a temple of the smart glasses; voice prompts are played through a speaker; and on-screen prompts are content shown on the display screen of the smart glasses, the speaker and display screen both being integrated in the glasses. In one embodiment, when prompt information is shown on the display screen, the name of the target object may be displayed as text together with the corresponding information prompt level, and both may also be played as a voice broadcast. The different information prompt levels characterize the degree of influence of the target object on the user, where the influence includes both positive effects (e.g., healthy food) and negative effects (e.g., articles with potential safety hazards).
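A sketch of step S103's prompt generation. The mapping of levels to channels is an assumed policy built from the modes listed above (vibration, voice, display), not one specified by the text:

```python
# Assumed policy: higher-priority levels use more prompt channels at once.
LEVEL_CHANNELS = {1: "vibrate+voice+screen", 2: "voice+screen", 3: "screen"}

def build_prompt(target_object, level):
    """Compose the prompt text (object name plus level) and pick the
    delivery channels for that level."""
    text = f"Level {level} prompt: {target_object}"
    return LEVEL_CHANNELS[level], text
```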
It follows that, when the user wears the smart glasses, the target object in the current line of sight can be identified automatically and the corresponding information prompt given; reasonable information can thus assist the user in daily life, with high prompting efficiency and simplified operation.
Fig. 2 is a flowchart of another information prompting method provided in an embodiment of the present application. Optionally, identifying the image data to determine the target object and the corresponding information prompt level comprises: acquiring preset age data, determining whether the age data meets a preset condition, and, if so, identifying the image data to determine the target object and the corresponding information prompt level. As shown in fig. 2, the technical solution is as follows:
Step S201: when an information-prompt start instruction is detected, image data collected by the camera is acquired in real time.
Step S202: preset age data is acquired.
In one embodiment, the user may preset corresponding age data, which may be the age of the family member's children, such as 1 year, 3 years, 5 years, etc.
Step S203: it is determined whether the age data meets a preset condition; if so, step S204 is executed, otherwise the flow ends.
Before information is prompted, it is determined whether the preset age data meets a preset condition, for example that the age data is less than 10; if the condition is met, the subsequent steps are executed accordingly.
Step S204: the image data is identified to determine a target object and a corresponding information prompt level.
In one embodiment, the target object obtained by identifying the image data corresponds to different information prompt levels, and for the same target object the information prompt level differs under different age data. For example, assuming the information prompt level is divided into three levels by degree of safety, with the first level highly unsafe, the second moderately unsafe, and the third slightly unsafe, the level assigned to a given object decreases as the age increases.
In one embodiment, identifying the image data to determine the target object and the corresponding information prompt level comprises: if an identified target object is recorded in the database determined by the age data, determining its information prompt level accordingly; if it is not recorded in the database, giving no prompt. That is, objects that do not affect the user are not recorded in the database, and objects identified in the image data but absent from the database are filtered out accordingly.
Step S205: prompt information is generated and issued according to the information prompt level and the target object.
In this way, when the user wears the smart glasses, target objects in the environment that may affect a child can be identified automatically and an information prompt given, reducing potential safety hazards.
Fig. 3 is a flowchart of another information prompting method provided in an embodiment of the present application. Optionally, identifying the image data to obtain the information prompt level of the target object comprises: identifying the image data to obtain the target object and its position, where the position is the position of the target object relative to the ground, and determining the information prompt level according to the target object and the position. As shown in fig. 3, the technical solution is as follows:
Step S301: when an information-prompt start instruction is detected, image data collected by the camera is acquired in real time.
Step S302: preset age data is acquired.
Step S303: it is determined whether the age data meets a preset condition; if so, step S304 is executed, otherwise the flow ends.
Step S304: the image data is identified to obtain a target object and its position, where the position is the position of the target object relative to the ground.
In one embodiment, the position of the target object can be determined from the image recognition result, such as whether it is on the ground, a table, a bed, or a windowsill. In another embodiment, the distance of the target object from the ground, typically in the range of 0-2 meters, can be obtained directly from the image data and used as its position; the distance can be measured with existing image-based techniques such as structured-light or binocular ranging.
Step S305: the information prompt level is determined according to the target object and the position.
For example, assuming the information prompt level is divided into three levels by degree of safety, with the first level highly unsafe, the second moderately unsafe, and the third slightly unsafe, an object located on the ground corresponds to the first level, an object on a bed, chair, or the like to the second level, and an object on a table to the third level. Optionally, when the position of the target object is a specific distance value, a distance of 0-0.5 m corresponds to the first level, 0.5-1 m to the second level, and more than 1 m to the third level.
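Using the example distance thresholds just given, the level assignment reduces to a comparison chain. Placing the 0.5 m and 1 m boundaries in the lower band is an assumption, since the text leaves the boundaries open:

```python
def level_from_height(height_m):
    """Map an object's height above the ground (in meters) to a prompt
    level: 0-0.5 m -> level 1, 0.5-1 m -> level 2, above 1 m -> level 3."""
    if height_m <= 0.5:
        return 1
    if height_m <= 1.0:
        return 2
    return 3
```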
Step S306: prompt information is generated and issued according to the information prompt level and the target object.
In this way, the information prompt level is determined according to the position of the target object, so that the prompt given is reasonable and better targeted.
Fig. 4 is a flowchart of another information prompting method provided in an embodiment of the present application. Optionally, identifying the image data to obtain the information prompt level of the target object comprises: identifying the image data to obtain the shape of the target object, and determining the corresponding information prompt level according to that shape. As shown in fig. 4, the technical solution is as follows:
step S401, when an information prompt opening instruction is detected, image data collected by the camera is obtained in real time.
Step S402, acquiring preset age data.
And S403, judging whether the age data meets a preset condition, if so, executing S404, and if not, ending.
And S404, recognizing the image data to obtain the shape of the target object.
The shape of the target object may be circular, triangular, rectangular, and so on. In one embodiment, the target object may be any one or more objects identified in the image data, such as walls, cabinets, or dishes in a room, and recognition determines whether the object has corners or sharp shapes (e.g., the corners of a triangle or rectangle).
Step S405: the corresponding information prompt level is determined according to the shape of the target object.
For example, different shapes of the target object correspond to different information prompt levels. Specifically, the level corresponding to the currently identified target object may be obtained from a recorded mapping table in which different shapes correspond to different levels, such as a pointed shape corresponding to the first level and a prism shape to the second level, the information prompt level again being divided into three levels by degree of safety: the first level highly unsafe, the second moderately unsafe, and the third slightly unsafe.
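The shape mapping table can be sketched as a dictionary. Only the pointed and prism entries come from the text; the third-level fallback for other shapes is an assumption:

```python
SHAPE_LEVELS = {"pointed": 1, "prism": 2}  # entries from the example mapping table
DEFAULT_LEVEL = 3  # assumed fallback for rounded or otherwise safe shapes

def level_from_shape(shape):
    """Return the prompt level recorded for the recognized shape."""
    return SHAPE_LEVELS.get(shape, DEFAULT_LEVEL)
```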
Step S406: prompt information is generated and issued according to the information prompt level and the target object.
The higher the information prompt level, the more conspicuous the corresponding prompt mode.
In this way, the shape of the target object is obtained by recognition and the corresponding information prompt level determined, with different shapes mapping to different safety prompt levels; the reasonable prompts thus given help the user notice adverse factors in the environment.
Fig. 5 is a flowchart of another information prompting method provided in an embodiment of the present application. Optionally, identifying the image data to determine the target object and the corresponding information prompt level comprises: identifying the image data to determine the target object, and querying a preset database to determine the information prompt level corresponding to it, where the preset database records the information prompt levels corresponding to different target objects. Generating and issuing the prompt information comprises: generating the name and an associated description of the target object and displaying them on the display screen of the smart glasses. As shown in fig. 5, the technical solution is as follows:
Step S501: when an information-prompt start instruction is detected, image data collected by the camera is acquired in real time.
Step S502: the image data is identified to determine a target object, and a preset database is queried to determine the information prompt level corresponding to the target object.
In one embodiment, the preset database records various target objects that affect the user's daily life together with their information prompt levels. The preset database may be received from a cloud server, which records and aggregates target objects and corresponding levels uploaded by users and periodically distributes them to the smart glasses.
Step S503: the name and associated description of the target object are generated according to the information prompt level and the target object and displayed on the display screen of the smart glasses.
In one embodiment, when the user wears the smart glasses and the glasses recognize a target object present in the preset database, the name and associated description of that object in the acquired image data are displayed on the display screen; besides the information prompt level, the preset database also records the name and associated description of each target object. The display screen is integrated in the frame of the smart glasses and informs the user, reasonably and in a timely manner, of the target objects identified in the collected image data.
In this method, the preset database contains the target object, the corresponding information prompt level, and the object's name and associated description; by wearing the smart glasses, the user has the acquired image data recognized and the corresponding prompt given automatically, which improves the usability of the smart glasses and the efficiency with which the user obtains information.
Fig. 6 is a flowchart of another information prompting method provided in an embodiment of the present application. Optionally, before the preset database is queried to determine the information prompt level corresponding to the target object, the method further includes: acquiring health data input by the user, and generating, according to that health data, a preset database containing target objects and corresponding information prompt levels. As shown in fig. 6, the technical solution is as follows:
Step S601: when an information-prompt start instruction is detected, image data collected by the camera is acquired in real time.
Step S602: health data input by the user is acquired, and a preset database containing target objects and corresponding information prompt levels is generated according to that health data.
In one embodiment, a number of different target objects and corresponding information prompt levels may be stored in the smart glasses, and the health data input by the user may describe conditions such as "cold" or "bad eating". Different health data correspond to different target objects and information prompt levels; after the health data input by the user is obtained, determining the corresponding preset database comprises: determining the target objects corresponding to the health data through a mapping table, which may be obtained from a server and learned from big data, and forming those objects into the preset database. For example, the target objects corresponding to the health data "cold" may be "cold drink", "air conditioner", and so on, and those corresponding to "bad eating" may be "fruit", "meat product", and so on.
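Generating the preset database from health data can be sketched as merging per-condition tables. The "cold" and "bad eating" entries follow the examples above, while the specific levels and the stricter-level-wins merge rule are assumptions:

```python
# Illustrative mapping table: health condition -> {target object: prompt level}.
HEALTH_TO_OBJECTS = {
    "cold": {"cold drink": 1, "air conditioner": 2},
    "bad eating": {"fruit": 2, "meat product": 1},
}

def build_preset_db(health_entries):
    """Merge the object/level maps for each reported condition into one
    preset database, keeping the stricter (lower-numbered) level on conflict."""
    db = {}
    for condition in health_entries:
        for obj, level in HEALTH_TO_OBJECTS.get(condition, {}).items():
            db[obj] = min(level, db.get(obj, level))
    return db
```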
Step S603, the image data is identified to determine a target object, and a preset database is inquired to determine an information prompt level corresponding to the target object.
Step S604, prompt information is generated according to the information prompt level and the target object, and the prompt is issued.
According to the information prompting method of this embodiment, when the user wears the smart glasses, an information prompt for the corresponding target object can be received according to the user's health condition, thereby assisting the user in making reasonable decisions and avoiding harmful target objects.
Fig. 7 is a block diagram of an information presentation apparatus according to an embodiment of the present application, where the apparatus is configured to execute the information presentation method according to the embodiment, and has functional modules and beneficial effects corresponding to the execution method. As shown in fig. 7, the apparatus specifically includes: a data acquisition module 101, a data identification module 102 and an information prompt module 103, wherein,
the data acquisition module 101 is used for acquiring image data acquired by the camera in real time when an information prompt opening instruction is detected, the camera is integrated on the intelligent glasses, the intelligent glasses are worn on the head of a user, and the camera is used for acquiring images within the sight range of the user.
The camera may be integrated into the frame of the smart glasses so that it captures images of the area in front of the glasses, i.e., within the user's field of view when the glasses are worn. The information prompt opening instruction is an instruction that triggers acquisition of the image data collected by the camera; it may be triggered by a preset voice command (for example, the user says a specific phrase such as "start prompt function"), a gesture command (for example, a specific head movement such as nodding 2 or 3 times in succession), a touch command (for example, the smart glasses integrate a touch panel that detects the user's touch operations, such as two consecutive taps on the touch panel), and the like. The user may wear the smart glasses in daily life or in a specific scene where prompt information is needed.
The intelligent glasses can be wearable intelligent equipment with an independent system, are integrated with a memory and a processor, have data receiving, data processing and data output functions, and are integrated with hardware devices such as a display screen and a sensor. For example, after a user wears the smart glasses at home and triggers an information prompt opening instruction, the user can correspondingly and automatically receive prompt information in the wearing process.
And the data identification module 102 is used for identifying the image data to determine a target object and a corresponding information prompt level.
In one embodiment, since the smart glasses are worn at the user's eyes, the image data collected by the camera in real time corresponds to what the user observes; for example, when the user wears the device at home for daily activities (watching TV, cleaning, cooking, etc.), the image information within the user's line of sight can be collected. The image data is identified to determine a target object, where the target object is an article contained in the current image data and recognized by an image identification algorithm — such as an article visible in a home: a television, a refrigerator, a kettle, and the like. The specific image identification algorithm used may be any image identification algorithm in the prior art. Different target objects correspond to different information prompt levels. In one embodiment, the target object may be unhealthy food or an article posing a potential safety hazard (e.g., an electric kettle or kitchen ware); accordingly, the information prompt level may be divided into three levels according to the hazard level or health level of the target object, where the first level is the highest hazard level or the highest health level.
In one embodiment, identifying the image data to determine the target object and the corresponding information cue level comprises: acquiring preset age data, judging whether the age data meets preset conditions or not, and if so, identifying the image data to determine a target object and a corresponding information prompt level. The age data may be data of a user-entered setting, and may be the age of children at home, such as 2 years, 3 years, 5 years, or 8 years. The preset condition that the age meets the preset condition can be that the age data is less than or equal to 10 years old, namely, corresponding image data identification is carried out under the condition that the condition is met to obtain the target object and the corresponding information prompt level.
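The age-gating condition above (identification only proceeds when the preset age data meets the condition) can be sketched as follows; the 10-year threshold comes from the example in the text, while the function name is an assumption:

```python
def should_identify(age_years, max_age=10):
    """Return True when the preset age data meets the condition
    (age less than or equal to 10 years, per the example above),
    in which case the image data is identified to determine the
    target object and its information prompt level."""
    return age_years <= max_age
```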
In one embodiment, identifying the image data to obtain the information cue level of the target object comprises: identifying the image data to obtain a target object and a position of the target object, wherein the position is the position of the target object relative to the ground; and determining an information prompt level according to the target object and the position. Illustratively, if the target object is an electric kettle, when the electric kettle is determined to be placed on the floor through images, the information prompting level is correspondingly determined to be a first level, wherein the information prompting level can be set to be three levels according to different hidden danger levels, the first level is a high hidden danger level, the second level is a medium hidden danger level, and the third level is a low hidden danger level. In another embodiment, identifying the image data to obtain the information cue level of the target object comprises: identifying the image data to obtain the shape of the target object; and determining the corresponding information prompt level according to the shape of the target object. Illustratively, when the target object is identified as a tool, a corresponding information prompting grade is obtained according to the determined shape of the tool, if the shape is pointed, the corresponding information prompting grade is a first grade, wherein the information prompting grade can be set into three grades according to different hidden danger grades, the first grade is a high hidden danger grade, the second grade is a medium hidden danger grade, and the third grade is a low hidden danger grade.
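The two level-determination rules above — by position relative to the ground and by shape — can be sketched as below. The rules mirror the electric-kettle and pointed-tool examples from the text; the default level of 3 (low hidden danger) for all other cases is an assumption:

```python
def level_from_position(target_object, on_floor):
    """Determine the information prompt level from the target object and
    its position relative to the ground. Level 1 = high hidden danger,
    2 = medium, 3 = low. A kettle placed on the floor is level 1, per
    the example above; the fallback level is assumed."""
    if target_object == "electric kettle" and on_floor:
        return 1
    return 3

def level_from_shape(target_object, shape):
    """Determine the information prompt level from the target object's
    shape: a pointed tool maps to the first (highest) level, per the
    example above; the fallback level is assumed."""
    if target_object == "tool" and shape == "pointed":
        return 1
    return 3
```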
In one embodiment, identifying the image data to determine a target object and a corresponding information cue level comprises: identifying the image data to determine a target object; and querying a preset database to determine the information prompt levels corresponding to the target objects, wherein the preset database records the information prompt levels corresponding to different target objects. In the embodiment, the preset database directly records different target objects and corresponding information prompt levels, and exemplarily, the information prompt level corresponding to the target object cutter is the first level. The information prompting grade can be set into three grades according to different hidden danger grades, wherein the first grade is a high hidden danger grade, the second grade is a medium hidden danger grade, and the third grade is a low hidden danger grade.
In one embodiment, before the querying the preset database to determine the information prompt level corresponding to the target object, the method further includes: acquiring self health data input by a user, and generating a preset database containing a target object and a corresponding information prompt level according to the self health data. The preset database may be generated according to different health data input by the user, that is, the target object and the corresponding information prompt level recorded in the preset database are associated with the health data of the user. Illustratively, the health data input to the user is "pollen allergy", and the corresponding target objects recorded in the preset database may be allergens of pollen allergy and related articles, and the prompting level may be classified into three levels according to the information of the influence degree of the allergens on the pollen allergy, the first level of the target objects has a high influence degree on the pollen allergy, the second level of the target objects has a medium influence degree on the pollen allergy, and the third level of the target objects has a low influence degree on the pollen allergy.
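The database generation and query described in the last two paragraphs can be sketched together. The pollen-allergy entries and their levels below are purely illustrative stand-ins for the allergen list the patent leaves unspecified:

```python
def generate_preset_database(health_data):
    """Generate a preset database of target objects and prompt levels
    from the user's own health data. Entries here are hypothetical
    examples of the three influence-degree levels described above."""
    if health_data == "pollen allergy":
        return {"fresh flowers": 1, "grass": 2, "down pillow": 3}
    return {}

def query_prompt_level(database, target_object):
    """Query the preset database for the information prompt level of
    the given target object; None means the object is not recorded."""
    return database.get(target_object)
```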
And the information prompting module 103 is used for generating and prompting prompt information according to the information prompting grade and the target object.
In one embodiment, different information prompt levels may correspond to different information prompt modes; for example, the prompt modes include a vibration prompt, a voice prompt, a display-screen prompt, and the like, where the vibration is generated by a vibration sensor integrated in a temple of the smart glasses, the voice prompt is played through a speaker, and the display-screen prompt is content shown on the display screen of the smart glasses, the speaker and the display screen both being integrated in the smart glasses. In one embodiment, when the prompt information is displayed through the display screen, the name of the target object and the corresponding information prompt level may be displayed in text form, and they may also be played in the form of a voice broadcast. The different information prompt levels characterize the degree of impact of the target object on the user, which includes both positive effects (e.g., healthy food) and negative effects (e.g., items with potential safety hazards).
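One plausible way to dispatch the prompt modes by level is sketched below. The patent does not fix which modes attach to which level, so the assignment here (all channels for the most urgent level, display only for the lowest) is an assumption:

```python
# Hypothetical level -> prompt-mode assignment; the patent names the
# modes (vibration, voice, display) but not this particular mapping.
PROMPT_MODES = {
    1: ("vibration", "voice", "display"),  # assumed: highest level uses all channels
    2: ("voice", "display"),
    3: ("display",),
}

def generate_prompt(target_object, level):
    """Generate prompt information from the information prompt level and
    the target object: the text carries the object's name and level,
    and the modes say how it is presented to the wearer."""
    text = f"{target_object}: level {level}"
    return {"modes": PROMPT_MODES.get(level, ("display",)), "text": text}
```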
As can be seen from the above, when the user wears the smart glasses, the target object in the current line of sight can be automatically identified and a corresponding information prompt given, so that reasonable information can be provided to assist the user in daily life; the information prompting is efficient and the user's operation is simplified.
In a possible embodiment, the data identification module is specifically configured to:
acquiring preset age data, judging whether the age data meets preset conditions or not, and if so, identifying the image data to determine a target object and a corresponding information prompt level.
In a possible embodiment, the data identification module is specifically configured to:
identifying the image data to obtain a target object and a position of the target object, wherein the position is the position of the target object relative to the ground;
and determining an information prompt level according to the target object and the position.
In a possible embodiment, the data identification module is specifically configured to:
identifying the image data to obtain the shape of the target object;
and determining the corresponding information prompt level according to the shape of the target object.
In a possible embodiment, the data identification module is specifically configured to:
identifying the image data to determine a target object;
and querying a preset database to determine the information prompt levels corresponding to the target objects, wherein the preset database records the information prompt levels corresponding to different target objects.
In one possible embodiment, the data acquisition module is further configured to:
and before querying the preset database to determine the information prompt level corresponding to the target object, acquire self health data input by the user, and generate, according to the self health data, a preset database containing the target object and the corresponding information prompt level.
In a possible embodiment, the information prompting module is specifically configured to:
and generating a name and an associated description of the target object, and displaying the name and the associated description in a display screen of the intelligent glasses.
In this embodiment, on the basis of the foregoing embodiments, a pair of smart glasses is provided. Fig. 8 is a schematic structural diagram of the smart glasses provided in an embodiment of the present application, and Fig. 9 is a schematic physical diagram of the smart glasses. As shown in Fig. 8 and Fig. 9, the smart glasses include: a memory 201, a processor (CPU) 202, a display unit 203, a touch panel 204, a heart rate detection module 205, a distance sensor 206, a camera 207, a bone conduction speaker 208, a microphone 209, and a breathing light 210, which communicate via one or more communication buses or signal lines 211.
It should be understood that the illustrated smart glasses are merely one example, and that the smart glasses may have more or fewer components than shown in the figures, may combine two or more components, or may have a different configuration of components. The various components shown in the figures may be implemented in hardware, software, or a combination of hardware and software, including one or more signal processing and/or application specific integrated circuits.
The following describes in detail the smart glasses for implementing information prompt provided in this embodiment.
A memory 201, the memory 201 being accessible by the CPU 202; the memory 201 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic disk storage devices, flash memory devices, or other non-volatile solid-state storage devices.
The display component 203 can be used for displaying image data and a control interface of an operating system, the display component 203 is embedded in a frame of the intelligent glasses, an internal transmission line 211 is arranged inside the frame, and the internal transmission line 211 is connected with the display component 203.
And a touch panel 204, the touch panel 204 being disposed at an outer side of at least one smart glasses temple for acquiring touch data, the touch panel 204 being connected to the CPU202 through an internal transmission line 211. The touch panel 204 can detect finger sliding and clicking operations of the user, and accordingly transmit the detected data to the processor 202 for processing to generate corresponding control instructions, which may be, for example, a left shift instruction, a right shift instruction, an up shift instruction, a down shift instruction, and the like. Illustratively, the display part 203 may display the virtual image data transmitted by the processor 202, and the virtual image data may be correspondingly changed according to the user operation detected by the touch panel 204, specifically, the virtual image data may be switched to a previous or next virtual image frame when a left shift instruction or a right shift instruction is detected; when the display section 203 displays video play information, the left shift instruction may be to perform playback of the play content, and the right shift instruction may be to perform fast forward of the play content; when the editable text content is displayed on the display part 203, the left shift instruction, the right shift instruction, the upward shift instruction, and the downward shift instruction may be displacement operations on a cursor, that is, the position of the cursor may be moved according to a touch operation of a user on the touch pad; when the content displayed by the display part 203 is a game moving picture, the left shift instruction, the right shift instruction, the upward shift instruction and the downward shift instruction can be used for controlling an object in a game, for example, in an airplane game, the flying direction of an airplane can be controlled by the left shift instruction, the right shift instruction, the upward shift instruction and the downward shift 
instruction respectively; when the display part 203 can display video pictures of different channels, the left shift instruction, the right shift instruction, the up shift instruction, and the down shift instruction can perform switching of different channels, wherein the up shift instruction and the down shift instruction can be switching to a preset channel (such as a common channel used by a user); when the display section 203 displays a still picture, the left shift instruction, the right shift instruction, the up shift instruction, and the down shift instruction may perform switching between different pictures, where the left shift instruction may be switching to a previous picture, the right shift instruction may be switching to a next picture, the up shift instruction may be switching to a previous set, and the down shift instruction may be switching to a next set. The touch panel 204 can also be used to control display switches of the display section 203, for example, when the touch area of the touch panel 204 is pressed for a long time, the display section 203 is powered on to display an image interface, when the touch area of the touch panel 204 is pressed for a long time again, the display section 203 is powered off, and when the display section 203 is powered on, the brightness or resolution of an image displayed in the display section 203 can be adjusted by performing a slide-up and slide-down operation on the touch panel 204.
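The per-context behavior of the shift instructions enumerated above can be sketched as a dispatch table. Only the context/action names taken from the examples are used; the table structure itself is an illustrative assumption:

```python
# Sketch of mapping touch-panel shift instructions to actions per display
# context, following the video, channel, and picture examples above.
ACTIONS = {
    "video": {"left": "playback", "right": "fast-forward"},
    "channel": {"left": "previous channel", "right": "next channel",
                "up": "preset channel", "down": "preset channel"},
    "picture": {"left": "previous picture", "right": "next picture",
                "up": "previous set", "down": "next set"},
}

def handle_shift(context, direction):
    """Return the action for a shift instruction in the given display
    context, or None when the instruction has no effect there."""
    return ACTIONS.get(context, {}).get(direction)
```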
A heart rate detection module 205 is used to measure the user's heart rate data, the heart rate being the number of heartbeats per minute; the heart rate detection module 205 is disposed on the inner side of a temple. Specifically, the heart rate detection module 205 may obtain human electrocardiographic data using dry electrodes in an electric-pulse measurement manner and determine the heart rate from the amplitude peaks in the electrocardiographic data; the heart rate detection module 205 may also measure the heart rate photoelectrically using a light-emitting and light-receiving component, in which case the module is disposed at the bottom of the temple, near the earlobe of the auricle. After collecting the heart rate data, the heart rate detection module 205 sends it to the processor 202 for data processing to obtain the wearer's current heart rate value. In an embodiment, after determining the user's heart rate value, the processor 202 may display it in real time on the display component 203; optionally, the processor 202 may trigger an alarm when the heart rate value is low (for example, below 50) or high (for example, above 100), and simultaneously send the heart rate value and/or the generated alarm information to a server through the communication module.
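The alarm thresholds given as examples above (below 50 or above 100 beats per minute) can be sketched as a simple check; the function and return values are illustrative, not from the patent:

```python
def check_heart_rate(bpm, low=50, high=100):
    """Trigger an alarm when the measured heart rate is below 50 or
    above 100 bpm, per the example thresholds above; otherwise no alarm."""
    if bpm < low:
        return "alarm: low heart rate"
    if bpm > high:
        return "alarm: high heart rate"
    return None
```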
A distance sensor 206 may be disposed on the frame and is used to sense the distance from the human face to the frame; the distance sensor 206 may be implemented using an infrared sensing principle. Specifically, the distance sensor 206 transmits the acquired distance data to the processor 202, and the processor 202 controls the brightness of the display section 203 according to the distance data. Illustratively, the processor 202 controls the display section 203 to be in an on state when the distance sensor 206 detects a distance of less than 5 cm, and controls the display section 203 to be in an off state when no approaching object is detected.
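The 5 cm on/off behavior above can be sketched as follows; treating "no approaching object" as a distance at or beyond the threshold is an assumption made for the sketch:

```python
def display_state(distance_cm, threshold_cm=5):
    """Return the display power state from the sensed face-to-frame
    distance: on below the 5 cm threshold mentioned above, otherwise
    off (assumed to cover the no-object-approaching case)."""
    return "on" if distance_cm < threshold_cm else "off"
```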
A breathing light 210 may be disposed at the edge of the frame; when the display section 203 turns off its screen, the breathing light 210 may be lit with a gradually brightening and dimming effect under the control of the processor 202.
The camera 207 may be a front camera module disposed at the upper frame of the frame for collecting image data in front of the user, a rear camera module for collecting eyeball information of the user, or a combination thereof. Specifically, when the camera 207 collects a front image, the collected image is sent to the processor 202 for recognition and processing, and a corresponding trigger event is triggered according to a recognition result. Illustratively, when a user wears the wearable device at home, by identifying the collected front image, if a furniture item is identified, correspondingly inquiring whether a corresponding control event exists, if so, correspondingly displaying a control interface corresponding to the control event in the display part 203, and the user can control the corresponding furniture item through the touch panel 204, wherein the furniture item and the smart glasses are in network connection through bluetooth or wireless ad hoc network; when a user wears the wearable device outdoors, a target recognition mode can be correspondingly started, the target recognition mode can be used for recognizing specific people, the camera 207 sends collected images to the processor 202 for face recognition processing, if preset faces are recognized, voice broadcasting can be correspondingly conducted through a loudspeaker integrated with the intelligent glasses, the target recognition mode can also be used for recognizing different plants, for example, the processor 202 records current images collected by the camera 207 according to touch operation of the touch panel 204 and sends the current images to the server through the communication module for recognition, the server recognizes the plants in the collected images and feeds back related plant names to the intelligent glasses, and feedback data are displayed in the display part 203. 
The camera 207 may also be configured to capture an image of an eye of a user, such as an eyeball, and generate different control instructions by recognizing rotation of the eyeball, for example, the eyeball rotates upward to generate an upward movement control instruction, the eyeball rotates downward to generate a downward movement control instruction, the eyeball rotates leftward to generate a left movement control instruction, and the eyeball rotates rightward to generate a right movement control instruction, where the display unit 203 may display, as appropriate, virtual image data transmitted by the processor 202, where the virtual image data may be changed according to a control instruction generated by a change in movement of the eyeball of the user detected by the camera 207, specifically, a frame switching may be performed, and when a left movement control instruction or a right movement control instruction is detected, a previous or next virtual image frame may be correspondingly switched; when the display part 203 displays video playing information, the left control instruction can be to play back the played content, and the right control instruction can be to fast forward the played content; when the editable text content is displayed on the display part 203, the left movement control instruction, the right movement control instruction, the upward movement control instruction and the downward movement control instruction may be displacement operations of a cursor, that is, the position of the cursor may be moved according to a touch operation of a user on the touch pad; when the content displayed by the display part 203 is a game animation picture, the left movement control command, the right movement control command, the upward movement control command and the downward movement control command can control an object in a game, for example, in an airplane game, the flying direction of an airplane can be controlled by the left movement control command, 
the right movement control command, the upward movement control command and the downward movement control command respectively; when the display part 203 can display video pictures of different channels, the left shift control instruction, the right shift control instruction, the upward shift control instruction and the downward shift control instruction can switch different channels, wherein the upward shift control instruction and the downward shift control instruction can be switching to a preset channel (such as a common channel used by a user); when the display section 203 displays a still picture, the left shift control instruction, the right shift control instruction, the up shift control instruction, and the down shift control instruction may switch between different pictures, where the left shift control instruction may be to a previous picture, the right shift control instruction may be to a next picture, the up shift control instruction may be to a previous picture set, and the down shift control instruction may be to a next picture set.
A bone conduction speaker 208 is provided on the inner wall side of at least one temple and converts the audio signal received from the processor 202 into a vibration signal. The bone conduction speaker 208 transmits sound to the inner ear through the skull: it converts the electrical audio signal into vibrations, which travel through the skull to the cochlea and are then perceived by the auditory nerve. Using the bone conduction speaker 208 as the sound-producing device reduces the thickness and weight of the hardware structure, produces no electromagnetic radiation, and offers the advantages of noise resistance, water resistance, and leaving the ears free.
A microphone 209 may be disposed on the lower frame of the frame for capturing external (user, ambient) sounds and transmitting them to the processor 202 for processing. Illustratively, the microphone 209 collects the sound emitted by the user and performs voiceprint recognition by the processor 202, and if the sound is recognized as a voiceprint for authenticating the user, the subsequent voice control can be correspondingly received, specifically, the user can emit voice, the microphone 209 sends the collected voice to the processor 202 for recognition so as to generate a corresponding control instruction according to the recognition result, such as "power on", "power off", "display brightness increase", "display brightness decrease", and the processor 202 subsequently executes a corresponding control process according to the generated control instruction.
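The voice-control flow above — voiceprint authentication first, then mapping recognized phrases to control instructions — can be sketched as below. The instruction identifiers are hypothetical; the phrases are the ones quoted in the text:

```python
# Phrases quoted in the text mapped to hypothetical control instructions.
COMMANDS = {
    "power on": "POWER_ON",
    "power off": "POWER_OFF",
    "display brightness increase": "BRIGHTNESS_UP",
    "display brightness decrease": "BRIGHTNESS_DOWN",
}

def handle_voice(voiceprint_ok, phrase):
    """Generate a control instruction from a recognized voice phrase,
    but only after the speaker's voiceprint has been authenticated,
    per the flow described above."""
    if not voiceprint_ok:
        return None  # voices failing voiceprint authentication are ignored
    return COMMANDS.get(phrase)
```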
The information prompting device for the intelligent glasses and the intelligent glasses provided in the above embodiments can execute the information prompting method for the intelligent glasses provided in any embodiment of the present invention, and have corresponding functional modules and beneficial effects for executing the method. For technical details that are not described in detail in the above embodiments, reference may be made to the information prompting method of the smart glasses provided in any embodiment of the present invention.
Embodiments of the present application also provide a storage medium containing smart glasses executable instructions, which when executed by a smart glasses processor, are configured to perform an information prompting method, including:
when an information prompt starting instruction is detected, acquiring image data acquired by a camera in real time, wherein the camera is integrated on intelligent glasses, the intelligent glasses are worn on the head of a user, and the camera is used for acquiring images within the sight range of the user;
identifying the image data to determine a target object and a corresponding information prompt level;
and generating prompt information according to the information prompt level and the target object and prompting.
In one possible embodiment, the identifying the image data to determine the target object and the corresponding information cue level comprises:
acquiring preset age data, judging whether the age data meets preset conditions or not, and if so, identifying the image data to determine a target object and a corresponding information prompt level.
In a possible embodiment, the identifying the image data to obtain the information cue level of the target object includes:
identifying the image data to obtain a target object and a position of the target object, wherein the position is the position of the target object relative to the ground;
and determining an information prompt level according to the target object and the position.
In a possible embodiment, the identifying the image data to obtain the information cue level of the target object includes:
identifying the image data to obtain the shape of the target object;
and determining the corresponding information prompt level according to the shape of the target object.
In one possible embodiment, the identifying the image data to determine the target object and the corresponding information cue level comprises:
identifying the image data to determine a target object;
and querying a preset database to determine the information prompt levels corresponding to the target objects, wherein the preset database records the information prompt levels corresponding to different target objects.
In a possible embodiment, before the querying the preset database to determine the information prompt level corresponding to the target object, the method further includes:
acquiring self health data input by a user, and generating a preset database containing a target object and a corresponding information prompt level according to the self health data.
In a possible embodiment, the generating and prompting the prompt information includes:
generating a name and an associated description of the target object, and displaying the name and the associated description in a display screen of the smart glasses.
Storage medium — any of various types of memory devices or storage devices. The term "storage medium" is intended to include: mounting media such as CD-ROM, floppy disk, or tape devices; computer system memory or random access memory such as DRAM, DDR RAM, SRAM, EDO RAM, Rambus RAM, etc.; non-volatile memory such as flash memory or magnetic media (e.g., hard disk or optical storage); registers or other similar types of memory elements, etc. The storage medium may also include other types of memory or combinations thereof. In addition, the storage medium may be located in a first computer system in which the program is executed, or may be located in a different second computer system connected to the first computer system through a network (such as the Internet). The second computer system may provide program instructions to the first computer for execution. The term "storage medium" may include two or more storage media that may reside in different locations, such as in different computer systems connected through a network. The storage medium may store program instructions (e.g., embodied as a computer program) that are executable by one or more processors.
Of course, the storage medium provided in the embodiments of the present application contains computer-executable instructions, and the computer-executable instructions are not limited to the operations of the information prompting method described above, and may also execute related operations in the information prompting method provided in any embodiments of the present application.
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present application and the technical principles employed. It will be understood by those skilled in the art that the present application is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the application. Therefore, although the present application has been described in more detail with reference to the above embodiments, the present application is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present application, and the scope of the present application is determined by the scope of the appended claims.

Claims (10)

1. An information prompting method, applied to smart glasses, comprising the following steps:
when an information prompt start instruction is detected, acquiring, in real time, image data captured by a camera of the scene observed by a user, wherein the camera is integrated in the smart glasses, the smart glasses are worn on the user's head, and the camera captures images within the user's field of view;
identifying the image data to determine target objects, and filtering out, during identification, target objects in the image data that have no safety impact on the user and/or persons associated with the user; determining an information prompt level for each remaining target object, wherein the information prompt level represents the degree of hazard or healthiness of the target object;
and generating prompt information according to the information prompt levels and the target objects and presenting it through a display picture, wherein the display picture is the content displayed on a display screen of the smart glasses and comprises the name of each target object and its corresponding information prompt level.
2. The method of claim 1, wherein identifying the image data to determine the target object and the corresponding information prompt level comprises:
acquiring preset age data, determining whether the age data meets a preset condition, and if so, identifying the image data to determine the target object and the corresponding information prompt level.
3. The method of claim 2, wherein identifying the image data to obtain the information prompt level of the target object comprises:
identifying the image data to obtain a target object and the position of the target object, wherein the position is the position of the target object relative to the ground;
and determining the information prompt level according to the target object and the position.
4. The method of claim 2, wherein identifying the image data to obtain the information prompt level of the target object comprises:
identifying the image data to obtain the shape of the target object;
and determining the corresponding information prompt level according to the shape of the target object.
5. The method of claim 1, wherein identifying the image data to determine the target object and the corresponding information prompt level comprises:
identifying the image data to determine a target object;
and querying a preset database to determine the information prompt level corresponding to each target object, wherein the preset database records the information prompt levels corresponding to different target objects.
6. The method according to claim 5, further comprising, before querying the preset database to determine the information prompt level corresponding to the target object:
acquiring the user's own health data as input by the user, and generating, according to that health data, a preset database containing target objects and their corresponding information prompt levels.
7. The method according to any one of claims 1-6, wherein generating and presenting the prompt information comprises:
generating a name and an associated description of the target object, and displaying the name and the associated description on the display screen of the smart glasses.
8. An information prompting apparatus, applied to smart glasses, comprising:
a data acquisition module configured to, when an information prompt start instruction is detected, acquire in real time image data captured by a camera of the scene observed by a user, wherein the camera is integrated in the smart glasses, the smart glasses are worn on the user's head, and the camera captures images within the user's field of view;
a data identification module configured to identify the image data to determine target objects, to filter out, during identification, target objects in the image data that have no safety impact on the user and/or persons associated with the user, and to determine an information prompt level for each remaining target object, wherein the information prompt level represents the degree of hazard or healthiness of the target object;
and an information prompting module configured to generate prompt information according to the information prompt levels and the target objects and to present it through a display picture, wherein the display picture is the content displayed on a display screen of the smart glasses and comprises the name of each target object and its corresponding information prompt level.
9. Smart glasses, comprising: a processor, a memory, and a computer program stored in the memory and executable on the processor, wherein the processor implements the information prompting method according to any one of claims 1-7 when executing the computer program.
10. A storage medium containing instructions executable by smart glasses which, when executed by a processor of the smart glasses, perform the information prompting method according to any one of claims 1-7.
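The pipeline recited in claim 1 (capture, identify, filter harmless objects, assign a prompt level, display) can be sketched as follows. This is a minimal illustrative sketch, not the patented implementation: the object detector is stubbed out, and the level table, object names, and all function names (`identify`, `prompt_levels`, `render`) are hypothetical.

```python
# Hypothetical sketch of the claimed flow: identify target objects in a frame,
# filter out objects with no safety impact, look up each remaining object's
# information prompt level, and build the display-picture content.

# Hypothetical preset database mapping object names to prompt levels
# (higher level = greater hazard; 0 = no safety impact), as in claim 5.
PRESET_LEVELS = {"knife": 3, "stove": 2, "apple": 0, "chair": 0}

def identify(image_data):
    """Stand-in for an object detector; a real system would run a vision
    model on the camera frame. Here the 'frame' is already a list of names."""
    return image_data

def prompt_levels(image_data, levels=PRESET_LEVELS):
    """Determine target objects and their prompt levels, filtering out
    objects that have no safety impact (level 0 or unknown)."""
    objects = identify(image_data)
    return {o: levels[o] for o in objects if levels.get(o, 0) > 0}

def render(level_map):
    """Display-picture content: each target object's name and prompt level."""
    return [f"{name}: level {lvl}" for name, lvl in sorted(level_map.items())]

frame = ["apple", "knife", "stove", "chair"]
print(render(prompt_levels(frame)))  # -> ['knife: level 3', 'stove: level 2']
```

In this sketch the filtering step of claim 1 reduces to dropping level-0 entries, and the "preset database" of claim 5 is a plain dictionary; claim 6's personalization would amount to rebuilding that dictionary from the user's health data.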
CN201811001748.8A 2018-08-30 2018-08-30 Information prompting method and device, intelligent glasses and storage medium Active CN109255314B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811001748.8A CN109255314B (en) 2018-08-30 2018-08-30 Information prompting method and device, intelligent glasses and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811001748.8A CN109255314B (en) 2018-08-30 2018-08-30 Information prompting method and device, intelligent glasses and storage medium

Publications (2)

Publication Number Publication Date
CN109255314A CN109255314A (en) 2019-01-22
CN109255314B true CN109255314B (en) 2021-07-02

Family

ID=65050344

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811001748.8A Active CN109255314B (en) 2018-08-30 2018-08-30 Information prompting method and device, intelligent glasses and storage medium

Country Status (1)

Country Link
CN (1) CN109255314B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112445321A (en) * 2019-08-28 2021-03-05 Cainiao Smart Logistics Holding Ltd Article processing method and device and electronic equipment
US10838056B1 (en) 2019-12-25 2020-11-17 NextVPU (Shanghai) Co., Ltd. Detection of target
CN112883883A (en) * 2021-02-26 2021-06-01 Beijing Fengchao Shiji Technology Co., Ltd. Information presentation method, information presentation device, information presentation medium, glasses, and program product
CN115472039B (en) * 2021-06-10 2024-03-01 Shanghai Pateo Yuezhen Network Technology Service Co., Ltd. Information processing method and related product

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA2453816C (en) * 2003-12-22 2010-05-18 Shih-Ming Hwang Intelligent microwave detecting system
CN102303605A (en) * 2011-06-30 2012-01-04 China Automotive Technology and Research Center Multi-sensor information fusion-based collision and departure pre-warning device and method
US9870716B1 (en) * 2013-01-26 2018-01-16 Ip Holdings, Inc. Smart glasses and smart watches for real time connectivity and health
CN103646587B (en) * 2013-12-05 2017-02-22 Beijing BOE Optoelectronics Technology Co., Ltd. deaf-mute people
CN104881949B (en) * 2015-04-24 2018-05-29 Guangdong Oppo Mobile Telecommunications Corp., Ltd. A kind of road condition monitoring method and device based on mobile terminal
CN105825568A (en) * 2016-03-16 2016-08-03 Guangdong Vtron Technology Co., Ltd. Portable intelligent interactive equipment
CN106231419A (en) * 2016-08-30 2016-12-14 Beijing Xiaomi Mobile Software Co., Ltd. Operation performs method and device
CN106372610B (en) * 2016-09-05 2020-02-14 Shenzhen Lianti Information Accessibility Co., Ltd. Intelligent glasses-based foreground information prompting method and intelligent glasses
CN107480851A (en) * 2017-06-29 2017-12-15 Beijing Xiaodouer Robot Technology Co., Ltd. A kind of intelligent health management system based on endowment robot
CN107464428A (en) * 2017-09-22 2017-12-12 Chang'an University A kind of tunnel traffic anomalous identification warning device
CN108064447A (en) * 2017-11-29 2018-05-22 Shenzhen Qianhai CloudMinds Cloud Intelligence Technology Co., Ltd. Method for displaying image, intelligent glasses and storage medium

Also Published As

Publication number Publication date
CN109255314A (en) 2019-01-22

Similar Documents

Publication Publication Date Title
CN109255314B (en) Information prompting method and device, intelligent glasses and storage medium
US10582328B2 (en) Audio response based on user worn microphones to direct or adapt program responses system and method
JP7016263B2 (en) Systems and methods that enable communication through eye feedback
US20180123813A1 (en) Augmented Reality Conferencing System and Method
US20180124497A1 (en) Augmented Reality Sharing for Wearable Devices
US20130194177A1 (en) Presentation control device and presentation control method
US20170192620A1 (en) Head-mounted display device and control method therefor
CN109259724B (en) Eye monitoring method and device, storage medium and wearable device
TW201535155A (en) Remote device control via gaze detection
CN109061903B (en) Data display method and device, intelligent glasses and storage medium
CN109145847B (en) Identification method and device, wearable device and storage medium
CN109224432B (en) Entertainment application control method and device, storage medium and wearable device
CN109040462A (en) Stroke reminding method, apparatus, storage medium and wearable device
US11016558B2 (en) Information processing apparatus, and information processing method to determine a user intention
CN109241900B (en) Wearable device control method and device, storage medium and wearable device
CN110069652A (en) Reminding method, device, storage medium and wearable device
CN109068126B (en) Video playing method and device, storage medium and wearable device
US20210081047A1 (en) Head-Mounted Display With Haptic Output
JPWO2018163637A1 (en) Information processing apparatus, information processing method, and recording medium
JPWO2017098780A1 (en) Information processing apparatus, information processing method, and program
CN109067627A (en) Appliances equipment control method, device, wearable device and storage medium
CN109117819B (en) Target object identification method and device, storage medium and wearable device
US11544968B2 (en) Information processing system, information processingmethod, and recording medium
US20240095948A1 (en) Self-tracked controller
CN109257490A (en) Audio-frequency processing method, device, wearable device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant