CN111475733B - Information prompting method and intelligent glasses - Google Patents
Information prompting method and intelligent glasses
- Publication number
- CN111475733B (Application CN202010291839.0A)
- Authority
- CN
- China
- Prior art keywords
- information
- target
- glasses
- intelligent glasses
- prompt information
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/90—Details of database functions independent of the retrieved data types
- G06F16/95—Retrieval from the web
- G06F16/953—Querying, e.g. by the use of web search engines
- G06F16/9535—Search customisation based on user profiles and personalisation
-
- G—PHYSICS
- G02—OPTICS
- G02C—SPECTACLES; SUNGLASSES OR GOGGLES INSOFAR AS THEY HAVE THE SAME FEATURES AS SPECTACLES; CONTACT LENSES
- G02C11/00—Non-optical adjuncts; Attachment thereof
- G02C11/10—Electronic devices other than hearing aids
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Databases & Information Systems (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Health & Medical Sciences (AREA)
- General Engineering & Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Acoustics & Sound (AREA)
- General Health & Medical Sciences (AREA)
- Otolaryngology (AREA)
- Ophthalmology & Optometry (AREA)
- Optics & Photonics (AREA)
- Eyeglasses (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
The embodiment of the invention provides an information prompting method and smart glasses. The method comprises the following steps: receiving a first input to the smart glasses; in response to the first input, outputting target prompt information when the target position of the first input meets a first preset condition; where the target prompt information comprises at least one of: placement posture information of the smart glasses, and commodity information for cleaning the lenses of the smart glasses. Because the smart glasses can output their own placement posture information and/or commodity information for cleaning their lenses, the method solves the problem in the related art that relying on the user's subjective maintenance of smart glasses leads to a poor maintenance effect.
Description
Technical Field
The invention relates to the technical field of data processing, and in particular to an information prompting method and smart glasses.
Background
Smart glasses are widely used by users as a type of wearable device.
At present, maintenance of smart glasses relies mainly on the user's subjective awareness, and relying solely on the user to maintain the glasses makes it difficult to achieve a good maintenance effect.
Disclosure of Invention
The embodiment of the invention provides an information prompting method and smart glasses, to solve the problem in the related art that relying on the user's subjective maintenance of smart glasses leads to a poor maintenance effect.
In order to solve the above technical problem, the invention is realized as follows:
in a first aspect, an embodiment of the present invention provides an information prompting method, which is applied to smart glasses, where the method includes:
receiving a first input to the smart glasses;
in response to the first input, outputting target prompt information when the target position of the first input meets a first preset condition;
where the target prompt information comprises at least one of: placement posture information of the smart glasses, and commodity information for cleaning the lenses of the smart glasses.
In a second aspect, an embodiment of the present invention further provides smart glasses, including a receiving module and an output module:
the receiving module is configured to receive a first input to the smart glasses;
the output module is configured to, in response to the first input, output target prompt information when the target position of the first input meets a first preset condition;
where the target prompt information comprises at least one of: placement posture information of the smart glasses, and commodity information for cleaning the lenses of the smart glasses.
In a third aspect, an embodiment of the present invention further provides an electronic device, including: a memory, a processor, and a computer program stored in the memory and executable on the processor, where the computer program, when executed by the processor, implements the steps of the information prompting method.
In a fourth aspect, an embodiment of the present invention further provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the information prompting method.
In the embodiment of the invention, a first input to a target position on the smart glasses is received, and when the target position meets a first preset condition, placement posture information matching the target position and/or commodity information for cleaning the lenses of the smart glasses is output. In other words, when the target position indicates that the smart glasses are in an environment in which the lenses are easily stained, the user is shown in time how to place the glasses and/or is recommended commodities for cleaning the lenses. The method thus automatically prompts placement posture and/or cleaning-commodity maintenance information, which solves the problem in the related art that a user who maintains the glasses subjectively has difficulty noticing in time that an improper placement posture has stained the lenses, and thereby improves the maintenance effect on the smart glasses.
Drawings
To describe the technical solutions of the embodiments of the present invention more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and a person skilled in the art may obtain other drawings from them without inventive effort.
FIG. 1 is a flow chart of an information prompting method according to one embodiment of the present invention;
FIG. 2 is a block diagram of smart glasses according to one embodiment of the present invention;
FIG. 3 is a schematic diagram of a hardware structure of an electronic device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are some rather than all of the embodiments of the present invention. All other embodiments obtained by a person skilled in the art based on the embodiments of the present invention without inventive effort shall fall within the protection scope of the present invention.
Referring to fig. 1, a flowchart of an information prompting method according to an embodiment of the present invention is shown. The method is applied to smart glasses and may specifically include the following steps:
Step 101, receiving a first input to the smart glasses;
the first input may be any input form such as a pressing input and a sliding input, and the target position of the triggering of the first input may be any position of the smart glasses.
Step 102, responding to the first input, and outputting target prompt information under the condition that the target position of the first input meets a first preset condition;
the target prompt information comprises placement posture information of the intelligent glasses, and at least one of commodity information of the lenses of the intelligent glasses is cleaned.
The expression form of the placement gesture information can be at least one of an image, a text and a voice. And the expression form of commodity information of the cleaning intelligent glasses lenses can be purchasing links and the like.
In addition, the placement posture information in the target prompt information is placement posture information matching the target position.
The target position of the first input may be the trigger position of the first input on the smart glasses. The target position meeting the first preset condition may then be understood as the target position having been touched by another person or an object, so that the smart glasses need to be cleaned and/or their placement posture needs to be adjusted.
As described above, the placement posture information in the target prompt information may be expressed as at least one of an image, text, and voice. If its expression form includes text and/or pictures, the text and/or pictures may be displayed on the screen of the smart glasses when the target prompt information is output; if it includes voice, the voice may be output through an audio output interface (e.g., a speaker or earphone) of the smart glasses.
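The screen-versus-audio routing just described can be sketched as follows. This is a minimal illustration rather than the patent's implementation: the form names ("text", "image", "voice") and the list-based display/speaker stand-ins are assumptions.

```python
# Hypothetical sketch of routing prompt output by expression form.
# The form names and the list-based display/speaker stand-ins are
# illustrative assumptions, not the patent's actual interfaces.

def output_prompt(prompt_parts, display, speaker):
    """Send each (form, payload) part to the matching output channel."""
    for form, payload in prompt_parts:
        if form in ("text", "image"):
            display.append(payload)   # shown on the glasses' screen
        elif form == "voice":
            speaker.append(payload)   # played via speaker or earphone
        else:
            raise ValueError(f"unknown expression form: {form}")

display, speaker = [], []
output_prompt([("text", "Place the frame open"), ("voice", "open-frame.wav")],
              display, speaker)
```

A prompt that combines text and voice thus reaches both channels in a single call.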
In the embodiment of the invention, a first input to a target position on the smart glasses is received, and when the target position meets a first preset condition, placement posture information matching the target position and/or commodity information for cleaning the lenses of the smart glasses is output. In other words, when the target position indicates that the smart glasses are in an environment in which the lenses are easily stained, the user is shown in time how to place the glasses and/or is recommended commodities for cleaning the lenses. The method thus automatically prompts placement posture and/or cleaning-commodity maintenance information, which solves the problem in the related art that a user who maintains the glasses subjectively has difficulty noticing in time that an improper placement posture has stained the lenses, and thereby improves the maintenance effect on the smart glasses.
Optionally, when step 102 is executed, in response to the first input, the target prompt information may be output if target prompt information matching the target position exists in a preset correspondence between position information and prompt information.
Specifically, the smart glasses side may preset a first correspondence between position information of the smart glasses and placement posture information of the smart glasses, and/or a second correspondence between position information of the smart glasses and commodity information for cleaning the lenses of the smart glasses;
the two correspondences may contain both identical and different position information.
In this step, the target position is queried in the local first correspondence and/or second correspondence; if it is found, the placement posture information matching the target position and/or the commodity information for cleaning the smart glasses can be output.
In the embodiment of the invention, when the first input to the smart glasses is received, the existence of target prompt information matching the target position in the preset correspondence between position information and prompt information indicates that the target position meets the first preset condition, so the matching target prompt information in the correspondence can be output, improving the automatic prompting of maintenance information for the smart glasses.
Optionally, the position information in the preset correspondence may include at least one of a lens surface and a coordinate range, and the target position may include at least one of a target surface and a target coordinate of the lens;
Accordingly, when outputting the target prompt information, if the preset correspondence between position information and prompt information contains position information whose lens surface is the same as the target surface, or whose coordinate range includes the target coordinate, the target prompt information matching that position information is output.
That is, if a lens surface identical to the target surface exists in the preset correspondence, the target prompt information matching the target surface is output; and if a coordinate range including the target coordinate exists in the preset correspondence, the target prompt information matching that coordinate range is output.
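The two matching rules above can be sketched as a small lookup. All table contents, prompts, and the rectangle representation of a coordinate range are illustrative assumptions:

```python
# Illustrative sketch of the two matching rules: a correspondence keyed
# by lens surface, and one keyed by coordinate range. Contents assumed.

SURFACE_PROMPTS = {
    "outer": "Place the glasses with the frame open.",
    "inner": "Put the glasses in the glasses case.",
}
# (x_min, y_min, x_max, y_max) -> prompt
RANGE_PROMPTS = {
    (0, 0, 10, 10): "Wipe the lens center.",
    (10, 0, 20, 10): "Wipe the lens edge.",
}

def find_prompt(target_surface=None, target_coord=None):
    """Return the prompt matching the target surface or coordinate,
    or None when the first preset condition is not met."""
    if target_surface in SURFACE_PROMPTS:
        return SURFACE_PROMPTS[target_surface]
    if target_coord is not None:
        x, y = target_coord
        for (x0, y0, x1, y1), prompt in RANGE_PROMPTS.items():
            if x0 <= x < x1 and y0 <= y < y1:
                return prompt
    return None
```

A `None` result corresponds to the fall-through case handled later, where the server is queried instead.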
For example, the lens of the smart glasses may include a first surface (e.g., an inner surface) and a second surface (e.g., an outer surface). If the target surface corresponding to the target position is the first surface, first prompt information is output; if the target surface corresponding to the target position is the second surface, second prompt information is output;
wherein the first prompt information and the second prompt information are different.
In this way, different prompt information can be output depending on which surface of the smart glasses lens is touched.
In addition, if the target position corresponds to a first coordinate range of the first surface in the preset correspondence, third prompt information is output; and if the target position corresponds to a second coordinate range of the first surface, fourth prompt information is output.
That is, based on the lens surface and the coordinate range, different prompt information can be output for different area positions on the same surface of the smart glasses.
The foregoing technical solution is described in detail below with a specific example for easy understanding.
For example, the first input is a pressing input, the pressed lens may be any lens of the smart glasses, and the target position may be any position of the lens, for example, any position of an outer surface or an inner surface of the lens or a lens frame (i.e., a side wall).
A pressing input here means an input whose pressing pressure is greater than a preset pressure threshold; it may be a short press or a long press.
A pressure sensor may be built into the lens to detect pressing inputs to the lens.
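A minimal sketch of this press-detection step, assuming a simple threshold check on a scalar sensor reading (the threshold value and units are invented for illustration):

```python
# Minimal sketch: a reading from an assumed in-lens pressure sensor
# counts as a pressing input only when it exceeds a preset threshold,
# in which case the touched coordinate is recorded.

PRESSURE_THRESHOLD = 0.5  # assumed value; a real device would calibrate this

def detect_press(pressure, coord):
    """Return the pressed coordinate if the reading qualifies as a
    pressing input, otherwise None."""
    return coord if pressure > PRESSURE_THRESHOLD else None
```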
When a pressing input to the lens is received, the target position corresponding to the pressing input can be identified, and the target position can then be looked up in, for example, the pre-stored first correspondence between position information and placement posture information, so that the placement posture information matching the target position is conveniently obtained from the first correspondence.
Here, the position information in the first correspondence may include: position coordinates corresponding to the inner surfaces of the lenses of the smart glasses, position coordinates corresponding to the outer surfaces of the lenses, and position coordinates corresponding to the side walls of the smart glasses.
The position coordinate may be a coordinate point or a coordinate range.
In other words, the method of the embodiment of the present invention may prestore the correspondence between each position coordinate of the lens in the smart glasses and placement posture information, where the correspondence may be one-to-one and/or many-to-one (i.e., multiple position coordinates corresponding to the same placement posture information).
In one example, when a user wipes the smart glasses with a glasses cloth or places them in an environment where they contact an external object, the lens surface (outer and inner surfaces can be distinguished) detects the pressure applied by the external environment to the lenses. Whether this pressure exceeds a threshold can then be determined; if so, the smart glasses are judged to be in an environment where they are easily stained. At the same time, the smart glasses record the coordinates of the position where the pressure is applied (i.e., where the pressing input is received) and retrieve these coordinates against the coordinate ranges of the pre-stored first correspondence; if the coordinates hit a certain coordinate range in the correspondence, the placement posture information matching that range is obtained.
For example, the pre-stored first correspondence may be a table preset by the smart glasses manufacturer, in which each coordinate range corresponds to one correct placement posture of the glasses.
In addition, the glasses placement posture in the first correspondence may be expressed as at least one of an image, text, and voice.
In an alternative embodiment, when the smart glasses detect that a certain coordinate range of the outer surface of the lens (the side away from the eyes) is pressed, and that coordinate range corresponds to placement posture information in the first correspondence, the user is prompted to place the glasses with the frame open. Here, "open frame" may be expressed as voice, text, or a picture (a diagram of the glasses placed with the frame open).
The "open frame" voice, text, or picture is understood to be the glasses placement posture corresponding to that coordinate range in the correspondence. That is, the placement posture information in the first correspondence may express the correct glasses posture.
Alternatively, the placement posture information in the first correspondence may express a wrong glasses placement posture to be avoided; in the example above, the user could instead be prompted that "the lens should not be in contact with objects".
In another alternative embodiment, when the smart glasses detect that a certain coordinate range of the inner surface of the lens (the side close to the eyes) is pressed, the user is prompted to "put the smart glasses in the glasses case" or to "fold the frame before placing the glasses".
Optionally, since the position information in the preset first correspondence may be position coordinates of the outer surface, inner surface, and side wall of the lens, the coordinate ranges of the easily stained parts of these three regions may be determined in a single shared coordinate system, so that the coordinate ranges of the easily stained parts of the inner surface, outer surface, and side wall in the first correspondence do not overlap. Alternatively, each region of the smart glasses (the inner surface, outer surface, and side wall of the lens) may be treated as an independent object with its own coordinate system, used to determine the coordinate ranges of that region's easily stained parts. Because the coordinate systems of the inner surface, outer surface, and side wall then differ, the preset first correspondence may comprise a correspondence between the coordinate ranges of the easily stained parts of the outer surface and glasses posture information, and/or a corresponding one for the inner surface, and/or one for the side wall. For example, the pre-stored tables may be three tables, for the lens outer surface, inner surface, and side wall respectively; in this case a coordinate range may repeat across the three tables, but the region to which it belongs (outer surface, inner surface, or side wall) differs.
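The per-region option can be sketched as one table per region, so that the same numeric range may recur across tables without ambiguity. Region names, ranges, and prompts are all assumptions:

```python
# Sketch of the "independent coordinate system per region" option:
# one lookup table per region, keyed by (x_min, y_min, x_max, y_max)
# coordinate ranges. Regions, ranges, and prompts are assumed.

REGION_TABLES = {
    "outer": {(0, 0, 10, 10): "Place the glasses with the frame open."},
    "inner": {(0, 0, 10, 10): "Put the glasses in the glasses case."},
    "sidewall": {(0, 0, 5, 2): "Wipe the temple arm."},
}

def lookup(region, coord):
    """Look the coordinate up only in the table of its own region."""
    x, y = coord
    for (x0, y0, x1, y1), prompt in REGION_TABLES.get(region, {}).items():
        if x0 <= x < x1 and y0 <= y < y1:
            return prompt
    return None
```

The identical range `(0, 0, 10, 10)` yields different prompts for the outer and inner surfaces, which is exactly the disambiguation the per-region tables provide.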
In this way, in the embodiment of the invention, when a pressing input to a target position on the lens of the smart glasses is received, the target region of the lens and the position coordinates of the target position are identified, and the pre-stored correspondence between position coordinates and placement posture information for that region is checked to determine whether matching placement posture information exists. The placement posture information matching the stained target position can thus be obtained more accurately and efficiently, improving the efficiency with which the smart glasses query information on the correct placement posture.
Alternatively, in the foregoing embodiments, the coordinate ranges in the position information in the preset correspondence may include a plurality of coordinate ranges generated by mesh-dividing the lens surface of the smart glasses in advance.
For example, the position information (or position coordinates) of the inner surface of the lens of the smart glasses includes a plurality of first coordinate ranges generated by performing a first mesh division on the inner surface in advance; the position information (or position coordinates) of the outer surface includes a plurality of second coordinate ranges generated by performing a second mesh division on the outer surface in advance.
The first coordinate ranges and the second coordinate ranges may be all of the coordinate ranges generated by the mesh division, or only some of them.
In this way, in the embodiment of the present invention, the lens surface of the smart glasses may be mesh-divided in advance to generate a plurality of coordinate ranges for the lens surface, all of which are coordinate ranges of easily stained parts. In an actual operation scenario, the easily stained area of the glasses is generally a certain area of the lens touched by a finger or another component, so the coordinate ranges formed by mesh division essentially match the size and area of the easily stained positions in practice. The placement posture information and/or commodity information for cleaning the lenses determined from the correspondence for the target position is therefore more accurate.
Alternatively, in one embodiment, the first and second coordinate ranges may be a subset selected from all coordinate ranges generated by the mesh division. Since these ranges are used to generate the correspondence, and the coordinate ranges in the correspondence are the coordinates of the easily stained parts of the smart glasses, only the coordinate ranges of the easily stained parts need to be selected from the divided ranges.
Alternatively, in another embodiment, only the easily stained area of the lens may be mesh-divided during the first and second mesh divisions, so that the generated first and second coordinate ranges are directly the coordinate ranges of the easily stained parts of the inner and outer surfaces, respectively.
When the preset correspondence is generated in advance, the outer-surface area and the inner-surface area (or the areas where the easily stained regions are located) of any one lens of the smart glasses may each be divided into a plurality of grids, with each grid corresponding to a coordinate range, and prompt information, such as posture information for correctly placing the glasses, set for each coordinate range.
In this way, the correspondence between coordinate ranges and prompt information can be formed through mesh division of the lens surface.
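The mesh-division step can be sketched by treating a lens surface as a flat rectangle divided into cells, each cell becoming one coordinate range of the correspondence. The flat-surface simplification, the dimensions, and the per-cell prompt text are assumptions:

```python
# Sketch of generating the correspondence by mesh division: a flat
# width x height surface is split into square cells, each becoming
# one coordinate range (x_min, y_min, x_max, y_max).

def mesh_ranges(width, height, cell):
    """Divide a width x height surface into coordinate ranges of at
    most cell x cell; edge cells are clipped to the surface bounds."""
    return [
        (x, y, min(x + cell, width), min(y + cell, height))
        for x in range(0, width, cell)
        for y in range(0, height, cell)
    ]

# Attach one prompt per cell to form the correspondence.
correspondence = {r: f"Posture hint for cell {r}" for r in mesh_ranges(40, 20, 10)}
```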
In particular, in the above-described embodiment, the correspondence may include a correspondence (for example, one table) of the first coordinate range and the placement posture information, and a correspondence (for example, another table) of the second coordinate range and the placement posture information.
In addition, when the predetermined correspondence relationship is generated, the division rule used in performing the first mesh division and the second mesh division may be the same or different.
In this way, in the embodiment of the invention, the first and second mesh divisions can be performed in advance on the inner and outer surfaces of the lens, generating a plurality of first coordinate ranges for the inner surface and a plurality of second coordinate ranges for the outer surface. Because these ranges are coordinate ranges of easily stained parts generated by mesh division, and the easily stained area of the glasses in practice is generally an area of the lens touched by a finger or another component, the mesh-divided coordinate ranges essentially match the size and area of the easily stained positions in actual application, making the target glasses posture determined from the correspondence more accurate.
Optionally, when step 102 is executed, if no target prompt information matching the target position exists in the preset correspondence between position information and prompt information, target prompt information matching the target position and sent by a server may be received.
That is, if no matching target prompt information is found in the preset correspondence, the target prompt information matching the target position may be obtained from the server; it may then be determined that the target position satisfies the first preset condition, and the target prompt information is output.
For example, the position coordinates of the pressed target position may be queried against the position information (for example, the coordinate ranges) in the pre-stored correspondence. If no coordinate range in the correspondence includes these position coordinates, it can be determined that no matching target prompt information exists in the correspondence, so in this step the smart glasses may acquire the target prompt information matching the target position from the server and output it.
In one example, the smart glasses may send a query request carrying the target position (e.g., the target surface and/or the target coordinates) to a server associated with the smart glasses. The server may determine target prompt information matching the target position through big data computation and analysis and return it to the smart glasses, which then output the target prompt information obtained from the server.
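The local lookup with server fallback described above can be sketched as follows. This is an illustrative assumption about the data layout: the correspondence is modeled as a mapping from (surface, coordinate range) to prompt text, and `query_server` stands in for the request to the associated server.

```python
def find_prompt(surface, x, y, correspondence, query_server):
    """Look up prompt information for a pressed point; fall back to the server.

    correspondence maps (surface, (x0, y0, x1, y1)) -> prompt text.
    query_server is called only when no local coordinate range matches."""
    for (surf, (x0, y0, x1, y1)), prompt in correspondence.items():
        if surf == surface and x0 <= x <= x1 and y0 <= y <= y1:
            return prompt  # found locally, no server round trip needed
    return query_server(surface, (x, y))  # fallback described in the text

local = {("outer", (0.0, 0.0, 10.0, 10.0)): "Place the lenses facing up"}
hit = find_prompt("outer", 5.0, 5.0, local, lambda s, p: "from server")
miss = find_prompt("inner", 5.0, 5.0, local, lambda s, p: "from server")
```

The local hit returns directly; only the miss triggers the server query.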
When the server determines the placement posture information through big data processing and analysis, it may proceed as follows: the server receives and stores correspondences between newly added position information and prompt information reported by each smart glasses side, in addition to the correspondence preset by the smart glasses manufacturer. When the server receives the query request from the smart glasses of this embodiment, it may search the newly added correspondences stored locally at the server side for target prompt information matching the target position and, if found, return it. If not found, the server may recommend target prompt information to the smart glasses according to a second preset condition: the recommended placement posture information may be a general posture that does not soil the lenses of most types of smart glasses, or the lens structure may be determined based on the model (or type) of the smart glasses and a placement posture that does not soil those lenses recommended accordingly; the recommended commodity information may be general commodity information applicable to various types of smart glasses.
In this way, in the embodiment of the invention, when target prompt information matching the target position is found in the locally preset correspondence between position information and prompt information, it can be output directly, avoiding the long latency and low efficiency of requesting the target prompt information from the server. When no matching target prompt information is found in the locally pre-stored correspondence, the smart glasses can acquire prompt information matching the target position from the server side and output it, which improves the reliability of acquiring the target prompt information: even if no corresponding target prompt information is stored locally, it can still be obtained from the server, so that placement posture information that keeps the lenses of the smart glasses clean and/or commodity information for cleaning the lenses can be output to the user under various conditions.
It should be noted that, in the present invention, whether to obtain the target prompt information locally or from the server can be selected flexibly according to the state of the locally stored correspondence.
Optionally, after receiving the target prompt information matching the target position sent by the server, the method of the embodiment of the present invention may further include: adding the target position and the target prompt information, as a group of correspondences, to the preset correspondence between position information and prompt information.
That is, since neither position information matching the target position nor the corresponding target prompt information is stored locally, the target position and the target prompt information acquired from the server are added as a group of correspondences to the pre-stored correspondence. This enriches the locally stored correspondence, so that when the target position is touched again, the target prompt information can be obtained locally in time and recommended to the user.
In the embodiment of the invention, because the manufacturer's preset correspondence is standardized data derived from big data statistics, it is only weakly related to the user's personalized needs and habits of using the smart glasses. Therefore, if, during use of the smart glasses, a soiled target position has no matching target prompt information in the preset correspondence, the prompt information and target position acquired from the server can be added as a group of correspondences to the local preset correspondence between position information and prompt information, so that the locally stored correspondence is updated based on the user's habits of using the smart glasses.
Optionally, updating the locally stored correspondence on the smart glasses side based on the user's usage habits may also be implemented as follows:
when no target prompt information matching the target position is found in the correspondence, the smart glasses may acquire posture data through their own sensors; for example, various types of sensors may be arranged at the temples, frame, nose pads and other positions of the glasses, and the posture data is generated from sensor readings such as pressure and infrared parameters. Based on the posture data (which can indicate which position of the glasses is being soiled), placement posture information matching the posture data is determined (for example, a correct placement posture that neither soils the lenses nor damages the temples), and the placement posture information and the target position are added to the local correspondence as a group of correspondences.
For example, the temples of the smart glasses may be provided with pressure sensors; when they detect that a heavy object is pressing on the smart glasses, indicating that the glasses are in an environment where they are easily damaged, the user is advised to put the glasses into the glasses case.
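The temple pressure-sensor example above might be reduced to a simple threshold check. The threshold value and the prompt wording are illustrative assumptions; the patent only specifies "a heavy object is pressing".

```python
PRESSURE_THRESHOLD_N = 2.0  # assumed threshold for "a heavy object is pressing"

def temple_pressure_prompt(reading_newtons):
    """Return a prompt when the temple pressure sensor suggests the
    glasses are in an environment where they are easily damaged."""
    if reading_newtons > PRESSURE_THRESHOLD_N:
        return "Heavy object detected: please put the glasses into the glasses case"
    return None  # no prompt needed
```

A reading above the threshold yields a prompt; a light touch yields none.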
Note that, in the above embodiment, the position coordinates of the target position (i.e., the target coordinates) may be a single coordinate point or a coordinate range. When they are a single coordinate point, preferably, a coordinate range surrounding the point is generated from it (for example, with the point as origin, a range is generated according to a preset radius), and the generated coordinate range and the target prompt information are added as a group of correspondences to the preset correspondence between position information and prompt information. In this way, the next time the soiled position of the lens falls near that coordinate point, the target prompt information can be obtained from the local correspondence in time, avoiding fetching the same target prompt information from the server again, improving prompting efficiency, and saving information processing resources.
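Expanding a single touch point into a cached coordinate range, as suggested above, might look like the sketch below. The square-range shape and the default radius are assumptions; the patent only mentions using the point as origin with a preset radius.

```python
def range_around(point, radius=2.0):
    """Expand a coordinate point into a square range centred on it."""
    x, y = point
    return (x - radius, y - radius, x + radius, y + radius)

def cache_server_result(correspondence, surface, point, prompt, radius=2.0):
    """Store a server-provided prompt under a range around the point, so a
    nearby touch next time resolves locally instead of re-querying."""
    correspondence[(surface, range_around(point, radius))] = prompt

store = {}
cache_server_result(store, "outer", (5.0, 5.0), "Use a cleaning cloth")
```

Any later touch within the cached range can then be matched against `store` locally.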
Alternatively, when the target prompt information matching the target position sent by the server is received, if it includes commodity information for cleaning the lenses of the smart glasses, a cleaning commodity request may be sent to the server. The request may include identification information of the smart glasses (e.g., type information) and/or identification information of the lenses (e.g., model number or commodity parameters). After receiving the request, the server may obtain commodity information suitable for cleaning the lenses of the smart glasses based on this identification information and return it to the smart glasses, which then output the commodity information obtained from the server.
The merchandise information may be a purchase link for the merchandise; the types of merchandise for cleaning the lens include, but are not limited to, cleaning agents, cleaning cloths, and the like.
The commodity information may be output in the form of a display on a screen of the smart glasses.
Referring to fig. 2, a block diagram of smart glasses according to an embodiment of the present invention is shown. The smart glasses provided by the embodiment of the invention can implement the details of the information prompting method in the above embodiment and achieve the same effect.
The smart glasses shown in fig. 2 include:
a receiving module 21 for receiving a first input to the smart glasses;
an output module 22, configured to respond to the first input, and output a target prompt message when a target position of the first input meets a first preset condition;
the target prompt information includes at least one of placement posture information of the smart glasses and commodity information for cleaning the lenses of the smart glasses.
Optionally, the output module 22 is specifically configured to:
respond to the first input and, if target prompt information matching the target position exists in the preset correspondence between position information and prompt information, output the target prompt information.
Optionally, the position information includes at least one of a lens surface and a coordinate range, and the target position includes at least one of a target surface and a target coordinate of the lens;
the output module is specifically configured to:
if a lens surface identical to the target surface exists in the preset correspondence between position information and prompt information, output the target prompt information matching the target position in the correspondence;
or, if a coordinate range in the preset correspondence between position information and prompt information includes the target coordinate, output the target prompt information matching the target position in the correspondence.
Optionally, the coordinate ranges in the position information include a plurality of coordinate ranges generated by performing mesh division on the lens surface of the smart glasses in advance.
Optionally, the receiving module 21 includes:
a first receiving sub-module, configured to receive the target prompt information matching the target position sent by the server if no target prompt information matching the target position exists in the preset correspondence between position information and prompt information.
The smart glasses provided by the embodiment of the present invention can implement each process implemented by the smart glasses in the above method embodiment; to avoid repetition, details are not repeated here.
Through the above modules, the smart glasses receive a first input to a target position on the lenses and, when the target position of the first input meets the first preset condition, output placement posture information of the smart glasses matching the target position and/or commodity information for cleaning the lenses. Thus, when the target position of the smart glasses is in an environment where it is easily soiled, the user can be prompted with placement posture information of the smart glasses and/or relevant lens-cleaning merchandise can be recommended in time. In the case that the lenses are soiled, the user can be prompted with a proper placement posture that avoids soiling the lenses, and/or recommended commodity information for cleaning them, realizing automatic prompting of placement-posture and cleaning-merchandise maintenance information. This solves the problem in the related art that, when users maintain smart glasses subjectively, an improper placement posture is hard to discover and correct in time so that the lenses become soiled, and it can improve the maintenance of the smart glasses.
Fig. 3 is a schematic hardware structure of an electronic device implementing various embodiments of the present invention.
The electronic device 400 includes, but is not limited to: radio frequency unit 401, network module 402, audio output unit 403, input unit 404, sensor 405, display unit 406, user input unit 407, interface unit 408, memory 409, processor 410, and power source 411. Those skilled in the art will appreciate that the electronic device structure shown in fig. 3 does not constitute a limitation of the electronic device, and the electronic device may include more or fewer components than shown, or may combine certain components, or may have a different arrangement of components. In an embodiment of the invention, the electronic device comprises a wearable device, wherein the wearable device can be smart glasses or smart watches.
A sensor 405 for receiving a first input to the smart glasses;
the processor 410 is configured to respond to the first input and output target prompt information if a target position of the first input meets a first preset condition; the target prompt information includes at least one of placement posture information of the smart glasses and commodity information for cleaning the lenses of the smart glasses.
Taking the electronic device as smart glasses as an example, in the embodiment of the invention, by receiving a first input to a target position on the lenses of the smart glasses and, when the target position of the first input meets the first preset condition, outputting placement posture information of the smart glasses matching the target position and/or commodity information for cleaning the lenses, the user can be prompted with placement posture information when the glasses are in an easily soiled environment, and/or relevant lens-cleaning merchandise can be recommended in time. As with the method above, this realizes automatic prompting of placement-posture and cleaning-merchandise maintenance information, solves the problem in the related art that an improper placement posture of the smart glasses is hard to discover and correct in time so that the lenses become soiled, and can improve the maintenance of the smart glasses.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 401 may be used for transmitting and receiving signals during information transmission/reception or a call; specifically, downlink data from a base station is received and then delivered to the processor 410 for processing, and uplink data is transmitted to the base station. Typically, the radio frequency unit 401 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 401 may also communicate with networks and other devices through a wireless communication system.
The electronic device provides wireless broadband internet access to the user through the network module 402, such as helping the user to send and receive e-mail, browse web pages, and access streaming media, etc.
The audio output unit 403 may convert audio data received by the radio frequency unit 401 or the network module 402 or stored in the memory 409 into an audio signal and output as sound. Also, the audio output unit 403 may also provide audio output (e.g., a call signal reception sound, a message reception sound, etc.) related to a specific function performed by the electronic device 400. The audio output unit 403 includes a speaker, a buzzer, a receiver, and the like.
The input unit 404 is used to receive an audio or video signal. The input unit 404 may include a graphics processor (Graphics Processing Unit, GPU) 4041 and a microphone 4042, the graphics processor 4041 processing image data of still pictures or video obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The processed image frames may be displayed on the display unit 406. The image frames processed by the graphics processor 4041 may be stored in memory 409 (or other storage medium) or transmitted via the radio frequency unit 401 or the network module 402. The microphone 4042 may receive sound and may be capable of processing such sound into audio data. The processed audio data may be converted into a format output that can be transmitted to the mobile communication base station via the radio frequency unit 401 in the case of a telephone call mode.
The electronic device 400 also includes at least one sensor 405, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor that can adjust the brightness of the display panel 4061 according to the brightness of ambient light, and a proximity sensor that can turn off the display panel 4061 and/or the backlight when the electronic device 400 is moved to the ear. As one kind of motion sensor, the accelerometer sensor can detect the magnitude of acceleration in all directions (generally three axes), can detect the magnitude and direction of gravity when stationary, and can be used for recognizing the posture of the electronic device (such as portrait/landscape switching, related games, and magnetometer posture calibration), vibration-recognition functions (such as pedometer and tap detection), and the like; the sensor 405 may further include a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, etc., which are not described herein.
The display unit 406 is used to display information input by a user or information provided to the user. The display unit 406 may include a display panel 4061, and the display panel 4061 may be configured in the form of a liquid crystal display (Liquid Crystal Display, LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 407 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the electronic device. Specifically, the user input unit 407 includes a touch panel 4071 and other input devices 4072. The touch panel 4071, also referred to as a touch screen, may collect touch operations thereon or thereabout by a user (e.g., operations of the user on the touch panel 4071 or thereabout using any suitable object or accessory such as a finger, stylus, etc.). The touch panel 4071 may include two parts, a touch detection device and a touch controller. The touch detection device detects the touch azimuth of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch detection device, converts it into touch point coordinates, and sends the touch point coordinates to the processor 410, and receives and executes commands sent from the processor 410. In addition, the touch panel 4071 may be implemented in various types such as resistive, capacitive, infrared, and surface acoustic wave. The user input unit 407 may include other input devices 4072 in addition to the touch panel 4071. In particular, other input devices 4072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein.
Further, the touch panel 4071 may be overlaid on the display panel 4061, and when the touch panel 4071 detects a touch operation thereon or thereabout, the touch operation is transferred to the processor 410 to determine the type of touch event, and then the processor 410 provides a corresponding visual output on the display panel 4061 according to the type of touch event. Although in fig. 3, the touch panel 4071 and the display panel 4061 are two independent components for implementing the input and output functions of the electronic device, in some embodiments, the touch panel 4071 may be integrated with the display panel 4061 to implement the input and output functions of the electronic device, which is not limited herein.
The interface unit 408 is an interface to which an external device is connected to the electronic apparatus 400. For example, the external devices may include a wired or wireless headset port, an external power (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 408 may be used to receive input (e.g., data information, power, etc.) from an external device and transmit the received input to one or more elements within the electronic apparatus 400 or may be used to transmit data between the electronic apparatus 400 and an external device.
Memory 409 may be used to store software programs as well as various data. The memory 409 may mainly include a storage program area that may store an operating system, application programs required for at least one function (such as a sound playing function, an image playing function, etc.), and a storage data area; the storage data area may store data (such as audio data, phonebook, etc.) created according to the use of the handset, etc. In addition, memory 409 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid-state storage device.
The processor 410 is a control center of the electronic device, connects various parts of the entire electronic device using various interfaces and lines, and performs various functions of the electronic device and processes data by running or executing software programs and/or modules stored in the memory 409 and invoking data stored in the memory 409, thereby performing overall monitoring of the electronic device. Processor 410 may include one or more processing units; preferably, the processor 410 may integrate an application processor that primarily handles operating systems, user interfaces, applications, etc., with a modem processor that primarily handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 410.
The electronic device 400 may also include a power supply 411 (e.g., a battery) for powering the various components, and preferably the power supply 411 may be logically connected to the processor 410 via a power management system that performs functions such as managing charging, discharging, and power consumption.
In addition, the electronic device 400 includes some functional modules, which are not shown, and are not described herein.
Preferably, an embodiment of the present invention further provides an electronic device, including a processor 410, a memory 409, and a computer program stored in the memory 409 and executable on the processor 410. When executed by the processor 410, the computer program implements each process of the above information prompting method embodiment and can achieve the same technical effects; to avoid repetition, details are not repeated here.
An embodiment of the invention also provides a computer-readable storage medium on which a computer program is stored. When executed by a processor, the computer program implements each process of the above information prompting method embodiment and can achieve the same technical effects; to avoid repetition, details are not repeated here. The computer-readable storage medium may be, for example, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element.
From the above description of the embodiments, it will be clear to those skilled in the art that the method of the above embodiments may be implemented by means of software plus a necessary general hardware platform, or of course by hardware, though in many cases the former is the preferred embodiment. Based on such understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) comprising instructions for causing a terminal (which may be a mobile phone, a computer, a server, an air conditioner, a network device, etc.) to perform the method according to the embodiments of the present invention.
The embodiments of the present invention have been described above with reference to the accompanying drawings, but the present invention is not limited to the above-described embodiments, which are merely illustrative and not restrictive. In light of the present invention, those of ordinary skill in the art may make many further forms without departing from the spirit of the present invention and the scope of the claims, all of which fall within the protection of the present invention.
Claims (6)
1. An information prompting method applied to intelligent glasses is characterized by comprising the following steps:
receiving a first input to a lens of the smart glasses;
responding to the first input and, if a lens surface in the preset correspondence between position information and prompt information is the same as a target surface of the lens, outputting target prompt information matching the target surface in the correspondence; or, if a coordinate range in the preset correspondence between position information and prompt information includes a target coordinate, outputting target prompt information matching the target coordinate in the correspondence;
wherein the target prompt information comprises at least one of placement posture information of the smart glasses and commodity information for cleaning the lenses of the smart glasses.
2. The method of claim 1, wherein the coordinate ranges in the position information include a plurality of coordinate ranges generated by performing mesh division on the lens surface of the smart glasses in advance.
3. The method according to claim 1, wherein the method further comprises:
if no target prompt information matching the target position exists in the preset correspondence between position information and prompt information, receiving target prompt information matching the target position sent by a server.
4. Smart glasses, comprising:
a receiving module for receiving a first input to a lens of the smart glasses;
an output module, configured to respond to the first input and, if a lens surface in the preset correspondence between position information and prompt information is the same as a target surface of the lens, output target prompt information matching the target surface in the correspondence; or, if a coordinate range in the preset correspondence between position information and prompt information includes a target coordinate, output target prompt information matching the target coordinate in the correspondence;
wherein the target prompt information comprises at least one of placement posture information of the smart glasses and commodity information for cleaning the lenses of the smart glasses.
5. An electronic device, comprising: a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the steps of the information prompting method according to any one of claims 1 to 3.
6. A computer-readable storage medium, on which a computer program is stored, which computer program, when being executed by a processor, implements the steps in the information-prompting method according to any one of claims 1 to 3.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010291839.0A CN111475733B (en) | 2020-04-14 | 2020-04-14 | Information prompting method and intelligent glasses |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111475733A CN111475733A (en) | 2020-07-31 |
CN111475733B true CN111475733B (en) | 2024-04-12 |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112927711A (en) * | 2021-01-20 | 2021-06-08 | 维沃移动通信有限公司 | Audio parameter adjusting method and device of intelligent glasses |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104597622A (en) * | 2015-02-15 | 2015-05-06 | 张晓亮 | Anti-dazzling glasses and method |
CN108416337A (en) * | 2018-04-28 | 2018-08-17 | 北京小米移动软件有限公司 | User is reminded to clean the method and device of camera lens |
CN109297975A (en) * | 2018-08-16 | 2019-02-01 | 奇酷互联网络科技(深圳)有限公司 | Mobile terminal and detection method, storage device |
CN109960032A (en) * | 2017-12-22 | 2019-07-02 | 托普瑞德(无锡)设计顾问有限公司 | Solar-powered dust-removal AR smart glasses |
CN209560698U (en) * | 2019-01-31 | 2019-10-29 | 上海市黄浦区董家渡路第二小学 | Alarm device for correct glasses placement |
2020-04-14: CN application CN202010291839.0A filed; granted as patent CN111475733B (status: Active)
Also Published As
Publication number | Publication date |
---|---|
CN111475733A (en) | 2020-07-31 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN105867751B (en) | Operation information processing method and device | |
CN107580147B (en) | Management method of notification message and mobile terminal | |
CN109409244B (en) | Output method of object placement scheme and mobile terminal | |
CN109871174B (en) | Virtual key display method and mobile terminal | |
CN108415652A (en) | Text processing method and mobile terminal | |
CN107728923B (en) | Operation processing method and mobile terminal | |
CN109639863B (en) | Voice processing method and device | |
CN110531915B (en) | Screen operation method and terminal equipment | |
CN109343788B (en) | Operation control method of mobile terminal and mobile terminal | |
CN109710349B (en) | Screen capturing method and mobile terminal | |
CN109343693B (en) | Brightness adjusting method and terminal equipment | |
CN108196815B (en) | Method for adjusting call sound and mobile terminal | |
CN108874906B (en) | Information recommendation method and terminal | |
CN111444425B (en) | Information pushing method, electronic equipment and medium | |
CN110096203B (en) | Screenshot method and mobile terminal | |
CN110971510A (en) | Message processing method and electronic equipment | |
CN111130989A (en) | Information display and sending method and electronic equipment | |
CN108196781B (en) | Interface display method and mobile terminal | |
CN110795002A (en) | Screenshot method and terminal equipment | |
CN104915625B (en) | Face recognition method and device | |
CN111061446A (en) | Display method and electronic equipment | |
CN108322897B (en) | Card plan combination method and device | |
CN110309003B (en) | Information prompting method and mobile terminal | |
CN109982273B (en) | Information reply method and mobile terminal | |
CN110784394A (en) | Prompting method and electronic equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||