CN104932782A - Information processing method and apparatus and smart glasses - Google Patents


Info

Publication number
CN104932782A
CN104932782A (application CN201410103435.9A)
Authority
CN
China
Prior art keywords
touch
user
glasses
touch operation
preset instruction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201410103435.9A
Other languages
Chinese (zh)
Inventor
邵翔
李琦
李佳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lenovo Beijing Ltd filed Critical Lenovo Beijing Ltd
Priority to CN201410103435.9A priority Critical patent/CN104932782A/en
Publication of CN104932782A publication Critical patent/CN104932782A/en
Pending legal-status Critical Current

Landscapes

  • User Interface Of Digital Computer (AREA)

Abstract

The present invention discloses an information processing method and apparatus, and smart glasses. The glasses comprise at least a spectacle frame that can sense touch operations. The method comprises: acquiring a first operation input by a user; triggering the spectacle frame into an activated state based on the first operation; detecting, through the spectacle frame, a first touch operation input by the user; judging whether the first touch operation matches the touch operation of a preset instruction, to obtain a first judgment result; and when the first judgment result indicates that the first touch operation matches the touch operation of the preset instruction, executing the processing procedure corresponding to the preset instruction. The present invention also discloses a corresponding information processing apparatus and smart glasses. With the technical scheme provided by the present invention, the user can operate the smart glasses more quickly and conveniently, thereby improving the user experience.

Description

Information processing method and apparatus, and smart glasses
Technical Field
The present invention relates to electronic technologies, and in particular, to an information processing method and apparatus, and smart glasses.
Background
With the continuous development of science and technology, electronic technology has advanced rapidly and the variety of electronic products keeps growing, so people enjoy many conveniences brought by this development. Users now have access to many types of electronic devices, such as wearable electronic devices, and enjoy the comfortable life that comes with technological progress.
Wearable electronic devices such as smart watches and smart glasses play an increasingly important role in people's daily lives. As technology develops, more and more functions, and even many applications, are added to such devices. How to use these functions and applications more quickly and conveniently has become a problem that urgently needs to be solved.
Disclosure of Invention
In view of this, embodiments of the present invention provide an information processing method and apparatus, and smart glasses to solve the problems in the prior art, so that a user can use the smart glasses more quickly and conveniently, thereby improving user experience.
The technical scheme of the embodiment of the invention is realized as follows:
an information processing method, applied to smart glasses, the glasses comprising at least a glasses frame capable of sensing touch operations, the method comprising the following steps:
acquiring a first operation input by a user;
triggering the glasses frame into an activated state based on the first operation;
detecting, through the glasses frame, a first touch operation input by the user;
judging whether the first touch operation matches the touch operation of a preset instruction, to obtain a first judgment result;
and when the first judgment result indicates that the first touch operation matches the touch operation of the preset instruction, executing the processing procedure corresponding to the preset instruction.
Preferably, the method further comprises:
acquiring a second operation input by the user;
triggering the glasses frame into a closed state based on the second operation, wherein in the closed state the glasses frame cannot detect the user's touch operations.
Preferably, the judging whether the first touch operation matches the touch operation of a preset instruction to obtain a first judgment result comprises:
determining a touch type and a touch position based on the first touch operation;
and judging, according to the correspondence between preset instructions and touch positions in the database for that touch type, whether a preset instruction corresponds to the touch position, to obtain the first judgment result.
Preferably, the acquiring of the first operation input by the user comprises:
detecting whether a touch operation satisfying a first predetermined condition occurs, and when it does, determining that the first operation input by the user has been acquired; or,
detecting whether a key operation satisfying a second predetermined condition occurs, and when it does, determining that the first operation input by the user has been acquired; or,
detecting whether the glasses undergo a posture change satisfying a third predetermined condition, and when they do, determining that the first operation input by the user has been acquired; or,
detecting whether a voice operation satisfying a fourth predetermined condition occurs, and when it does, determining that the first operation input by the user has been acquired.
Smart glasses, comprising a glasses frame, a microprocessor arranged on the glasses frame or the temples, and a power supply that powers at least the microprocessor; wherein,
the glasses frame is made of a sensing material capable of detecting a user's touch operations; when in an activated state, the glasses frame is configured to detect a first touch operation of the user, generate a first touch signal in response to the first touch operation, and send the first touch signal to the microprocessor;
the microprocessor is configured to receive the first touch signal sent by the glasses frame and judge whether the first touch signal matches the touch signal of a preset instruction, to obtain a second judgment result; and, when the second judgment result indicates that the first touch signal matches the touch signal of the preset instruction, to control the corresponding execution unit to execute the processing procedure corresponding to the preset instruction.
Preferably, the glasses further include a nose pad made of a sensing material capable of detecting a user's touch operations, configured to detect a second touch operation of the user, generate a second touch signal in response to the second touch operation, and send the second touch signal to the microprocessor;
correspondingly, the microprocessor is configured to receive the second touch signal sent by the nose pad and judge whether the second touch signal matches the touch signal of a preset instruction, to obtain a third judgment result; and, when the third judgment result indicates that the second touch signal matches the touch signal of the preset instruction, to control the corresponding execution unit to execute the processing procedure corresponding to the preset instruction.
Preferably, the glasses further comprise temples and a nose pad, both made of sensing materials capable of detecting a user's touch operations;
the nose pad or the temples are further configured to acquire a second operation input by the user, generate a second signal in response to the second operation, and send the second signal to the microprocessor;
correspondingly, the microprocessor is further configured to control the glasses frame into a closed state, in which the glasses frame cannot detect the user's touch operations.
Preferably, the first touch signal includes a touch type and a touch position;
correspondingly, the microprocessor is further configured to judge, according to the touch type and the touch position, whether a corresponding preset instruction exists in the instruction library, to obtain the first judgment result.
Preferably, the power supply comprises two batteries, arranged symmetrically at the ends of the two temples.
An information processing apparatus, applied to smart glasses, the glasses comprising at least a glasses frame capable of sensing touch operations, the apparatus comprising a first acquisition unit, a first trigger unit, a first detection unit, a judging unit, and an execution unit, wherein:
the first acquisition unit is configured to acquire a first operation input by a user;
the first trigger unit is configured to trigger the glasses frame into an activated state based on the first operation;
the first detection unit is configured to detect, through the glasses frame, a first touch operation input by the user;
the judging unit is configured to judge whether the first touch operation matches the touch operation of a preset instruction, to obtain a first judgment result;
the execution unit is configured to execute the processing procedure corresponding to the preset instruction when the first judgment result indicates that the first touch operation matches the touch operation of the preset instruction.
In the embodiments of the present invention, a first operation input by a user is acquired; the glasses frame is triggered into an activated state based on the first operation; a first touch operation input by the user is detected through the glasses frame; whether the first touch operation matches the touch operation of a preset instruction is judged, to obtain a first judgment result; and when the first judgment result indicates a match, the processing procedure corresponding to the preset instruction is executed. In this way, the smart glasses can be used more quickly and conveniently, thereby improving the user experience.
Drawings
FIG. 1 is a schematic flow chart illustrating an implementation of an information processing method according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart of an implementation of a second information processing method according to an embodiment of the present invention;
FIG. 3-1 is a schematic diagram of a structure of a third information processing apparatus according to an embodiment of the present invention;
FIG. 3-2 is a schematic diagram of a structure of a determining unit according to a third embodiment of the present invention;
fig. 4 is a schematic diagram of a composition structure of a fourth information processing apparatus according to an embodiment of the present invention.
Detailed Description
The technical solution of the present invention is further elaborated below with reference to the drawings and the specific embodiments.
Example one
The information processing method provided by this embodiment of the present invention is applied to smart glasses, the glasses comprising at least a glasses frame capable of sensing touch operations; fig. 1 is a schematic flow chart illustrating an implementation of the information processing method according to the first embodiment of the present invention, and as shown in fig. 1, the method includes:
step 101, acquiring a first operation input by a user;
Here, the glasses frame is made of a sensing material capable of detecting a user's touch operations, such as a capacitive or inductive material, so that the frame can sense touch. When the user touches the frame with a specific gesture, the sensing material generates an electrical signal matching the input gesture and transmits it to the processor; the processor looks up the instruction library according to the type of the electrical signal and then performs the processing procedure corresponding to the input gesture.
step 102, triggering the glasses frame into an activated state based on the first operation;
step 103, detecting, through the glasses frame, a first touch operation input by the user;
step 104, judging whether the first touch operation matches the touch operation of a preset instruction, to obtain a first judgment result;
step 105, when the first judgment result indicates that the first touch operation matches the touch operation of a preset instruction, executing the processing procedure corresponding to the preset instruction.
Here, the preset instructions may be defined by the manufacturer before the smart glasses leave the factory, or may be customized by the user. The preset instructions are stored in an instruction library and can be used to complete functions such as map navigation, interacting with friends, taking photos and videos, making video calls with friends, and accessing a wireless network through a mobile communication network. Taking a photograph may in turn include: starting the camera to shoot and store the image locally, starting the camera to shoot and upload the image to a social network site such as a microblog, or invoking a search engine to search for keywords, such as nearby restaurants.
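The dispatch from a matched preset instruction to its processing procedure can be pictured as a simple table lookup against the instruction library. The sketch below is illustrative only; the instruction identifiers and procedure functions are hypothetical and not taken from this disclosure:

```python
# Hypothetical sketch of an instruction library mapping preset
# instructions to processing procedures. All names are illustrative.

def take_photo():
    return "photo saved locally"

def start_navigation():
    return "map navigation started"

def search_nearby(keyword):
    return f"searching for {keyword}"

# The instruction library: preset instruction id -> processing procedure.
INSTRUCTION_LIBRARY = {
    "PHOTO": take_photo,
    "NAVIGATE": start_navigation,
    "SEARCH_RESTAURANTS": lambda: search_nearby("nearby restaurants"),
}

def execute_preset_instruction(instruction_id):
    """Execute the processing procedure for a matched preset instruction."""
    procedure = INSTRUCTION_LIBRARY.get(instruction_id)
    if procedure is None:
        return None  # no matching preset instruction
    return procedure()
```

A user-customized instruction would simply add another entry to the table; the dispatch logic itself does not change.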
In this embodiment of the present invention, step 104 comprises steps C1 and C2, wherein:
step C1, determining a touch type and a touch position based on the first touch operation;
step C2, judging, according to the correspondence between preset instructions and touch positions in the database for that touch type, whether a preset instruction corresponds to the touch position, to obtain the first judgment result.
Here, the touch operations of the preset instructions may include the following types: single-finger touch, single-finger sliding, two-finger sliding, and two-finger touch. Each type may include a variety of operations; for example, single-finger sliding includes sliding leftward and sliding rightward with one finger.
Here, the touch position may include the left frame, the nose pad, and the right frame, where the left frame may be divided into an upper-left frame and a lower-left frame, and the right frame into an upper-right frame and a lower-right frame.
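Steps C1 and C2 amount to a keyed lookup: the database for each touch type maps touch positions to preset instructions, and the first judgment result is simply whether such an entry exists. A minimal sketch, with hypothetical type, position, and instruction names:

```python
# Hypothetical database keyed by touch type, mapping touch positions
# to preset instruction identifiers (steps C1 and C2). All names are
# illustrative assumptions.
DATABASE = {
    "single_finger_slide_left": {"right_frame_upper": "PREVIOUS_ITEM"},
    "single_finger_slide_right": {"right_frame_upper": "NEXT_ITEM"},
    "single_finger_touch": {"nose_pad": "TAKE_PHOTO",
                            "left_frame_lower": "NAVIGATE"},
}

def match_touch_operation(touch_type, touch_position):
    """Return (matched, instruction): the first judgment result and,
    when matched, the corresponding preset instruction."""
    positions = DATABASE.get(touch_type, {})
    instruction = positions.get(touch_position)
    return (instruction is not None), instruction
```

The same gesture at a different position can thus map to a different instruction, which is why both the type and the position must be determined in step C1.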
In this embodiment of the present invention, step 101 comprises steps A1 and A2, wherein:
step A1, detecting whether a touch operation satisfying the first predetermined condition occurs; correspondingly, step A2, when the touch operation satisfies the first predetermined condition, determining that the first operation input by the user has been acquired; or,
step A1, detecting whether a key operation satisfying the second predetermined condition occurs; correspondingly, step A2, when the key operation satisfies the second predetermined condition, determining that the first operation input by the user has been acquired; or,
step A1, detecting whether the glasses undergo a posture change satisfying the third predetermined condition; correspondingly, step A2, when the posture change satisfies the third predetermined condition, determining that the first operation input by the user has been acquired; or,
step A1, detecting whether a voice operation satisfying the fourth predetermined condition occurs; correspondingly, step A2, when the voice operation satisfies the fourth predetermined condition, determining that the first operation input by the user has been acquired.
Here, the first predetermined condition may be, but is not limited to, one of the following: the touch operation is a single-tap touch operation; a double-tap touch operation; a three-finger touch operation; or a five-finger touch operation.
Here, the second predetermined condition may be, but is not limited to, one of the following: the key operation is a long press of a certain key, for example pressing a function key such as the HOME key for more than 2 seconds; or the key operation is a combination of certain keys, for example the HOME key together with the BACK key.
Here, the posture change may be a posture change of the smart glasses, such as the user shaking his or her head; the third predetermined condition may be, for example, shaking the head twice, which in a specific implementation can be detected with an acceleration sensor. In addition, the first operation may also be a voice-controlled operation, which those skilled in the art may implement with various existing voice control techniques; details are not repeated here.
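One plausible way to detect the head-shake posture change with an acceleration sensor is to count strong left/right reversals of lateral acceleration. The following sketch is an illustration under stated assumptions; the threshold value, sample format, and shake count are not specified in this disclosure:

```python
# Hypothetical head-shake detector: count direction reversals of
# lateral acceleration that exceed a threshold. Reaching the required
# number of reversals is treated as "shaking the head twice".
# The threshold and units are illustrative assumptions.

def detect_head_shake(lateral_accel_samples, threshold=3.0, shakes_required=2):
    """Return True if the acceleration trace contains at least
    `shakes_required` strong left/right direction reversals."""
    shakes = 0
    last_direction = 0
    for a in lateral_accel_samples:
        if abs(a) < threshold:
            continue  # ignore weak motion below the threshold
        direction = 1 if a > 0 else -1
        if last_direction and direction != last_direction:
            shakes += 1  # a strong reversal of direction
        last_direction = direction
    return shakes >= shakes_required
```

A real implementation would additionally bound the time window and filter sensor noise, but the reversal count captures the gesture's essential signature.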
With the technical scheme provided by this embodiment of the present invention, for a person wearing glasses, the glasses frame is the part most likely to be touched, and touching the frame is more convenient and natural than touching other parts of the glasses, such as the lenses.
Example two
The information processing method provided by this embodiment of the present invention is applied to smart glasses, the glasses comprising at least a glasses frame capable of sensing touch operations; fig. 2 is a schematic flow chart of an implementation of the information processing method according to the second embodiment of the present invention, and as shown in fig. 2, the method includes:
step 201, acquiring a first operation input by a user;
step 202, triggering the glasses frame into an activated state based on the first operation;
step 203, detecting, through the glasses frame, a first touch operation input by the user;
step 204, judging whether the first touch operation matches the touch operation of a preset instruction, to obtain a first judgment result;
step 205, when the first judgment result indicates that the first touch operation matches the touch operation of a preset instruction, executing the processing procedure corresponding to the preset instruction;
step 206, acquiring a second operation input by the user;
step 207, triggering the glasses frame into a closed state based on the second operation, wherein in the closed state the glasses frame cannot detect the user's touch operations.
Here, the closed state may be a deactivated state.
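The activated/closed behavior of steps 201 to 207 can be summarized as a two-state machine: touch operations are only detected and processed while the frame is activated. A minimal sketch with hypothetical names:

```python
# Hypothetical sketch of the frame's activated/closed state machine
# described in steps 201-207. Names are illustrative.

class SmartGlassesFrame:
    def __init__(self):
        self.state = "closed"  # the frame starts deactivated

    def on_first_operation(self):
        """Steps 201-202: a first operation activates the frame."""
        self.state = "activated"

    def on_second_operation(self):
        """Steps 206-207: a second operation closes the frame."""
        self.state = "closed"

    def on_touch(self, touch):
        """Steps 203-205: touches are only handled while activated."""
        if self.state != "activated":
            return None  # closed: the frame cannot detect touches
        return f"processing touch: {touch}"
```

Gating touch input behind the activation step prevents accidental touches, such as adjusting the glasses, from triggering preset instructions.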
In this embodiment of the present invention, step 204 comprises steps C1 and C2, wherein:
step C1, determining a touch type and a touch position based on the first touch operation;
step C2, judging, according to the correspondence between preset instructions and touch positions in the database for that touch type, whether a preset instruction corresponds to the touch position, to obtain the first judgment result.
In this embodiment of the present invention, step 206 comprises steps A1 and A2, wherein:
step A1, detecting whether a touch operation satisfying the fifth predetermined condition occurs; correspondingly, step A2, when the touch operation satisfies the fifth predetermined condition, determining that the second operation input by the user has been acquired; or,
step A1, detecting whether a key operation satisfying the sixth predetermined condition occurs; correspondingly, step A2, when the key operation satisfies the sixth predetermined condition, determining that the second operation input by the user has been acquired; or,
step A1, detecting whether the glasses undergo a posture change satisfying the seventh predetermined condition; correspondingly, step A2, when the posture change satisfies the seventh predetermined condition, determining that the second operation input by the user has been acquired; or,
step A1, detecting whether a voice operation satisfying the eighth predetermined condition occurs; correspondingly, step A2, when the voice operation satisfies the eighth predetermined condition, determining that the second operation input by the user has been acquired.
Here, the fifth predetermined condition may be the same as or different from the first, the sixth the same as or different from the second, the seventh the same as or different from the third, and the eighth the same as or different from the fourth; step 206 is similar to step 101 and is therefore not described in detail.
EXAMPLE III
This embodiment of the present invention provides an information processing apparatus, applied to smart glasses, the glasses comprising at least a glasses frame capable of sensing touch operations; fig. 3-1 is a schematic view of the composition of the information processing apparatus according to the third embodiment of the present invention, and as shown in fig. 3-1, the apparatus includes a first acquisition unit 31, a first trigger unit 32, a first detection unit 33, a judging unit 34, and an execution unit 35, wherein:
the first acquisition unit 31 is configured to acquire a first operation input by a user;
Here, the glasses frame is made of a sensing material capable of detecting a user's touch operations, such as a capacitive or inductive material, so that the frame can sense touch. When the user touches the frame with a specific gesture, the sensing material generates an electrical signal matching the input gesture and transmits it to the processor; the processor looks up the instruction library according to the type of the electrical signal and then performs the processing procedure corresponding to the input gesture.
The first trigger unit 32 is configured to trigger the glasses frame into an activated state based on the first operation;
the first detection unit 33 is configured to detect, through the glasses frame, a first touch operation input by the user;
the judging unit 34 is configured to judge whether the first touch operation matches the touch operation of a preset instruction, to obtain a first judgment result;
the execution unit 35 is configured to execute the processing procedure corresponding to the preset instruction when the first judgment result indicates that the first touch operation matches the touch operation of the preset instruction.
Here, the preset instructions may be defined by the manufacturer before the smart glasses leave the factory, or may be customized by the user. The preset instructions are stored in an instruction library and can be used to complete functions such as map navigation, interacting with friends, taking photos and videos, making video calls with friends, and accessing a wireless network through a mobile communication network. Taking a photograph may in turn include: starting the camera to shoot and store the image locally, starting the camera to shoot and upload the image to a social network site such as a microblog, or invoking a search engine to search for keywords, such as nearby restaurants.
In this embodiment of the present invention, as shown in fig. 3-2, the judging unit 34 includes a first determining module 341 and a determining module 342, wherein:
the first determining module 341 is configured to determine a touch type and a touch position based on the first touch operation;
the determining module 342 is configured to judge, according to the correspondence between preset instructions and touch positions in the database for that touch type, whether a preset instruction corresponds to the touch position, to obtain the first judgment result.
Here, the touch operations of the preset instructions may include the following types: single-finger touch, single-finger sliding, two-finger sliding, and two-finger touch. Each type may include a variety of operations; for example, single-finger sliding includes sliding leftward and sliding rightward with one finger.
Here, the touch position may include the left frame, the nose pad, and the right frame, where the left frame may be divided into an upper-left frame and a lower-left frame, and the right frame into an upper-right frame and a lower-right frame.
In this embodiment of the present invention, the first acquisition unit 31 includes a detection module and a second determining module, wherein:
the detection module is configured to detect whether a touch operation satisfying the first predetermined condition occurs; correspondingly, the second determining module is configured to determine, when the touch operation satisfies the first predetermined condition, that the first operation input by the user has been acquired; or,
the detection module is configured to detect whether a key operation satisfying the second predetermined condition occurs; correspondingly, the second determining module is configured to determine, when the key operation satisfies the second predetermined condition, that the first operation input by the user has been acquired; or,
the detection module is configured to detect whether the glasses undergo a posture change satisfying the third predetermined condition; correspondingly, the second determining module is configured to determine, when the posture change satisfies the third predetermined condition, that the first operation input by the user has been acquired; or,
the detection module is configured to detect whether a voice operation satisfying the fourth predetermined condition occurs; correspondingly, the second determining module is configured to determine, when the voice operation satisfies the fourth predetermined condition, that the first operation input by the user has been acquired.
Here, the first predetermined condition may be, but is not limited to, one of the following: the touch operation is a single-tap touch operation; a double-tap touch operation; a three-finger touch operation; or a five-finger touch operation.
Here, the second predetermined condition may be, but is not limited to, one of the following: the key operation is a long press of a certain key, for example pressing a function key such as the HOME key for more than 2 seconds; or the key operation is a combination of certain keys, for example the HOME key together with the BACK key.
Here, the posture change may be a posture change of the smart glasses, such as the user shaking his or her head; the third predetermined condition may be, for example, shaking the head twice, which in a specific implementation can be detected with an acceleration sensor. In addition, the first operation may also be a voice-controlled operation, which those skilled in the art may implement with various existing voice control techniques; details are not repeated here.
With the technical scheme provided by this embodiment of the present invention, for a person wearing glasses, the glasses frame is the part most likely to be touched, and touching the frame is more convenient and natural than touching other parts of the glasses, such as the lenses.
Example four
Based on the third embodiment, the information processing apparatus provided by this embodiment of the present invention is applied to smart glasses, the glasses comprising at least a glasses frame; fig. 4 is a schematic view of the composition of the information processing apparatus according to the fourth embodiment of the present invention, and as shown in fig. 4, the apparatus includes a first acquisition unit 41, a first trigger unit 42, a first detection unit 43, a judging unit 44, an execution unit 45, a second acquisition unit 46, and a second trigger unit 47, wherein:
the first acquisition unit 41 is configured to acquire a first operation input by a user;
the first trigger unit 42 is configured to trigger the glasses frame into an activated state based on the first operation;
the first detection unit 43 is configured to detect, through the glasses frame, a first touch operation input by the user;
the judging unit 44 is configured to judge whether the first touch operation matches the touch operation of a preset instruction, to obtain a first judgment result;
the execution unit 45 is configured to execute the processing procedure corresponding to the preset instruction when the first judgment result indicates that the first touch operation matches the touch operation of the preset instruction;
the second acquisition unit 46 is configured to acquire a second operation input by the user;
the second trigger unit 47 is configured to trigger the glasses frame into a closed state based on the second operation, wherein in the closed state the glasses frame cannot detect the user's touch operations.
Here, the closed state may be a deactivated state.
In this embodiment of the present invention, as shown in fig. 3-2, the judging unit 44 includes a first determining module 341 and a determining module 342, wherein:
the first determining module 341 is configured to determine a touch type and a touch position based on the first touch operation;
the determining module 342 is configured to judge, according to the correspondence between preset instructions and touch positions in the database for that touch type, whether a preset instruction corresponds to the touch position, to obtain the first judgment result.
In this embodiment of the present invention, the first obtaining unit 41 includes a detecting module and a second determining module, where:
the detecting module is configured to detect whether a touch operation satisfying a first predetermined condition occurs; correspondingly, the second determining module is configured to determine that the first operation input by the user is obtained when the touch operation satisfies the first predetermined condition; or,
the detecting module is configured to detect whether a key operation satisfying a second predetermined condition occurs; correspondingly, the second determining module is configured to determine that the first operation input by the user is obtained when the key operation satisfies the second predetermined condition; or,
the detecting module is configured to detect whether the glasses undergo a posture change satisfying a third predetermined condition; correspondingly, the second determining module is configured to determine that the first operation input by the user is obtained when the posture change of the glasses satisfies the third predetermined condition; or,
the detecting module is configured to detect whether a voice operation satisfying a fourth predetermined condition occurs; correspondingly, the second determining module is configured to determine that the first operation input by the user is obtained when the voice operation satisfies the fourth predetermined condition.
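The four alternative detection paths above can be sketched as one dispatcher that checks each kind of event against its own predetermined condition. Every threshold and field name below is an illustrative assumption; the patent does not specify the concrete conditions.

```python
def is_first_operation(event):
    """Return True when the event satisfies its predetermined condition,
    i.e. when it counts as the first operation input by the user."""
    kind = event.get("kind")
    if kind == "touch":    # first predetermined condition: e.g. a long press
        return event.get("duration_s", 0.0) >= 2.0
    if kind == "key":      # second predetermined condition: a dedicated key
        return event.get("key") == "power"
    if kind == "posture":  # third predetermined condition: e.g. a head nod
        return abs(event.get("pitch_deg", 0.0)) >= 30.0
    if kind == "voice":    # fourth predetermined condition: a wake phrase
        return event.get("text", "").strip().lower() == "activate glasses"
    return False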
In the third and fourth embodiments of the present invention, the second acquiring unit is similar to the first acquiring unit, and the second triggering unit is similar to the first triggering unit; for convenience of description, the first acquiring unit and the first triggering unit are taken as examples to represent the second acquiring unit and the second triggering unit, respectively. The first acquiring unit may be a unit located at another part of the glasses and capable of sensing a user input operation; for example, the first acquiring unit may be a microphone that senses the user's voice, generates a corresponding instruction according to the received voice, and sends the instruction to the processor, which then controls whether the glasses frame is in the activated state or the deactivated state. Thus, the first triggering unit may be a virtual unit implemented by the processor. Correspondingly, the first detecting unit is the glasses frame: when the frame detects an input touch operation, it converts the input into an electric signal and transmits the signal to the processor. The determining unit may likewise be a virtual unit implemented by the processor. The execution unit may be a physical unit, such as a camera, a distance sensor, a light sensor, a microphone assembly, an activation sensor, a gyroscope, an accelerometer, or a positioning unit such as one based on the Global Positioning System (GPS); or it may be a virtual unit implemented by the processor, such as a search module that invokes a search engine to complete a search function. Those skilled in the art may define the corresponding execution unit according to the function to be completed by the preset instruction, and details are not described herein.
Embodiment Five
Based on the first to fourth embodiments, an embodiment of the present invention provides smart glasses including a glasses frame; the glasses further include a microprocessor arranged on the glasses frame or the temples, and a power supply that supplies power to at least the microprocessor; wherein:
the glasses frame is made of a sensing material capable of detecting a user's touch operations; when the glasses frame is in an activated state, it is configured to detect a first touch operation of the user, generate a first touch signal in response to the first touch operation, and send the first touch signal to the microprocessor;
the microprocessor is configured to receive the first touch signal sent by the glasses frame, and determine whether the first touch signal matches a touch signal of a preset instruction to obtain a second determination result; and when the second determination result indicates that the first touch signal matches a touch signal of a preset instruction, control a corresponding execution unit to execute a processing procedure corresponding to the preset instruction.
In this embodiment of the present invention, the microprocessor may be a Central Processing Unit (CPU), a Microcontroller Unit (MCU), or the like.
In this embodiment of the present invention, the glasses further include a nose pad made of a sensing material capable of detecting a user's touch operations; the nose pad is configured to detect a second touch operation of the user, generate a second touch signal in response to the second touch operation, and send the second touch signal to the microprocessor;
correspondingly, the microprocessor is configured to receive the second touch signal sent by the nose pad, and determine whether the second touch signal matches a touch signal of a preset instruction to obtain a third determination result; and when the third determination result indicates that the second touch signal matches a touch signal of a preset instruction, control the corresponding execution unit to execute the processing procedure corresponding to the preset instruction.
In this embodiment of the present invention, the glasses further include temples and a nose pad, both made of a sensing material capable of detecting a user's touch operations;
the nose pad or the temples are further configured to acquire a second operation input by the user, generate a second signal in response to the second operation, and send the second signal to the microprocessor;
correspondingly, the microprocessor is further configured to control the glasses frame to be in a closed state, in which the glasses frame cannot detect the user's touch operations.
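The activated/closed behaviour described above amounts to a small state machine: a first operation enables touch sensing, a second operation disables it, and touch operations are only matched against preset instructions while activated. The class and method names below are illustrative assumptions.

```python
class TouchFrame:
    """Minimal sketch of the glasses frame's two sensing states."""

    def __init__(self):
        self.activated = False  # the frame starts in the closed state

    def first_operation(self):
        """e.g. a qualifying touch, key press, posture change, or voice command."""
        self.activated = True

    def second_operation(self):
        """e.g. a touch on the nose pad or temples: close the frame."""
        self.activated = False

    def handle_touch(self, matched_instruction):
        """Return the instruction to execute, or None while the frame is closed."""
        return matched_instruction if self.activated else None
```

Gating touch handling on the activated state is what prevents accidental touches (e.g. adjusting the glasses) from triggering preset instructions.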
In an embodiment of the present invention, the first touch signal includes a touch type and a touch position;
correspondingly, the microprocessor is further configured to determine whether a corresponding preset instruction exists in the instruction library according to the touch type and the touch position, so as to obtain a first determination result.
In this embodiment of the present invention, the power supply includes two batteries mounted symmetrically at the ends of the two temples, respectively.
In this embodiment of the present invention, the glasses further include a display screen arranged on at least one of the lenses.
In this embodiment of the present invention, the glasses further include an antenna arranged inside the glasses frame or the temples.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The apparatus embodiments described above are merely illustrative; for example, the division of the units is only a logical functional division, and other divisions are possible in actual implementation: multiple units or components may be combined or integrated into another system, or some features may be omitted or not implemented. In addition, the couplings, direct couplings, or communication connections between the components shown or discussed may be through some interfaces, and the indirect couplings or communication connections between the devices or units may be electrical, mechanical, or in other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units; they may be located in one place or distributed across multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solutions of the embodiments.
In addition, all the functional units in the embodiments of the present invention may be integrated into one processing unit, each unit may serve as a separate unit, or two or more units may be integrated into one unit; the integrated unit may be implemented in the form of hardware, or in the form of hardware plus a software functional unit.
Those of ordinary skill in the art will understand that all or part of the steps for implementing the method embodiments may be completed by program instructions executed on relevant hardware; the program may be stored in a computer-readable storage medium and, when executed, performs the steps of the method embodiments. The aforementioned storage medium includes: a removable storage device, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, or various other media capable of storing program code.
Alternatively, the integrated unit of the present invention may be stored in a computer-readable storage medium if it is implemented in the form of a software functional module and sold or used as an independent product. Based on such understanding, the essence of the technical solutions of the embodiments of the present invention, or the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium, which includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the methods described in the embodiments of the present invention. The aforementioned storage medium includes: a removable storage device, a ROM, a RAM, a magnetic or optical disk, or various other media that can store program code.
The above description covers only specific embodiments of the present invention, but the scope of the present invention is not limited thereto; any person skilled in the art can easily conceive of changes or substitutions within the technical scope disclosed by the present invention, and all such changes or substitutions shall be covered by the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the appended claims.

Claims (10)

1. An information processing method applied to smart glasses, the glasses at least comprising a glasses frame capable of sensing touch operations, the method comprising the following steps:
acquiring a first operation input by a user;
triggering the glasses frame to be in an activated state based on the first operation;
detecting, through the glasses frame, a first touch operation input by the user;
determining whether the first touch operation matches a touch operation of a preset instruction to obtain a first determination result;
and when the first determination result indicates that the first touch operation matches a touch operation of a preset instruction, executing a processing procedure corresponding to the preset instruction.
2. The method of claim 1, further comprising:
acquiring a second operation input by the user;
triggering the glasses frame to be in a closed state based on the second operation, wherein the glasses frame cannot detect the user's touch operations in the closed state.
3. The method according to claim 1 or 2, wherein the determining whether the first touch operation matches a touch operation of a preset instruction to obtain a first determination result comprises:
determining a touch type and a touch position based on the first touch operation;
and determining, according to the correspondence between preset instructions and touch positions in the database corresponding to the touch type, whether a preset instruction corresponding to the touch position exists, so as to obtain the first determination result.
4. The method of claim 3, wherein the acquiring a first operation input by a user comprises:
detecting whether a touch operation satisfying a first predetermined condition occurs, and determining that the first operation input by the user is obtained when the touch operation satisfies the first predetermined condition; or,
detecting whether a key operation satisfying a second predetermined condition occurs, and determining that the first operation input by the user is obtained when the key operation satisfies the second predetermined condition; or,
detecting whether the glasses undergo a posture change satisfying a third predetermined condition, and determining that the first operation input by the user is obtained when the posture change of the glasses satisfies the third predetermined condition; or,
detecting whether a voice operation satisfying a fourth predetermined condition occurs, and determining that the first operation input by the user is obtained when the voice operation satisfies the fourth predetermined condition.
5. Smart glasses, wherein the glasses comprise a glasses frame; the glasses further comprise a microprocessor arranged on the glasses frame or the temples, and a power supply supplying power to at least the microprocessor; wherein:
the glasses frame is made of a sensing material capable of detecting a user's touch operations, and when the glasses frame is in an activated state, the glasses frame is configured to detect a first touch operation of the user, generate a first touch signal in response to the first touch operation, and send the first touch signal to the microprocessor;
the microprocessor is configured to receive the first touch signal sent by the glasses frame, and determine whether the first touch signal matches a touch signal of a preset instruction to obtain a second determination result; and when the second determination result indicates that the first touch signal matches a touch signal of a preset instruction, control a corresponding execution unit to execute a processing procedure corresponding to the preset instruction.
6. The glasses according to claim 5, further comprising a nose pad made of a sensing material capable of detecting a user's touch operations, wherein the nose pad is configured to detect a second touch operation of the user, generate a second touch signal in response to the second touch operation, and send the second touch signal to the microprocessor;
correspondingly, the microprocessor is configured to receive the second touch signal sent by the nose pad, and determine whether the second touch signal matches a touch signal of a preset instruction to obtain a third determination result; and when the third determination result indicates that the second touch signal matches a touch signal of a preset instruction, control the corresponding execution unit to execute the processing procedure corresponding to the preset instruction.
7. The glasses according to claim 5, further comprising temples and a nose pad, wherein the temples and the nose pad are made of a sensing material capable of detecting a user's touch operations;
the nose pad or the temples are further configured to acquire a second operation input by the user, generate a second signal in response to the second operation, and send the second signal to the microprocessor;
correspondingly, the microprocessor is further configured to control the glasses frame to be in a closed state, in which the glasses frame cannot detect the user's touch operations.
8. The glasses according to any one of claims 5 to 7, wherein the first touch signal comprises a touch type and a touch position;
correspondingly, the microprocessor is further configured to determine whether a corresponding preset instruction exists in the instruction library according to the touch type and the touch position, so as to obtain a first determination result.
9. The glasses according to any one of claims 5 to 7, wherein the power supply comprises two batteries mounted symmetrically at the ends of the two temples, respectively.
10. An information processing apparatus applied to smart glasses, the glasses at least comprising a glasses frame capable of sensing touch operations; the apparatus comprises a first acquiring unit, a first triggering unit, a first detecting unit, a determining unit, and an execution unit, wherein:
the first acquiring unit is configured to acquire a first operation input by a user;
the first triggering unit is configured to trigger the glasses frame to be in an activated state based on the first operation;
the first detecting unit is configured to detect, through the glasses frame, a first touch operation input by the user;
the determining unit is configured to determine whether the first touch operation matches a touch operation of a preset instruction to obtain a first determination result;
the execution unit is configured to execute a processing procedure corresponding to a preset instruction when the first determination result indicates that the first touch operation matches a touch operation of the preset instruction.
CN201410103435.9A 2014-03-19 2014-03-19 Information processing method and apparatus and smart glasses Pending CN104932782A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410103435.9A CN104932782A (en) 2014-03-19 2014-03-19 Information processing method and apparatus and smart glasses

Publications (1)

Publication Number Publication Date
CN104932782A true CN104932782A (en) 2015-09-23

Family

ID=54119967

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410103435.9A Pending CN104932782A (en) 2014-03-19 2014-03-19 Information processing method and apparatus and smart glasses

Country Status (1)

Country Link
CN (1) CN104932782A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090009480A1 (en) * 2007-07-06 2009-01-08 Sony Ericsson Mobile Communications Ab Keypad with tactile touch glass
JP2011139456A (en) * 2010-01-04 2011-07-14 Samsung Electronics Co Ltd 3d glass driving method, and 3d glass and 3d display device using the same
CN203178577U (en) * 2013-03-30 2013-09-04 潍坊歌尔电子有限公司 Touch type three-dimensional (3D) glasses
CN103513910A (en) * 2012-06-29 2014-01-15 联想(北京)有限公司 Information processing method and device and electronic equipment
CN103513901A (en) * 2012-06-26 2014-01-15 联想(北京)有限公司 Information processing method and electronic device
CN103530038A (en) * 2013-10-23 2014-01-22 叶晨光 Program control method and device for head-mounted intelligent terminal

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105158931A (en) * 2015-09-28 2015-12-16 大连楼兰科技股份有限公司 Oil stain preventing method for parallel usage of touchpad and keys of intelligent glasses in the vehicle maintenance process
CN105204643A (en) * 2015-09-28 2015-12-30 大连楼兰科技股份有限公司 Gesture recognition method for intelligent glasses used in vehicle maintaining process
CN105223706A (en) * 2015-09-28 2016-01-06 大连楼兰科技股份有限公司 The method of vehicle degree of injury is judged for the intelligent glasses in vehicle repair and maintenance process
CN105223706B (en) * 2015-09-28 2018-03-30 大连楼兰科技股份有限公司 The method of vehicle damage degree is judged for the intelligent glasses during vehicle maintenance
CN108761795A (en) * 2018-07-25 2018-11-06 Oppo广东移动通信有限公司 A kind of Wearable
CN110515206A (en) * 2019-08-19 2019-11-29 青岛海信电器股份有限公司 A kind of control method, control device and intelligent glasses
CN114594617A (en) * 2020-11-20 2022-06-07 成食科技股份有限公司 Intelligent glasses

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20150923