CN104298350B - Information processing method and wearable electronic equipment - Google Patents
- Publication number
- CN104298350B (application number CN201410510378.6A)
- Authority
- CN
- China
- Prior art keywords
- image
- user
- electronic equipment
- electronic device
- lens
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F1/00—Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
- G06F1/16—Constructional details or arrangements
- G06F1/1613—Constructional details or arrangements for portable computers
- G06F1/163—Wearable computers, e.g. on a belt
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/017—Gesture based interaction, e.g. based on a set of recognized hand gestures
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Computer Hardware Design (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
The invention discloses an information processing method and a wearable electronic device. The electronic device comprises a support and, arranged on the support, an image acquisition unit, a lens, and a projection unit; through the support, the device can maintain a relative positional relationship with the head of its user. The information processing method comprises: when the user views a first image of a target area through the lens, acquiring the motion track of an operation body with the image acquisition unit; determining a second image corresponding to the motion track according to the motion track of the operation body; and projecting the second image onto the first image to obtain a third image.
Description
Technical Field
The present invention relates to information processing technology, and in particular to an information processing method and a wearable electronic device.
Background
Wearable electronic devices such as smart glasses allow a user to view a real-world scene through their lenses. However, as user demands continue to grow, users wish to add labels to a scene, such as personal graffiti or a signature, and current wearable electronic devices cannot meet this need.
Disclosure of Invention
In order to solve the above technical problem, embodiments of the present invention provide an information processing method and a wearable electronic device.
The information processing method provided by the embodiment of the invention is applied to a wearable electronic device. The electronic device comprises a support and, arranged on the support, an image acquisition unit, a lens, and a projection unit. The device can maintain a first relative positional relationship with the head of its user through the support, and when it does so, the user can view a first image of a target area through the lens. The information processing method comprises the following steps:
when the user of the electronic device views the first image of the target area through the lens, acquiring the motion track of an operation body with the image acquisition unit;
determining a second image corresponding to the motion track according to the motion track of the operation body;
projecting the second image onto the first image to obtain a third image.
The wearable electronic device provided by the embodiment of the invention comprises a support and a lens arranged on the support. The device can maintain a first relative positional relationship with the head of its user through the support, and when it does so, the user can view a first image of a target area through the lens. The electronic device further comprises:
an image acquisition unit, configured to acquire the motion track of an operation body when the user of the electronic device views the first image of the target area through the lens;
a determining unit, configured to determine a second image corresponding to the motion track according to the motion track of the operation body;
and a projection unit, configured to project the second image onto the first image to obtain a third image.
According to the technical scheme, the wearable electronic device is specifically a pair of smart glasses provided with a support, through which the device can maintain a first relative positional relationship with the head of its user. When the user wears the device, the device maintains the first relative positional relationship with the user's head through the support, and the user can view a first image of a target area through the lens. While the user views the first image, an image acquisition unit acquires the motion track of an operation body, such as the user's finger. A second image corresponding to the motion track, for example the user's personal graffiti or signature, is then determined from that track, and the second image is projected onto the first image to obtain a third image. In this way, when viewing a scene with the wearable electronic device of the embodiment of the invention, the user can add a label to the scene, for example graffiti or a signature, and then photograph the labeled scene, meeting the user's needs.
Drawings
Fig. 1 is a schematic flowchart of an information processing method according to a first embodiment of the present invention;
Fig. 2 is a schematic flowchart of an information processing method according to a second embodiment of the present invention;
Fig. 3 is a schematic flowchart of an information processing method according to a third embodiment of the present invention;
Fig. 4 is a schematic flowchart of an information processing method according to a fourth embodiment of the present invention;
Fig. 5 is a schematic flowchart of an information processing method according to a fifth embodiment of the present invention;
Fig. 6 is a schematic structural diagram of an electronic device according to a first embodiment of the present invention;
Fig. 7 is a schematic structural diagram of an electronic device according to a second embodiment of the present invention;
Fig. 8 is a schematic structural diagram of an electronic device according to a third embodiment of the present invention;
Fig. 9 is a schematic structural diagram of an electronic device according to a fourth embodiment of the present invention;
Fig. 10 is a schematic structural diagram of an electronic device according to a fifth embodiment of the present invention.
Detailed Description
So that the manner in which the features and aspects of the embodiments of the present invention can be understood in detail, a more particular description of the embodiments of the invention, briefly summarized above, may be had by reference to the embodiments, some of which are illustrated in the appended drawings.
Fig. 1 is a schematic flowchart of an information processing method according to a first embodiment of the present invention. The method in this example is applied to a wearable electronic device that comprises a support and, arranged on the support, an image acquisition unit, a lens, and a projection unit. The electronic device can maintain a first relative positional relationship with the head of its user through the support, and when it does so, the user can view a first image of a target area through the lens. As shown in Fig. 1, the information processing method includes the following steps:
Step 101: when the user of the electronic device views the first image of the target area through the lens, acquire the motion track of the operation body with the image acquisition unit.
The information processing method is applied to a wearable electronic device, specifically a pair of smart glasses. The wearable electronic device is provided with a support, through which the device can maintain a first relative positional relationship with the head of its user. When the user wears the device on the head, the device maintains the first relative positional relationship with the user's head through the support; specifically, the smart glasses rest on the user's nose via the support pieces of the support and hang on the user's ears via the hanging pieces, so that the smart glasses are fixed on the user's head.
In the embodiment of the invention, an image acquisition unit, specifically a camera, is arranged on the support of the electronic device. A lens is also arranged on the support; when the user wears the device, the lens sits in front of the user's eyes, and through it the user can see a scene in front of him or her, namely the first image of the target area. A projection unit, specifically a micro projector, is likewise arranged on the support; it can project the light of an image into the user's eyes so that the user sees that image.
In the embodiment of the invention, when the user views the first image of the target area through the lens, the image acquisition unit is started to acquire the motion track of an operation body, which may be the user's finger, a stylus, or another object. Specifically, while viewing the first image of the target area at a certain angle, the user slides a finger through the space in front of the lens to write graffiti, a signature, or the like, and the image acquisition unit captures the motion track of the operation body in real time.
Step 102: determine a second image corresponding to the motion track according to the motion track of the operation body.
In the embodiment of the present invention, the motion track of the operation body forms the second image, which may be a planar image or a stereoscopic image.
Specifically, a plurality of points on the motion track are taken as sampling points, and their spatial coordinates are obtained. These are three-dimensional coordinates; for example, the spatial position of a point on the motion track is given by (x, y, z). For this purpose, the image acquisition unit in the embodiment of the invention is a stereo camera, which can capture the motion track of the operation body and obtain its three-dimensional spatial coordinates. If a preset proportion of the sampling points, for example 85%, lie on the same plane, a second image is generated based on the sampling points on that plane.
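The patent does not disclose how the coplanarity of the sampling points is tested. Purely as an illustrative sketch (all function names are hypothetical, and the tolerance and the least-squares form z = a·x + b·y + c, which assumes a non-vertical plane, are assumptions), the "85% on the same plane" check could be modeled as a plane fit followed by an inlier count:

```python
def _solve3(A, b):
    """Solve a 3x3 linear system by Gaussian elimination with partial pivoting."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(3):
        pivot = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, 3):
            f = M[r][col] / M[col][col]
            for c in range(col, 4):
                M[r][c] -= f * M[col][c]
    x = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):
        x[r] = (M[r][3] - sum(M[r][c] * x[c] for c in range(r + 1, 3))) / M[r][r]
    return x

def fit_plane(points):
    """Least-squares plane z = a*x + b*y + c through 3-D sampling points."""
    n = len(points)
    sx = sum(p[0] for p in points); sy = sum(p[1] for p in points)
    sz = sum(p[2] for p in points)
    sxx = sum(p[0] * p[0] for p in points); syy = sum(p[1] * p[1] for p in points)
    sxy = sum(p[0] * p[1] for p in points)
    sxz = sum(p[0] * p[2] for p in points); syz = sum(p[1] * p[2] for p in points)
    # Normal equations of the least-squares problem.
    A = [[sxx, sxy, sx], [sxy, syy, sy], [sx, sy, n]]
    return _solve3(A, [sxz, syz, sz])  # returns [a, b, c]

def mostly_coplanar(points, ratio=0.85, tol=0.05):
    """True if at least `ratio` of the points lie within `tol` of the fitted plane."""
    a, b, c = fit_plane(points)
    near = sum(1 for x, y, z in points if abs(a * x + b * y + c - z) <= tol)
    return near / len(points) >= ratio
```

If `mostly_coplanar` returns True, the second image would be generated from the inlier points, as the paragraph above describes.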
When the second image is a stereoscopic image, a three-dimensional second image may be generated directly from the three-dimensional spatial coordinates of the plurality of sampling points.
Step 103: project the second image onto the first image to obtain a third image.
In the embodiment of the invention, the electronic device emits the light forming the second image from the projection unit toward the user's eyes, so that the user sees the second image while viewing the first image through the lens. This achieves the display effect of projecting the second image onto the first image.
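The display effect described above can be pictured with a toy model (this is not the patent's optical projection, only an assumed pixel-grid analogy): the third image results from overlaying every non-transparent pixel of the second image onto the first.

```python
EMPTY = None  # assumed marker for transparent pixels in the second image

def project_overlay(first, second):
    """Return a third image: the second image composited over the first.

    Both images are modeled as equally sized 2-D grids of pixel values;
    EMPTY pixels of the second image let the first image show through.
    """
    assert len(first) == len(second) and len(first[0]) == len(second[0])
    return [
        [s if s is not EMPTY else f for f, s in zip(frow, srow)]
        for frow, srow in zip(first, second)
    ]
```

For example, a 2x2 first image with a single graffiti pixel in the second image yields a third image identical to the first except at that pixel.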
In this way, when the user views a scene with the wearable electronic device of this embodiment, a label such as personal graffiti or a signature can be added to the scene, and the labeled scene can then be photographed, meeting the user's needs.
Fig. 2 is a schematic flowchart of an information processing method according to a second embodiment of the present invention. The method in this example is applied to a wearable electronic device that comprises a support and, arranged on the support, an image acquisition unit, a lens, and a projection unit. The electronic device can maintain a first relative positional relationship with the head of its user through the support, and when it does so, the user can view a first image of a target area through the lens. As shown in Fig. 2, the information processing method includes the following steps:
Step 201: when the user of the electronic device views the first image of the target area through the lens, acquire the motion track of the operation body with the image acquisition unit.
The information processing method is applied to a wearable electronic device, specifically a pair of smart glasses. The wearable electronic device is provided with a support, through which the device can maintain a first relative positional relationship with the head of its user. When the user wears the device on the head, the device maintains the first relative positional relationship with the user's head through the support; specifically, the smart glasses rest on the user's nose via the support pieces of the support and hang on the user's ears via the hanging pieces, so that the smart glasses are fixed on the user's head.
In the embodiment of the invention, an image acquisition unit, specifically a camera, is arranged on the support of the electronic device. A lens is also arranged on the support; when the user wears the device, the lens sits in front of the user's eyes, and through it the user can see a scene in front of him or her, namely the first image of the target area. A projection unit, specifically a micro projector, is likewise arranged on the support; it can project the light of an image into the user's eyes so that the user sees that image.
In the embodiment of the invention, when the user views the first image of the target area through the lens, the image acquisition unit is started to acquire the motion track of an operation body, which may be the user's finger, a stylus, or another object. Specifically, while viewing the first image of the target area at a certain angle, the user slides a finger through the space in front of the lens to write graffiti, a signature, or the like, and the image acquisition unit captures the motion track of the operation body in real time.
Step 202: acquire the spatial coordinates of N sampling points on the motion track, where N ≥ 2.
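The patent leaves open how the N sampling points are chosen along the captured track. One plausible approach, sketched below purely for illustration (the function name `resample` and the equal-arc-length strategy are assumptions, not disclosed by the patent), resamples the 3-D polyline at evenly spaced arc-length positions:

```python
import math

def resample(track, n):
    """Pick n points spaced evenly by arc length along a 3-D polyline.

    `track` is a list of (x, y, z) vertices captured by the stereo camera;
    the first and last output points coincide with the track's endpoints.
    """
    assert n >= 2 and len(track) >= 2
    # Cumulative arc length at each vertex of the polyline.
    cum = [0.0]
    for p, q in zip(track, track[1:]):
        cum.append(cum[-1] + math.dist(p, q))
    total = cum[-1]
    out = []
    seg = 0
    for i in range(n):
        target = total * i / (n - 1)          # desired arc-length position
        while seg < len(track) - 2 and cum[seg + 1] < target:
            seg += 1                          # advance to the containing segment
        span = cum[seg + 1] - cum[seg]
        t = 0.0 if span == 0 else (target - cum[seg]) / span
        p, q = track[seg], track[seg + 1]
        out.append(tuple(pi + t * (qi - pi) for pi, qi in zip(p, q)))
    return out
```

Resampling like this would give the method a fixed number N of sampling points regardless of how fast the finger moved.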
In the embodiment of the invention, the motion track of the operation body forms the second image, which here is a planar image.
Step 203: when the spatial coordinates of M of the N sampling points lie on the same plane, determine a first plane based on the spatial coordinates of the M sampling points, where M is a preset value and N ≥ M ≥ 2.
Specifically, a plurality of points on the motion track are taken as sampling points, and their spatial coordinates are obtained. These are three-dimensional coordinates; for example, the spatial position of a point on the motion track is given by (x, y, z). For this purpose, the image acquisition unit in the embodiment of the invention is a stereo camera, which can capture the motion track of the operation body and obtain its three-dimensional spatial coordinates. If a preset proportion of the sampling points, for example 85%, lie on the same plane, a second image is generated based on the sampling points on that plane.
Step 204: generate, on the first plane, a second image containing the M sampling points.
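The patent does not detail how the second image is built from the M coplanar points. A minimal sketch, assuming the points have already been mapped to 2-D coordinates on the first plane (the helper name `rasterize` and the fixed output size are illustrative assumptions), turns them into a binary pixel grid:

```python
def rasterize(points2d, width, height):
    """Map 2-D trajectory points into a width x height binary image.

    Points are scaled so the trajectory's bounding box fills the grid;
    cells touched by a point are set to 1, all others stay 0.
    """
    xs = [p[0] for p in points2d]
    ys = [p[1] for p in points2d]
    xmin, xmax = min(xs), max(xs)
    ymin, ymax = min(ys), max(ys)
    img = [[0] * width for _ in range(height)]
    for x, y in points2d:
        # `or 1` guards the degenerate case of a single point / zero extent.
        col = int((x - xmin) / ((xmax - xmin) or 1) * (width - 1))
        row = int((y - ymin) / ((ymax - ymin) or 1) * (height - 1))
        img[row][col] = 1
    return img
```

The resulting grid plays the role of the planar second image that step 205's projection then overlays on the first image.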
Step 205: project the second image onto the first image to obtain a third image.
In the embodiment of the invention, the electronic device emits the light forming the second image from the projection unit toward the user's eyes, so that the user sees the second image while viewing the first image through the lens. This achieves the display effect of projecting the second image onto the first image.
In this way, when the user views a scene with the wearable electronic device of this embodiment, a label such as personal graffiti or a signature can be added to the scene, and the labeled scene can then be photographed, meeting the user's needs.
Fig. 3 is a schematic flowchart of an information processing method according to a third embodiment of the present invention. The method in this example is applied to a wearable electronic device that comprises a support and, arranged on the support, an image acquisition unit, a lens, and a projection unit. The electronic device can maintain a first relative positional relationship with the head of its user through the support, and when it does so, the user can view a first image of a target area through the lens. As shown in Fig. 3, the information processing method includes the following steps:
Step 301: when the user of the electronic device views the first image of the target area through the lens, capture the motion track of the operation body in the target area with the image acquisition unit.
The information processing method is applied to a wearable electronic device, specifically a pair of smart glasses. The wearable electronic device is provided with a support, through which the device can maintain a first relative positional relationship with the head of its user. When the user wears the device on the head, the device maintains the first relative positional relationship with the user's head through the support; specifically, the smart glasses rest on the user's nose via the support pieces of the support and hang on the user's ears via the hanging pieces, so that the smart glasses are fixed on the user's head.
In the embodiment of the invention, an image acquisition unit, specifically a camera, is arranged on the support of the electronic device. A lens is also arranged on the support; when the user wears the device, the lens sits in front of the user's eyes, and through it the user can see a scene in front of him or her, namely the first image of the target area. A projection unit, specifically a micro projector, is likewise arranged on the support; it can project the light of an image into the user's eyes so that the user sees that image.
In the embodiment of the invention, when the user views the first image of the target area through the lens, the image acquisition unit is started to acquire the motion track of an operation body, which may be the user's finger, a stylus, or another object. Specifically, while viewing the first image of the target area at a certain angle, the user slides a finger through the space in front of the lens to write graffiti, a signature, or the like, and the image acquisition unit captures the motion track of the operation body in real time.
Step 302: acquire the three-dimensional spatial coordinates of N sampling points on the motion track, where N ≥ 2.
In the embodiment of the invention, the motion track of the operation body forms the second image, which here is a stereoscopic image.
Step 303: generate a three-dimensional second image corresponding to the motion track based on the three-dimensional spatial coordinates of the N sampling points.
Specifically, a plurality of points on the motion track are taken as sampling points, and their spatial coordinates are obtained. These are three-dimensional coordinates; for example, the spatial position of a point on the motion track is given by (x, y, z).
Step 304: project the second image onto the first image to obtain a third image.
In the embodiment of the invention, the electronic device emits the light forming the second image from the projection unit toward the user's eyes, so that the user sees the second image while viewing the first image through the lens. This achieves the display effect of projecting the second image onto the first image.
In this way, when the user views a scene with the wearable electronic device of this embodiment, a label such as personal graffiti or a signature can be added to the scene, and the labeled scene can then be photographed, meeting the user's needs.
Fig. 4 is a schematic flowchart of an information processing method according to a fourth embodiment of the present invention. The method in this example is applied to a wearable electronic device that comprises a support and, arranged on the support, an image acquisition unit, a lens, and a projection unit. The electronic device can maintain a first relative positional relationship with the head of its user through the support, and when it does so, the user can view a first image of a target area through the lens. As shown in Fig. 4, the information processing method includes the following steps:
Step 401: when the user of the electronic device views the first image of the target area through the lens, acquire the motion track of the operation body with the image acquisition unit.
The information processing method is applied to a wearable electronic device, specifically a pair of smart glasses. The wearable electronic device is provided with a support, through which the device can maintain a first relative positional relationship with the head of its user. When the user wears the device on the head, the device maintains the first relative positional relationship with the user's head through the support; specifically, the smart glasses rest on the user's nose via the support pieces of the support and hang on the user's ears via the hanging pieces, so that the smart glasses are fixed on the user's head.
In the embodiment of the invention, an image acquisition unit, specifically a camera, is arranged on the support of the electronic device. A lens is also arranged on the support; when the user wears the device, the lens sits in front of the user's eyes, and through it the user can see a scene in front of him or her, namely the first image of the target area. A projection unit, specifically a micro projector, is likewise arranged on the support; it can project the light of an image into the user's eyes so that the user sees that image.
In the embodiment of the invention, when the user views the first image of the target area through the lens, the image acquisition unit is started to acquire the motion track of an operation body, which may be the user's finger, a stylus, or another object. Specifically, while viewing the first image of the target area at a certain angle, the user slides a finger through the space in front of the lens to write graffiti, a signature, or the like, and the image acquisition unit captures the motion track of the operation body in real time.
Step 402: determine whether the motion track satisfies a first condition, and obtain a determination result.
In the embodiment of the present invention, the first condition defines the shape of the motion track. For example, when the user draws a frame-like gesture, such as a box traced with two fingers, the motion track satisfies the first condition.
Step 403: when the determination result shows that the motion track satisfies the first condition, determine a first sub-area in the target area viewed through the lens according to the motion track.
Following step 402, the frame determined by the motion track delimits an area range, namely a first sub-area of the target area.
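The patent leaves the frame-detection details open. The sketch below (hypothetical names, 2-D screen coordinates and a closure tolerance assumed) treats the first condition as "the trajectory closes on itself" and derives the first sub-area as the trajectory's bounding rectangle:

```python
import math

def frame_subarea(track, close_tol=0.1):
    """If the 2-D track closes on itself (first condition), return the
    bounding rectangle (xmin, ymin, xmax, ymax) it delimits; else None."""
    if math.dist(track[0], track[-1]) > close_tol:
        return None  # trajectory does not form a closed frame
    xs = [p[0] for p in track]
    ys = [p[1] for p in track]
    return (min(xs), min(ys), max(xs), max(ys))
```

A closed, roughly square gesture thus yields the rectangle that step 404 crops as the second image, while an open stroke yields no sub-area.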
Step 404: take the image in the first sub-area as the second image corresponding to the motion track.
In this way, the second image corresponding to the motion track can be determined according to the motion track of the operation body.
Step 405: acquire the second image with the image acquisition unit and store it in a storage unit.
In the embodiment of the invention, the first image may be viewed through the lens at a first angle, and the second image may be an image of an area viewed through the lens at a second angle, where the first and second angles refer to the user's angle relative to the scene and may be the same or different.
Step 406: acquire the second image from the storage unit and superimpose it on the first image to obtain a third image.
In the embodiment of the invention, the electronic device emits the light forming the second image from the projection unit toward the user's eyes, so that the user sees the second image while viewing the first image through the lens. This achieves the display effect of projecting the second image onto the first image.
In the embodiment of the invention, the user can superimpose a second image of a scene, captured in advance, onto the scene currently seen through the lens, and the resulting third image can then be photographed with the image acquisition unit. Thus, when viewing a scene with the wearable electronic device of the embodiment of the invention, the user can add a label, such as personal graffiti or a signature, to the scene and then photograph the labeled scene, meeting the user's needs.
Fig. 5 is a schematic flowchart of an information processing method according to a fifth embodiment of the present invention. The method in this example is applied to a wearable electronic device that comprises a support and, arranged on the support, an image acquisition unit, a lens, and a projection unit. The electronic device can maintain a first relative positional relationship with the head of its user through the support, and when it does so, the user can view a first image of a target area through the lens. As shown in Fig. 5, the information processing method includes the following steps:
Step 501: when the user of the electronic device views the first image of the target area through the lens, acquire the motion track of the operation body with the image acquisition unit.
The information processing method is applied to a wearable electronic device, specifically a pair of smart glasses. The wearable electronic device is provided with a support, through which the device can maintain a first relative positional relationship with the head of its user. When the user wears the device on the head, the device maintains the first relative positional relationship with the user's head through the support; specifically, the smart glasses rest on the user's nose via the support pieces of the support and hang on the user's ears via the hanging pieces, so that the smart glasses are fixed on the user's head.
In the embodiment of the invention, an image acquisition unit, specifically a camera, is arranged on the support of the electronic device. A lens is also arranged on the support; when the user wears the device, the lens sits in front of the user's eyes, and through it the user can see a scene in front of him or her, namely the first image of the target area. A projection unit, specifically a micro projector, is likewise arranged on the support; it can project the light of an image into the user's eyes so that the user sees that image.
In the embodiment of the invention, when the user views the first image of the target area through the lens, the image acquisition unit is started to acquire the motion trajectory of an operating body. The operating body may be a finger of the user, a smart pen, or another object. Specifically, while viewing the first image of the target area at a certain angle, the user slides a finger to write a series of graffiti, signatures, and the like in the space of the target area in front of the lens, and the image acquisition unit captures the motion trajectory of the operating body in real time.
Step 502: judge whether the motion trajectory satisfies a first condition to obtain a judgment result.
In the embodiment of the present invention, the first condition is used to constrain the shape of the motion trajectory; for example, when the user draws a box-like gesture, such as tracing a frame with two fingers, the motion trajectory satisfies the first condition.
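Purely by way of illustration, the box-gesture check of step 502 can be sketched as a small function over the 2-D sample points captured by the image acquisition unit. The function name, the thresholds, and the closure/border heuristics below are assumptions of this sketch, not values specified by the embodiment.

```python
def is_box_gesture(points, closure_tol=0.15, fill_tol=0.35):
    """Return True if sampled 2-D trajectory points roughly trace a box.

    `points` is a list of (x, y) samples; all thresholds are illustrative.
    """
    if len(points) < 4:
        return False
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    w, h = max(xs) - min(xs), max(ys) - min(ys)
    if w == 0 or h == 0:
        return False
    # Heuristic 1: the trajectory ends near where it started (closed shape).
    diag = (w * w + h * h) ** 0.5
    sx, sy = points[0]
    ex, ey = points[-1]
    closed = ((ex - sx) ** 2 + (ey - sy) ** 2) ** 0.5 <= closure_tol * diag

    # Heuristic 2: samples hug the bounding-box border, not its interior.
    def border_dist(x, y):
        return min(x - min(xs), max(xs) - x, y - min(ys), max(ys) - y)

    near_border = sum(1 for (x, y) in points
                      if border_dist(x, y) <= fill_tol * min(w, h) / 2)
    return closed and near_border / len(points) > 0.8
```

A closed square-shaped stroke passes both heuristics, while an open diagonal swipe fails the closure test; a real device would likely add temporal smoothing on top of such a shape test.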
Step 503: when the judgment result shows that the motion trajectory satisfies the first condition, determine a first sub-region in the target area viewed through the lens according to the motion trajectory.
Following step 502, the frame determined according to the motion trajectory delimits an area range, that is, the first sub-region in the target area.
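The mapping from a qualifying trajectory to the first sub-region can be sketched, for illustration, as taking the trajectory's bounding rectangle and cropping it out of the captured frame. The helper names and the (x, y, width, height) convention are assumptions of this sketch, not part of the patent.

```python
def first_subregion(points):
    """Bounding rectangle of the box trajectory, used as the first sub-region.

    Returns (x_min, y_min, width, height) in image coordinates.
    """
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (min(xs), min(ys), max(xs) - min(xs), max(ys) - min(ys))


def crop_second_image(frame, region):
    """Cut the second image out of a frame stored as a list of pixel rows."""
    x, y, w, h = region
    return [row[x:x + w] for row in frame[y:y + h]]
```

With a frame modeled as rows of pixel values, `crop_second_image(frame, first_subregion(points))` yields the candidate second image.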
Step 504: take the image in the first sub-region as a second image corresponding to the motion trajectory.
In this way, the second image corresponding to the motion trajectory can be determined from the motion trajectory of the operating body.
Step 505: when it is detected that the first sub-region corresponding to the motion trajectory formed by the operating body is enlarged or reduced, take the image in the enlarged or reduced first sub-region as the second image corresponding to the motion trajectory.
In the embodiment of the invention, when the user enlarges or reduces the frame drawn by the gesture, the viewing range of the second image is enlarged or reduced accordingly.
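A minimal sketch of the enlargement/reduction in step 505, assuming the first sub-region is represented as an (x, y, width, height) rectangle scaled about its centre; the representation and the scale factor are assumptions of this sketch:

```python
def rescale_region(region, factor, center=None):
    """Enlarge (factor > 1) or shrink (factor < 1) the first sub-region.

    The rectangle is scaled about its own centre unless `center` is given.
    """
    x, y, w, h = region
    cx, cy = center if center else (x + w / 2, y + h / 2)
    nw, nh = w * factor, h * factor
    return (cx - nw / 2, cy - nh / 2, nw, nh)
```

Doubling a 4x2 region about its centre, for example, doubles both sides while keeping the centre fixed; the enlarged region is then re-cropped to refresh the second image.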
Step 506: acquire the second image by using the image acquisition unit, and store the second image in a storage unit.
In embodiments of the invention, the first image may be viewed through the lens at a first angle, and the second image may be an image of a region viewed through the lens at a second angle, where the first and second angles refer to the angle of the user relative to the scene; the two angles may be the same or different.
Step 507: superimpose the second image on the first image to obtain a third image.
In the embodiment of the invention, the electronic device emits the light for forming the second image from the projection unit and projects it to the user's eyes, so that the user views the second image while viewing the first image through the lens; this achieves the display effect of projecting the second image onto the first image.
In the embodiment of the invention, the user can superimpose a second image of a scene captured in advance onto the scene currently seen through the lens, and the resulting third image can then be captured with the image acquisition unit. Thus, when viewing a scene with the wearable electronic device of this embodiment, the user can add labels, such as personal graffiti or a signature, to the scene and then photograph the labeled scene, meeting the user's needs.
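The superimposition of step 507 can be modelled, for illustration only, as copying the first image and blending the second image over it at a chosen position. In the actual device the combination happens optically, with the projection unit adding light to the scene seen through the lens; the grayscale-row image model and the alpha parameter below are assumptions of this sketch.

```python
def superimpose(first, second, top_left, alpha=1.0):
    """Overlay `second` onto a copy of `first` at `top_left` = (row, col).

    Images are lists of rows of grayscale values; `alpha` blends the
    overlay with the scene underneath. Out-of-bounds overlay pixels are
    clipped rather than raising an error.
    """
    r0, c0 = top_left
    third = [row[:] for row in first]  # keep the first image intact
    for r, row in enumerate(second):
        for c, v in enumerate(row):
            rr, cc = r0 + r, c0 + c
            if 0 <= rr < len(third) and 0 <= cc < len(third[0]):
                third[rr][cc] = alpha * v + (1 - alpha) * third[rr][cc]
    return third
```

With `alpha=1.0` the overlay fully replaces the underlying pixels, mimicking an opaque projected label; smaller values mimic a translucent overlay.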
Fig. 6 is a schematic structural diagram of a wearable electronic device according to an embodiment of the present invention. The electronic device includes a support and a lens disposed on the support; the electronic device can maintain a relative positional relationship with the head of a user of the electronic device through the support, and when the electronic device maintains a first relative positional relationship with the head of the user through the support, the user can view a first image of a target area through the lens. The electronic device further includes:
an image acquisition unit 61, configured to acquire the motion trajectory of an operating body when the user of the electronic device views the first image of the target area through the lens;
a determining unit 62, configured to determine, according to the motion trajectory of the operating body, a second image corresponding to the motion trajectory;
a projection unit 63, configured to project the second image onto the first image to obtain a third image.
Preferably, the projection unit 63 is configured to emit the light for forming the second image and project it to the eyes of the user of the electronic device, so that the user views the second image while viewing the first image through the lens.
Those skilled in the art will understand that the implementation functions of each unit in the portable electronic device of the embodiment of the present invention can be understood by referring to the related description of the information processing method. The functions of the units in the portable electronic device according to the embodiment of the present invention may be implemented by a program running on a processor, or may be implemented by specific logic circuits.
Fig. 7 is a schematic structural composition diagram of a wearable electronic device according to a second embodiment of the present invention. The electronic device includes a support and a lens disposed on the support; the electronic device can maintain a relative positional relationship with the head of a user of the electronic device through the support, and when the electronic device maintains a first relative positional relationship with the head of the user through the support, the user can view a first image of a target area through the lens. The electronic device further includes:
an image acquisition unit 71, configured to acquire the motion trajectory of an operating body when the user of the electronic device views the first image of the target area through the lens;
a determining unit 72, configured to determine, according to the motion trajectory of the operating body, a second image corresponding to the motion trajectory;
a projection unit 73, configured to project the second image onto the first image to obtain a third image.
Preferably, the projection unit 73 is configured to emit the light for forming the second image and project it to the eyes of the user of the electronic device, so that the user views the second image while viewing the first image through the lens.
Preferably, the image acquisition unit 71 is further configured to capture the motion trajectory of the operating body in the target area;
the determining unit 72 includes:
an obtaining subunit 721, configured to obtain the spatial coordinates of N sampling points on the motion trajectory, where N is greater than or equal to 2;
a determining subunit 722, configured to determine a first plane based on the spatial coordinates of M sampling points when the spatial coordinates of M sampling points among the N sampling points are located on the same plane, where M is a preset value, N is greater than or equal to M, and M is greater than or equal to 2;
a generating subunit 723, configured to generate a second image containing the M sampling points on the first plane.
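For illustration, the coplanarity test performed by a subunit like 722 can be sketched with elementary vector algebra: derive a plane from the first three samples and check that the remaining samples lie on it within a tolerance. The function names and the tolerance are choices of this sketch, and it assumes the first three samples are not collinear (a plane through points needs three non-collinear samples).

```python
def plane_through(p0, p1, p2):
    """Plane (a, b, c, d) with a*x + b*y + c*z + d = 0 through three points."""
    ux, uy, uz = (p1[i] - p0[i] for i in range(3))
    vx, vy, vz = (p2[i] - p0[i] for i in range(3))
    # Normal vector = cross product of the two edge vectors.
    a, b, c = uy * vz - uz * vy, uz * vx - ux * vz, ux * vy - uy * vx
    d = -(a * p0[0] + b * p0[1] + c * p0[2])
    return a, b, c, d


def coplanar(points, tol=1e-9):
    """True if the spatial samples all lie on one plane (the first plane)."""
    a, b, c, d = plane_through(points[0], points[1], points[2])
    norm = (a * a + b * b + c * c) ** 0.5 or 1.0
    return all(abs(a * x + b * y + c * z + d) / norm <= tol
               for (x, y, z) in points)
```

Once the samples are confirmed coplanar, the second image can be generated by drawing the trajectory in that plane's 2-D coordinate system.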
Those skilled in the art will understand that the implementation functions of each unit in the portable electronic device of the embodiment of the present invention can be understood by referring to the related description of the information processing method. The functions of the units in the portable electronic device according to the embodiment of the present invention may be implemented by a program running on a processor, or may be implemented by specific logic circuits.
Fig. 8 is a schematic structural composition diagram of a wearable electronic device according to a third embodiment of the present invention. The electronic device includes a support and a lens disposed on the support; the electronic device can maintain a relative positional relationship with the head of a user of the electronic device through the support, and when the electronic device maintains a first relative positional relationship with the head of the user through the support, the user can view a first image of a target area through the lens. The electronic device further includes:
an image acquisition unit 81, configured to acquire the motion trajectory of an operating body when the user of the electronic device views the first image of the target area through the lens;
a determining unit 82, configured to determine, according to the motion trajectory of the operating body, a second image corresponding to the motion trajectory;
a projection unit 83, configured to project the second image onto the first image to obtain a third image.
Preferably, the projection unit 83 is configured to emit the light for forming the second image and project it to the eyes of the user of the electronic device, so that the user views the second image while viewing the first image through the lens.
Preferably, the image acquisition unit 81 is further configured to capture the motion trajectory of the operating body in the target area;
the determining unit 82 includes:
an obtaining subunit 821, configured to obtain the three-dimensional spatial coordinates of N sampling points on the motion trajectory, where N is greater than or equal to 2;
a generating subunit 822, configured to generate a three-dimensional second image corresponding to the motion trajectory based on the three-dimensional spatial coordinates of the N sampling points;
accordingly, the projection unit 83 is further configured to superimpose the three-dimensional second image onto the first image, so that the resulting third image presents a stereoscopic second image on the first image.
Those skilled in the art will understand that the implementation functions of each unit in the portable electronic device of the embodiment of the present invention can be understood by referring to the related description of the information processing method. The functions of the units in the portable electronic device according to the embodiment of the present invention may be implemented by a program running on a processor, or may be implemented by specific logic circuits.
Fig. 9 is a schematic structural composition diagram of a wearable electronic device according to a fourth embodiment of the present invention. The electronic device includes a support and a lens disposed on the support; the electronic device can maintain a relative positional relationship with the head of a user of the electronic device through the support, and when the electronic device maintains a first relative positional relationship with the head of the user through the support, the user can view a first image of a target area through the lens. The electronic device further includes:
an image acquisition unit 91, configured to acquire the motion trajectory of an operating body when the user of the electronic device views the first image of the target area through the lens;
a determining unit 92, configured to determine, according to the motion trajectory of the operating body, a second image corresponding to the motion trajectory;
a projection unit 93, configured to project the second image onto the first image to obtain a third image.
Preferably, the projection unit 93 is configured to emit the light for forming the second image and project it to the eyes of the user of the electronic device, so that the user views the second image while viewing the first image through the lens.
Preferably, the determining unit 92 includes:
a judging subunit 921, configured to judge whether the motion trajectory satisfies a first condition to obtain a judgment result;
a determining subunit 922, configured to determine, when the judgment result shows that the motion trajectory satisfies the first condition, a first sub-region in the target area viewed through the lens according to the motion trajectory, and to take the image in the first sub-region as a second image corresponding to the motion trajectory;
the image acquisition unit 91 is further configured to acquire the second image and store the second image in a storage unit.
The projection unit 93 includes:
an acquiring subunit 931, configured to acquire the second image from the storage unit;
a superimposing subunit 932, configured to superimpose the second image on the first image to obtain a third image.
Those skilled in the art will understand that the implementation functions of each unit in the portable electronic device of the embodiment of the present invention can be understood by referring to the related description of the information processing method. The functions of the units in the portable electronic device according to the embodiment of the present invention may be implemented by a program running on a processor, or may be implemented by specific logic circuits.
Fig. 10 is a schematic structural composition diagram of a wearable electronic device according to a fifth embodiment of the present invention. The electronic device includes a support and a lens disposed on the support; the electronic device can maintain a relative positional relationship with the head of a user of the electronic device through the support, and when the electronic device maintains a first relative positional relationship with the head of the user through the support, the user can view a first image of a target area through the lens. The electronic device further includes:
an image acquisition unit 11, configured to acquire the motion trajectory of an operating body when the user of the electronic device views the first image of the target area through the lens;
a determining unit 12, configured to determine, according to the motion trajectory of the operating body, a second image corresponding to the motion trajectory;
a projection unit 13, configured to project the second image onto the first image to obtain a third image.
Preferably, the projection unit 13 is configured to emit the light for forming the second image and project it to the eyes of the user of the electronic device, so that the user views the second image while viewing the first image through the lens.
Preferably, the determining unit 12 includes:
a judging subunit 121, configured to judge whether the motion trajectory satisfies a first condition to obtain a judgment result;
a determining subunit 122, configured to determine, when the judgment result shows that the motion trajectory satisfies the first condition, a first sub-region in the target area viewed through the lens according to the motion trajectory, and to take the image in the first sub-region as a second image corresponding to the motion trajectory;
the image acquisition unit 11 is further configured to acquire the second image and store the second image in a storage unit.
The determining subunit 122 is further configured to, when it is detected that the first sub-region corresponding to the motion trajectory formed by the operating body is enlarged/reduced, take the image in the enlarged/reduced first sub-region as the second image corresponding to the motion trajectory.
Those skilled in the art will understand that the implementation functions of each unit in the portable electronic device of the embodiment of the present invention can be understood by referring to the related description of the information processing method. The functions of the units in the portable electronic device according to the embodiment of the present invention may be implemented by a program running on a processor, or may be implemented by specific logic circuits.
The technical schemes described in the embodiments of the present invention can be combined arbitrarily without conflict.
The above-described embodiments of the apparatus are merely illustrative. For example, the division into units is only a division by logical function, and other divisions are possible in actual practice: multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units, and some or all of the units may be selected according to actual needs to achieve the purpose of the embodiment.
In addition, the functional units in the embodiments of the present invention may all be integrated into one processing unit, each unit may serve as a separate unit, or two or more units may be integrated into one unit; the integrated unit may be implemented in the form of hardware, or in the form of hardware plus software functional units.
The above description covers only specific embodiments of the present invention, but the scope of the present invention is not limited thereto; any changes or substitutions that a person skilled in the art can readily conceive within the technical scope disclosed by the present invention shall fall within the scope of the present invention.
Claims (8)
- 1. An information processing method, applied to a wearable electronic device, the electronic device comprising a support and an image acquisition unit, a lens, and a projection unit disposed on the support, the electronic device being capable of maintaining a relative positional relationship with the head of a user of the electronic device through the support, and the user being capable of viewing a first image of a target area through the lens when the electronic device maintains a first relative positional relationship with the head of the user through the support, the information processing method comprising: when the user of the electronic device views the first image of the target area through the lens, acquiring the motion trajectory of an operating body by using the image acquisition unit; determining a second image corresponding to the motion trajectory according to the motion trajectory of the operating body; and projecting the second image onto the first image to obtain a third image; wherein determining the second image corresponding to the motion trajectory according to the motion trajectory of the operating body comprises: judging whether the motion trajectory satisfies a first condition to obtain a judgment result; when the judgment result shows that the motion trajectory satisfies the first condition, determining a first sub-region in the target area viewed through the lens according to the motion trajectory; taking the image in the first sub-region as the second image corresponding to the motion trajectory; and acquiring the second image by using the image acquisition unit and storing the second image in a storage unit.
- 2. The information processing method according to claim 1, wherein projecting the second image onto the first image to obtain a third image comprises: acquiring the second image from the storage unit; and superimposing the second image on the first image to obtain the third image.
- 3. The information processing method according to claim 1, further comprising: when it is detected that the first sub-region corresponding to the motion trajectory formed by the operating body is enlarged/reduced, taking the image in the enlarged/reduced first sub-region as the second image corresponding to the motion trajectory.
- 4. The information processing method according to any one of claims 1-3, wherein projecting the second image onto the first image to obtain a third image comprises: emitting the light for forming the second image from the projection unit and projecting it to the eyes of the user of the electronic device, so that the user views the second image while viewing the first image through the lens.
- 5. A wearable electronic device, the electronic device comprising a support and a lens disposed on the support, the electronic device being capable of maintaining a relative positional relationship with the head of a user of the electronic device through the support, the user being capable of viewing a first image of a target area through the lens when the electronic device maintains a first relative positional relationship with the head of the user through the support, the electronic device further comprising: an image acquisition unit, configured to acquire the motion trajectory of an operating body when the user of the electronic device views the first image of the target area through the lens; a determining unit, configured to determine, according to the motion trajectory of the operating body, a second image corresponding to the motion trajectory; and a projection unit, configured to project the second image onto the first image to obtain a third image; wherein the determining unit comprises: a judging subunit, configured to judge whether the motion trajectory satisfies a first condition to obtain a judgment result; and a determining subunit, configured to determine, when the judgment result shows that the motion trajectory satisfies the first condition, a first sub-region in the target area viewed through the lens according to the motion trajectory, and to take the image in the first sub-region as the second image corresponding to the motion trajectory; and the image acquisition unit is further configured to acquire the second image and store the second image in a storage unit.
- 6. The wearable electronic device according to claim 5, wherein the projection unit comprises: an acquisition subunit, configured to acquire the second image from the storage unit; and a superimposing subunit, configured to superimpose the second image on the first image to obtain the third image.
- 7. The wearable electronic device according to claim 5, wherein the determining subunit is further configured to, when it is detected that the first sub-region corresponding to the motion trajectory formed by the operating body is enlarged/reduced, take the image in the enlarged/reduced first sub-region as the second image corresponding to the motion trajectory.
- 8. The wearable electronic device according to any one of claims 5-7, wherein the projection unit is configured to emit the light for forming the second image and project it to the eyes of the user of the electronic device, so that the user views the second image while viewing the first image through the lens.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410510378.6A CN104298350B (en) | 2014-09-28 | 2014-09-28 | information processing method and wearable electronic equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN104298350A CN104298350A (en) | 2015-01-21 |
CN104298350B true CN104298350B (en) | 2020-01-31 |
Family
ID=52318112
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201410510378.6A Active CN104298350B (en) | 2014-09-28 | 2014-09-28 | information processing method and wearable electronic equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN104298350B (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104834383A (en) * | 2015-05-26 | 2015-08-12 | 联想(北京)有限公司 | Input method and electronic device |
CN106095349B (en) * | 2016-06-14 | 2019-04-26 | 无锡天脉聚源传媒科技有限公司 | A kind of signature Method of printing and device |
CN108509430A (en) * | 2018-04-10 | 2018-09-07 | 京东方科技集团股份有限公司 | Intelligent glasses and its interpretation method |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102445756A (en) * | 2010-11-18 | 2012-05-09 | 微软公司 | Automatic focus improvement for augmented reality displays |
CN102779000A (en) * | 2012-05-03 | 2012-11-14 | 乾行讯科(北京)科技有限公司 | User interaction system and method |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9292973B2 (en) * | 2010-11-08 | 2016-03-22 | Microsoft Technology Licensing, Llc | Automatic variable virtual focus for augmented reality displays |
CN102609734A (en) * | 2011-10-25 | 2012-07-25 | 北京新岸线网络技术有限公司 | Machine vision-based handwriting recognition method and system |
- 2014-09-28: Application CN201410510378.6A filed in China; granted as patent CN104298350B (status: Active)
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102445756A (en) * | 2010-11-18 | 2012-05-09 | 微软公司 | Automatic focus improvement for augmented reality displays |
CN102779000A (en) * | 2012-05-03 | 2012-11-14 | 乾行讯科(北京)科技有限公司 | User interaction system and method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||