CN104199556B - Information processing method and device - Google Patents

Information processing method and device

Info

Publication number
CN104199556B
CN104199556B (application CN201410486659.2A)
Authority
CN
China
Prior art keywords
distance
plane
action
display
virtual plane
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201410486659.2A
Other languages
Chinese (zh)
Other versions
CN104199556A (en)
Inventor
温泽中
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lenovo Beijing Ltd filed Critical Lenovo Beijing Ltd
Priority to CN201410486659.2A
Publication of CN104199556A
Application granted
Publication of CN104199556B
Legal status: Active
Anticipated expiration


Landscapes

  • User Interface Of Digital Computer (AREA)

Abstract

The present invention provides an information processing method and apparatus. The information processing method includes: displaying display content using head-mounted display technology; acquiring, in a first direction, a first distance between the virtual plane where the display content is located and a first reference plane; acquiring, through an image acquisition device, a first action of an operation body in the first direction; acquiring, in the first direction, a second distance between the operation plane where the first action is located and the first reference plane; and determining, according to the first distance and the second distance, the positional relationship between the operation plane where the first action is located and the virtual plane where the display content is located, thereby completing collision detection between the operation body and the virtual plane. Compared with the prior art, the embodiments of the present invention simplify collision detection by performing it through the first distance and the second distance, and reduce the amount of data used in the collision detection process, thereby improving detection efficiency.

Description

Information processing method and device
Technical Field
The present invention relates to the field of identification technologies, and in particular, to an information processing method and apparatus.
Background
Augmented Reality (AR) technology is a new technology developed on the basis of virtual reality. A computer system generates a virtual object and superimposes it on a real scene, thereby 'augmenting' reality.
The technology currently adopted by computer systems to generate virtual objects is three-dimensional (3D) model construction: a virtual object is built as a 3D model and displayed in the real scene. When the user's finger touches the virtual object, the computer system also constructs a 3D model of the finger according to its position in space. If the 3D model of the finger and the 3D model of the virtual object have an intersection in space, the finger has touched the virtual object, and collision detection between the finger and the virtual object is achieved.
However, detecting whether the user's finger collides with the virtual object through 3D model construction requires building 3D models of both the finger and the virtual object, and then judging whether the two models intersect in space. The complexity of 3D model construction therefore increases the complexity of collision detection, and the construction time and the amount of data required for construction reduce collision detection efficiency.
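For contrast, even the cheapest form of the prior-art check — an axis-aligned bounding-box (AABB) intersection test — still needs full 3D extents for both models. The following is an illustrative sketch only, not the patent's method; the finger and object extents are hypothetical values in metres:

```python
def aabb_overlap(box_a, box_b):
    """Each box is ((xmin, ymin, zmin), (xmax, ymax, zmax)).
    Two boxes intersect iff their intervals overlap on every axis."""
    (a_min, a_max), (b_min, b_max) = box_a, box_b
    return all(a_min[i] <= b_max[i] and b_min[i] <= a_max[i] for i in range(3))

# Hypothetical extents: a fingertip near z = 0.5 m and a flat virtual object at z = 0.5 m.
finger = ((0.0, 0.0, 0.48), (0.02, 0.02, 0.52))
virtual_object = ((-0.1, -0.1, 0.50), (0.1, 0.1, 0.50))
print(aabb_overlap(finger, virtual_object))  # True: the models intersect
```

Even this simplest test consumes six coordinates per model; the invention below replaces it with a comparison of two scalar distances.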
Disclosure of Invention
In view of this, embodiments of the present invention provide an information processing method and apparatus applied to a wearable electronic device, which simplify a collision detection method and improve detection efficiency.
To achieve this, the invention provides the following technical solutions:
the embodiment of the invention provides an information processing method, which is applied to wearable electronic equipment, wherein the wearable electronic equipment comprises an image acquisition device, and the information processing method comprises the following steps:
displaying the display content using head-mounted display technology;
acquiring a first distance between the virtual plane where the display is located and a first reference plane in a first direction, wherein the axis of the first reference plane in the first direction is parallel to the axis of the image acquisition device in the first direction;
acquiring a first action of an operation body in a first direction through the image acquisition device;
acquiring a second distance of the operation plane where the first action is located in the first direction according to the first reference plane;
and determining the position relation between the operation plane where the first action is located and the virtual plane where the display is located according to the first distance and the second distance.
Preferably, when it is determined that the operation plane in which the first action is located is the virtual plane in which the display is located, the method further includes:
acquiring a first image of the operation body when the operation body executes the first action through the image acquisition device;
identifying the first image by applying an image identification technology to obtain a position formed by the operation body on the virtual plane when the first action is executed;
acquiring and executing a first instruction corresponding to the first action based on a position formed on the virtual plane during the first action;
displaying display content obtained after the first instruction is executed on the virtual plane by using head-mounted display technology.
Preferably, the obtaining a second distance of the operation plane where the first action is located in the first direction according to the first reference plane includes:
the focusing distance relative to the operation plane obtained by using the automatic focusing technology of the image acquisition device is the second distance.
Preferably, the method further comprises: acquiring a third distance of the image acquisition device in the first direction according to the first reference plane;
acquiring a fourth distance of the image acquisition device in the first direction according to the operation plane, wherein the fourth distance is a focusing distance relative to the virtual plane, which is obtained by using an automatic focusing technology of the image acquisition device;
the sum of the fourth distance and the third distance is the second distance, or the difference between the fourth distance and the third distance is the second distance.
Preferably, determining a position relationship between the operation plane where the first action is located and the virtual plane where the display is located according to the first distance and the second distance includes:
comparing the first distance with the second distance to obtain a comparison result;
and when the comparison result shows that the first distance and the second distance are the same, determining that the operation plane where the first action is located is the virtual plane where the display is located.
An embodiment of the present invention further provides an information processing apparatus applied to a wearable electronic device, where the wearable electronic device includes an image acquisition device, and the information processing apparatus includes:
a display unit for displaying display content using head-mounted display technology;
the first obtaining unit is used for obtaining a first distance between the virtual plane where the display is located and a first reference plane in a first direction, and an axis of the first reference plane in the first direction is parallel to an axis of the image acquisition device in the first direction;
the acquisition unit is used for acquiring a first action of the operation body in a first direction through the image acquisition device;
the second obtaining unit is used for obtaining a second distance of the operation plane where the first action is located in the first direction according to the first reference plane;
and the determining unit is used for determining the position relation between the operation plane where the first action is located and the virtual plane where the display is located according to the first distance and the second distance.
Preferably, the apparatus further comprises:
the third acquisition unit is used for acquiring a first image of the operation body when the operation body executes the first action through the image acquisition device;
the recognition unit is used for recognizing the first image by applying an image recognition technology to obtain a position formed by the operation body on the virtual plane when the first action is executed;
the execution unit is used for acquiring and executing a first instruction corresponding to the first action based on a position formed on the virtual plane during the first action;
the display unit is further used for displaying display content obtained after the first instruction is executed on the virtual plane by using a head-mounted display technology.
Preferably, the acquiring, by the second acquiring unit, a second distance of the operation plane where the first action is located in the first direction according to the first reference plane includes: the focusing distance relative to the operation plane obtained by using the automatic focusing technology of the image acquisition device is the second distance.
Preferably, the acquiring, by the second acquiring unit, a second distance of the operation plane where the first action is located in the first direction according to the first reference plane includes: and acquiring a third distance of the image acquisition device in the first direction according to the first reference plane and acquiring a fourth distance of the image acquisition device in the first direction according to the operation plane, wherein the sum of the fourth distance and the third distance is used as the second distance, or the difference between the fourth distance and the third distance is used as the second distance, and the fourth distance is a focusing distance relative to the virtual plane, which is obtained by using an automatic focusing technology of the image acquisition device.
Preferably, the determining unit determines, according to the first distance and the second distance, a position relationship between an operation plane where the first action is located and a virtual plane where the display is located, including:
comparing the first distance with the second distance to obtain a comparison result;
and when the comparison result shows that the first distance and the second distance are the same, determining that the operation plane where the first action is located is the virtual plane where the display is located.
It can be seen from the foregoing technical solutions that the information processing method and apparatus provided in the embodiments of the present invention obtain a first distance, in a first direction, between the virtual plane where the display content is located and a first reference plane, and a second distance, in the first direction, between the operation plane where the first action is located and the first reference plane, and then determine the positional relationship between the two planes according to the first distance and the second distance, thereby completing collision detection between the operation body and the virtual plane. Compared with the prior art, the embodiments of the present invention simplify collision detection by performing it through the first distance and the second distance, and reduce the amount of data used in the collision detection process, thereby improving detection efficiency.
Drawings
To illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings used in their description are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and those skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is a flow chart of an information processing method according to an embodiment of the present invention;
fig. 2 is a schematic structural diagram of a wearable electronic device according to an embodiment of the present invention;
fig. 3 is another schematic structural diagram of a wearable electronic device according to an embodiment of the present invention;
FIG. 4 is another flow chart of an information processing method according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of a virtual interface according to an embodiment of the present invention;
FIG. 6 is a schematic structural diagram of an information processing apparatus according to an embodiment of the present invention;
fig. 7 is another schematic structural diagram of an information processing apparatus according to an embodiment of the present invention.
Detailed Description
The central idea of the information processing method and apparatus provided by the embodiments of the present invention is as follows: distance determination replaces the existing approach of performing collision detection with 3D models, reducing the amount of data in the collision detection process and thereby improving detection efficiency.
In order to make those skilled in the art better understand the present invention, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, a flowchart of an information processing method according to an embodiment of the present invention is shown, in which the information processing method can be applied to a wearable electronic device, and the wearable electronic device includes an image capturing device, where the image capturing device is configured to capture an image in a first direction.
As shown in fig. 2, the wearable electronic device may be a pair of wearable glasses, an image capturing device 2 (e.g., a camera) is mounted on one of the glasses legs 1 of the glasses, and an image in front of the eyes is captured by the image capturing device 2 with the front of the eyes being looked at as a first direction. The information processing method shown in fig. 1 applied to the wearable electronic device may include the following steps:
101: display content is displayed using head-mounted display technology. The head-mounted display technology may project the display content on a virtual plane in the first direction by using the principle of optical reflection, and the height of the virtual plane may be parallel to the human eye, so as to facilitate the human eye to refer to the display content, as shown in fig. 3. Fig. 3 shows the wearable glasses shown in fig. 2 using a head-mounted display technology to display the display content in a virtual plane located in front of the front view of the human eyes.
To enable the use of head-mounted display technology, the wearable glasses shown in fig. 2 may also be equipped with a HUD (Head-Up Display), as shown in fig. 3. The HUD 3 can display content using head-mounted display technology, with the axis of the HUD 3 parallel to the axis of the image capture device 2.
102: and acquiring a first distance between the displayed virtual plane and a first reference plane in the first direction, wherein the axis of the first reference plane in the first direction is parallel to the axis of the image acquisition device in the first direction.
In the embodiment of the present invention, the first reference plane is the reference plane for obtaining the first distance and the second distance. It may be the plane where the mirror surface of the wearable glasses shown in fig. 2 is located; which plane is specifically selected may depend on the configuration of the HUD 3 in the wearable electronic device.
For example, when the focus of the HUD3 is located on the plane of the mirror surface of the wearable glasses shown in fig. 2, the plane of the mirror surface can be directly used as the first reference plane; when the focus of the HUD3 is not in the plane of the mirror surface of the wearable glasses shown in fig. 2, the plane of the focus of the HUD3 is taken as the first reference plane, and the plane of the focus is parallel to the plane of the mirror surface.
Thus, when the plane in which the focus of the HUD 3 is located is the first reference plane, the first distance may be the focal length of the virtual plane formed when the head-mounted display technology is used. The focal length is the distance from the optical center of the lens to the point where the collected light converges, and the virtual plane is formed at that focal-length position; the focal length used when forming the virtual plane may therefore serve as the first distance. When the HUD 3 is used for head-mounted display, the first distance is the focal length set when the HUD 3 was designed.
If the plane on which the focus of the HUD3 is located is not the first reference plane, a fifth distance between the focus of the HUD3 and the first reference plane may be first obtained, and then the first distance may be obtained by determining a calculation method between the fifth distance and the focal length according to a positional relationship between the plane on which the focus of the HUD3 is located and the first reference plane. The method specifically comprises the following steps:
in a first direction, the first distance is the sum of the fifth distance and the focal length when the plane of focus of the HUD3 is between the first reference plane and the virtual plane; the first distance is the difference between the fifth distance and the focal length when the first reference plane is located between the plane of the focus of the HUD3 and the virtual plane.
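The three cases above for deriving the first distance can be sketched as follows. This is an illustrative Python sketch, not part of the patent; the function name and the numeric values in the usage note are hypothetical:

```python
def first_distance(focal_length, fifth_distance=0.0, focus_between=True):
    """Distance, in the first direction, from the first reference plane
    to the virtual plane.

    focal_length   -- HUD focal length, i.e. the distance from the HUD
                      focus plane to the virtual plane it forms.
    fifth_distance -- distance between the HUD focus plane and the first
                      reference plane (0 when the two planes coincide).
    focus_between  -- True when the HUD focus plane lies between the
                      reference plane and the virtual plane (sum case);
                      False when the reference plane lies between the HUD
                      focus plane and the virtual plane (difference case).
    """
    if fifth_distance == 0.0:
        # The HUD focus plane is itself the first reference plane.
        return focal_length
    if focus_between:
        return fifth_distance + focal_length
    # Geometrically the virtual plane lies beyond the reference plane,
    # so the difference case is focal length minus the fifth distance.
    return focal_length - fifth_distance
```

For example, with a hypothetical 0.5 m focal length and a 0.1 m fifth distance, the sum case gives 0.6 m and the difference case gives 0.4 m.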
103: a first motion of an operation body in a first direction is acquired by an image acquisition device.
104: and acquiring a second distance of the operation plane where the first action is located according to the first reference plane in the first direction.
In the embodiment of the present invention, the operation plane may be a plane where an action point is located when the operation body performs the first action, and the plane where the action point is located is parallel to the first reference plane. An action point of the first action can be captured by an automatic focusing technology of the image acquisition device, and a focusing distance relative to the operation plane obtained when the action point is captured can be obtained by using the automatic focusing technology. Thus, when the focal point of the image acquisition device is located in the first reference plane, the focal distance of the image acquisition device is the second distance.
If the focus of the image capturing device is not located in the first reference plane, it is necessary to first obtain a third distance of the image capturing device in the first direction according to the first reference plane and a fourth distance of the image capturing device in the first direction according to the operation plane, and then obtain the second distance according to the third distance and the fourth distance, which may specifically be:
in the first direction, if the plane where the focus of the image acquisition device is located between the first reference plane and the operation plane, the second distance is the sum of the third distance and the fourth distance; if the first reference plane is located between the plane of the focal point of the image acquisition device and the operation plane, the second distance is the difference between the fourth distance and the third distance.
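The second distance is derived symmetrically from the autofocus distance and the camera's offset from the reference plane. Again an illustrative sketch, not the patent's implementation; names and values are hypothetical:

```python
def second_distance(fourth_distance, third_distance=0.0, focus_between=True):
    """Distance, in the first direction, from the first reference plane
    to the operation plane where the first action is located.

    fourth_distance -- autofocus distance from the camera's focus plane to
                       the operation plane; it equals the second distance
                       when the focus plane coincides with the reference plane.
    third_distance  -- distance between the camera focus plane and the
                       first reference plane.
    focus_between   -- True when the camera focus plane lies between the
                       reference plane and the operation plane (sum case);
                       False when the reference plane lies between the
                       focus plane and the operation plane (difference case).
    """
    if third_distance == 0.0:
        return fourth_distance
    if focus_between:
        return third_distance + fourth_distance
    return fourth_distance - third_distance
```

With a hypothetical 0.45 m autofocus reading and a 0.05 m third distance, the sum case yields 0.5 m and the difference case 0.4 m.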
105: and determining the position relation between the operation plane where the first action is located and the virtual plane where the display is located according to the first distance and the second distance.
When the operation plane where the first action is located is determined, according to the first distance and the second distance, to be the virtual plane where the display content is located, the first action is operating directly on the virtual plane, and it can be judged that the operation body collides with the virtual plane; when the operation plane where the first action is located is determined not to be that virtual plane, the first action is not operating directly on the virtual plane, and it can be determined that the operation body does not collide with the virtual plane.
The process of determining the position relationship between the operation plane where the first action is located and the virtual plane where the display is located according to the first distance and the second distance may be: when the difference between the first distance and the second distance is within a first preset range, determining the operation plane where the first action is located as a virtual plane where the display is located; and when the difference between the first distance and the second distance is not in the first preset range, determining that the operation plane where the first action is located is not the virtual plane where the display is located.
Or comparing the first distance with the second distance to obtain a comparison result; when the comparison result shows that the first distance and the second distance are the same, determining the operation plane where the first action is located as a virtual plane where the display is located; and when the comparison result shows that the first distance is different from the second distance, determining that the operation plane where the first action is located is not the virtual plane where the display is located.
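The two determination strategies above — a preset tolerance range, with strict equality as its zero-tolerance special case — can be sketched as follows. This is an illustrative sketch with hypothetical values in metres, not the patent's code:

```python
def planes_coincide(first_distance, second_distance, tolerance=0.0):
    """Collision test: the operation plane is taken to be the virtual plane
    when the two distances differ by no more than the preset tolerance.
    tolerance=0.0 reduces to the strict-equality comparison."""
    return abs(first_distance - second_distance) <= tolerance

# Hypothetical readings: virtual plane at 0.50 m from the reference plane.
print(planes_coincide(0.50, 0.505, tolerance=0.01))  # True  -> collision
print(planes_coincide(0.50, 0.60, tolerance=0.01))   # False -> no collision
```

A nonzero tolerance is the more practical variant, since autofocus distances will rarely match the designed focal length exactly.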
As can be seen from the foregoing technical solutions, the information processing method provided in the embodiments of the present invention may first obtain a first distance, in a first direction, between the virtual plane where the display content is located and a first reference plane, obtain a second distance, in the first direction, between the operation plane where the first action is located and the first reference plane, and determine the positional relationship between the two planes according to the first distance and the second distance, thereby completing collision detection between the operation body and the virtual plane. Compared with the prior art, the embodiments of the present invention simplify collision detection by performing it through the first distance and the second distance, and reduce the amount of data used in the collision detection process, thereby improving detection efficiency.
Referring to fig. 4, another flowchart of an information processing method according to an embodiment of the present invention is shown. It illustrates how to execute a first instruction corresponding to the first action on the display content when the operation plane where the first action is located is determined to be the virtual plane where the display content is located, and may include the following steps:
101: display content is displayed using head-mounted display technology.
102: and acquiring a first distance between the displayed virtual plane and a first reference plane in the first direction, wherein the axis of the first reference plane in the first direction is parallel to the axis of the image acquisition device in the first direction.
103: a first motion of an operation body in a first direction is acquired by an image acquisition device.
104: and acquiring a second distance of the operation plane where the first action is located according to the first reference plane in the first direction.
105: according to the first distance and the second distance, determining the position relation between the operation plane where the first action is located and the virtual plane where the display is located
106: when the operation plane where the first action is located is determined to be the virtual plane where the first action is located, a first image of the operation body when the first action is executed is obtained through the image acquisition device.
When the operation plane where the first action is located is determined to be the virtual plane where the first action is located, it is indicated that the first action is directly operated on the virtual plane, and at this time, the operation corresponding to the first action needs to be performed on the display content displayed on the virtual plane.
107: and identifying the first image by applying an image identification technology to obtain the position of the operation body formed on the virtual plane when the first action is executed. In the embodiment of the present invention, the first action of the operation body is an operation to display content at a certain position in the virtual plane, and therefore, after the first image is acquired, it is necessary to identify the first image by applying an image recognition technique, and obtain the position of the operation body formed on the virtual plane when the first action is performed, by using the correspondence between the pixel and the coordinate in the first image.
108: and acquiring and executing a first instruction corresponding to the first action based on a position formed on the virtual plane during the first action.
The display content to be operated on may be determined based on the position formed on the virtual plane during the first action, and the first instruction is then executed on that display content. As shown in fig. 5, display content with virtual controls is displayed on the virtual plane, and the first action is a click on one of the virtual controls. The clicked virtual control can therefore be determined from the position formed on the virtual plane during the first action; the first instruction corresponding to that virtual control is then obtained and executed on the currently displayed content.
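Mapping the recognized position to a virtual control is a plain point-in-rectangle hit test. The sketch below is illustrative only, not the patent's code; the control names and coordinates are hypothetical plane coordinates:

```python
def hit_control(x, y, controls):
    """Return the name of the virtual control (if any) whose rectangle
    contains the position (x, y) that the first action formed on the
    virtual plane. `controls` maps a control name to a rectangle
    (left, top, right, bottom) in virtual-plane coordinates."""
    for name, (left, top, right, bottom) in controls.items():
        if left <= x <= right and top <= y <= bottom:
            return name
    return None  # the action did not land on any control

# Hypothetical layout of the virtual interface of fig. 5:
controls = {"play": (0, 0, 100, 40), "stop": (120, 0, 220, 40)}
print(hit_control(150, 20, controls))  # "stop": dispatch its first instruction
```

The returned name would then be looked up to obtain and execute the corresponding first instruction, after which step 109 redisplays the updated content.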
109: and displaying the display content obtained after the first instruction is executed on the virtual plane by using the head-mounted display technology.
Corresponding to the above method embodiment, an embodiment of the present invention further provides an information processing apparatus, which is applied to a wearable electronic device, where the wearable electronic device includes an image capturing device, and a schematic structural diagram of the information processing apparatus 10 is shown in fig. 6, and includes: a display unit 11, a first acquisition unit 12, an acquisition unit 13, a second acquisition unit 14 and a determination unit 15.
And a display unit 11 for displaying the display content using a head-mounted display technology. The head-mounted display technology can project display contents on a virtual plane in a first direction by utilizing the principle of optical reflection, and the height of the virtual plane can be parallel to human eyes, so that the human eyes can look up the display contents. In the embodiment of the present invention, the display unit 11 may display the display content using a head-mounted display technology through the HUD in the wearable electronic device.
The first obtaining unit 12 is configured to obtain a first distance between a virtual plane where the display is located and a first reference plane in a first direction, where an axis of the first reference plane in the first direction is parallel to an axis of the image capturing device in the first direction.
In the embodiment of the present invention, the first reference plane is a reference plane for obtaining the first distance and the second distance, which may be a plane where a mirror surface of the wearable glasses shown in fig. 2 is located, and which plane is specifically selected may refer to the setting of the HUD3 in the wearable electronic device shown in fig. 2.
For example, when the focus of the HUD3 is located on the plane of the mirror surface of the wearable glasses shown in fig. 2, the plane of the mirror surface can be directly used as the first reference plane; when the focus of the HUD3 is not in the plane of the mirror surface of the wearable glasses shown in fig. 2, the plane of the focus of the HUD3 is taken as the first reference plane, and the plane of the focus is parallel to the plane of the mirror surface.
Thus, when the plane in which the focus of the HUD 3 is located is the first reference plane, the first distance may be the focal length of the virtual plane formed when the head-mounted display technology is used. The focal length is the distance from the optical center of the lens to the point where the collected light converges, and the virtual plane is formed at that focal-length position; the focal length used when forming the virtual plane may therefore serve as the first distance. When the HUD 3 is used for head-mounted display, the first distance is the focal length set when the HUD 3 was designed.
If the plane on which the focus of the HUD3 is located is not the first reference plane, a fifth distance between the focus of the HUD3 and the first reference plane may be first obtained, and then the first distance may be obtained by determining a calculation method between the fifth distance and the focal length according to a positional relationship between the plane on which the focus of the HUD3 is located and the first reference plane. The method specifically comprises the following steps:
in a first direction, the first distance is the sum of the fifth distance and the focal length when the plane of focus of the HUD3 is between the first reference plane and the virtual plane; the first distance is the difference between the fifth distance and the focal length when the first reference plane is located between the plane of the focus of the HUD3 and the virtual plane.
The acquisition unit 13 is configured to acquire, by the image acquisition device, a first action of the operation body in the first direction.
The second obtaining unit 14 is configured to obtain a second distance between the operation plane where the first action is located and the first reference plane in the first direction. In the embodiment of the present invention, the operation plane may be the plane in which the action point lies when the operation body performs the first action, this plane being parallel to the first reference plane. The action point of the first action can be captured by the automatic focusing technology of the image acquisition device, which also yields the focusing distance to the operation plane at the moment the action point is captured. Thus, when the focal point of the image acquisition device is located in the first reference plane, this focusing distance of the image acquisition device is the second distance.
If the focus of the image acquisition device is not located in the first reference plane, it is necessary to first obtain a third distance of the image acquisition device relative to the first reference plane in the first direction and a fourth distance of the image acquisition device relative to the operation plane in the first direction, and then obtain the second distance from the third distance and the fourth distance, which may specifically be:
in the first direction, if the plane in which the focus of the image acquisition device is located lies between the first reference plane and the operation plane, the second distance is the sum of the third distance and the fourth distance; if the first reference plane lies between the plane of the focus of the image acquisition device and the operation plane, the second distance is the difference between the fourth distance and the third distance.
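Analogously to the first distance, the second distance reduces to a sum or a difference of the two camera-side readings. A minimal sketch under the same assumptions as before (names are illustrative, not from the patent):

```python
def second_distance(third_distance, fourth_distance, focus_plane_between):
    """Distance between the operation plane and the first reference plane.

    third_distance      -- camera focus plane to first reference plane
    fourth_distance     -- camera focus plane to operation plane
                           (the autofocus reading)
    focus_plane_between -- True when the camera focus plane lies between
                           the first reference plane and the operation plane
    """
    if focus_plane_between:
        # reference plane ... camera focus plane ... operation plane
        return third_distance + fourth_distance
    # camera focus plane ... reference plane ... operation plane
    return fourth_distance - third_distance
```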
The determining unit 15 is configured to determine a position relationship between the operation plane where the first action is located and the virtual plane where the display is located according to the first distance and the second distance.
When it is determined from the first distance and the second distance that the operation plane where the first action is located is the virtual plane where the display is located, the first action is being performed directly on the virtual plane, and it can be judged that the operation body has collided with the virtual plane; when it is determined from the first distance and the second distance that the operation plane where the first action is located is not the virtual plane where the display is located, the first action is not being performed directly on the virtual plane, and it can be determined that the operation body has not collided with the virtual plane.
The process of determining, by the determining unit 15, the position relationship between the operation plane where the first action is located and the virtual plane where the display is located according to the first distance and the second distance may be: when the difference between the first distance and the second distance is within a first preset range, determining the operation plane where the first action is located as a virtual plane where the display is located; and when the difference between the first distance and the second distance is not in the first preset range, determining that the operation plane where the first action is located is not the virtual plane where the display is located.
Alternatively, the first distance may be compared with the second distance to obtain a comparison result: when the comparison result shows that the first distance and the second distance are the same, the operation plane where the first action is located is determined to be the virtual plane where the display is located; when the comparison result shows that the first distance and the second distance differ, the operation plane where the first action is located is determined not to be the virtual plane where the display is located.
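Both decision rules (the "first preset range" variant and the strict-equality variant) can be expressed as a single tolerance check; with a tolerance of zero it degenerates to the equality comparison. An illustrative sketch (names are not from the patent):

```python
def collides(first_distance, second_distance, tolerance=0.0):
    """Return True when the operation plane is taken to coincide with the
    virtual plane, i.e. the operating body has collided with it.

    tolerance=0 reproduces the strict equality comparison; a positive
    tolerance reproduces the 'first preset range' variant.
    """
    return abs(first_distance - second_distance) <= tolerance
```

In practice a small positive tolerance would absorb autofocus measurement noise rather than demanding exact agreement of the two distances.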
As can be seen from the foregoing technical solutions, the information processing apparatus provided in the embodiments of the present invention may first obtain a first distance between the virtual plane where the display is located and a first reference plane in a first direction, obtain a second distance between the operation plane where the first action is located and the first reference plane in the first direction, and determine the position relationship between the operation plane where the first action is located and the virtual plane where the display is located according to the first distance and the second distance, so as to complete collision detection between the operation body and the virtual plane. Compared with the prior art, the embodiment of the invention simplifies collision detection by performing it through the first distance and the second distance, and reduces the amount of data used in the collision detection process, thereby improving detection efficiency.
Referring to fig. 7, which shows another schematic structural diagram of an information processing apparatus according to an embodiment of the present invention, on the basis of fig. 6, the information processing apparatus further includes: a third acquisition unit 16, a recognition unit 17 and an execution unit 18.
The third acquiring unit 16 is configured to acquire, by the image acquisition device, a first image of the operating body when the operating body performs the first action. When it is determined that the operation plane where the first action is located is the virtual plane where the display is located, the first action is being performed directly on the virtual plane, and the operation corresponding to the first action needs to be performed on the display content displayed on the virtual plane.
The recognition unit 17 is used for recognizing the first image by applying an image recognition technology to obtain the position formed by the operation body on the virtual plane when the first action is executed. In the embodiment of the present invention, the first action of the operation body is an operation on display content at a certain position in the virtual plane; therefore, after the first image is acquired, the first image is recognized by applying an image recognition technology, and the position formed by the operation body on the virtual plane is obtained from the correspondence between pixels in the first image and coordinates on the virtual plane.
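The pixel-to-coordinate correspondence mentioned above could, under the simplifying assumption of a linear, distortion-free mapping between the captured image and the virtual plane, be sketched as follows (all names and the mapping itself are hypothetical, not specified by the patent):

```python
def pixel_to_plane(px, py, image_size, plane_size):
    """Map a pixel (px, py) in the first image to a position on the
    virtual plane, assuming a simple proportional correspondence
    between image pixels and virtual-plane coordinates."""
    img_w, img_h = image_size      # captured image resolution in pixels
    plane_w, plane_h = plane_size  # virtual plane extent in plane units
    return (px / img_w * plane_w, py / img_h * plane_h)
```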
And the execution unit 18 is used for acquiring and executing a first instruction corresponding to the first action based on the position formed on the virtual plane during the first action.
The execution unit 18 may determine the display content currently to be operated on based on the position formed on the virtual plane at the time of the first action, and then execute the first instruction on that display content. For example, as shown in fig. 5, display content with virtual controls is displayed on the virtual plane, and the first action is a click operation on one of the virtual controls; the clicked virtual control can be determined from the position formed on the virtual plane during the first action, the first instruction corresponding to that virtual control is then obtained, and the first instruction is executed on the currently displayed content. The display content resulting from execution of the first instruction may be displayed on the virtual plane by the display unit 11 using the head-mounted display technology.
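The control lookup described above is a hit test over the regions occupied by the virtual controls. A sketch under the assumption that each control carries a rectangular region and an associated instruction (the data layout is illustrative, not from the patent):

```python
def hit_control(position, controls):
    """Return the instruction of the virtual control whose rectangle
    contains the given virtual-plane position, or None if the first
    action missed every control."""
    x, y = position
    for ctrl in controls:
        cx, cy, w, h = ctrl["rect"]  # top-left corner plus width/height
        if cx <= x <= cx + w and cy <= y <= cy + h:
            return ctrl["instruction"]
    return None
```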
It should be noted that, in the present specification, the embodiments are all described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments may be referred to each other. For the device-like embodiment, since it is basically similar to the method embodiment, the description is simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
Finally, it should also be noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium, and can include the processes of the embodiments of the methods described above when the computer program is executed. The storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), or the like.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. An information processing method is applied to wearable electronic equipment, the wearable electronic equipment comprises an image acquisition device, and the information processing method comprises the following steps:
displaying the display content using head-mounted display technology;
acquiring a first distance between the virtual plane where the display is located and a first reference plane in a first direction, wherein the axis of the first reference plane in the first direction is parallel to the axis of the image acquisition device in the first direction;
acquiring a first action of an operation body in a first direction through the image acquisition device;
acquiring a second distance of the operation plane where the first action is located in the first direction according to the first reference plane;
and determining the position relation between the operation plane of the first action and the virtual plane of the display according to the first distance and the second distance so as to determine whether the operation plane of the first action is the virtual plane of the display according to the position relation.
2. The method according to claim 1, wherein when it is determined that the operation plane where the first action is located is the virtual plane where the display is located, the method further comprises:
acquiring a first image of the operation body when the operation body executes the first action through the image acquisition device;
identifying the first image by applying an image identification technology to obtain a position formed by the operation body on the virtual plane when the first action is executed;
acquiring and executing a first instruction corresponding to the first action based on a position formed on the virtual plane during the first action;
displaying display content obtained after the first instruction is executed on the virtual plane by using head-mounted display technology.
3. The method of claim 1, wherein obtaining a second distance of the operation plane in which the first action is located in the first direction from the first reference plane comprises:
the focusing distance relative to the operation plane obtained by using the automatic focusing technology of the image acquisition device is the second distance.
4. The method of claim 1, further comprising: acquiring a third distance of the image acquisition device in the first direction according to the first reference plane;
acquiring a fourth distance of the image acquisition device in the first direction relative to the operation plane, wherein the fourth distance is a focusing distance relative to the operation plane obtained by using an automatic focusing technology of the image acquisition device;
the sum of the fourth distance and the third distance is the second distance, or the difference between the fourth distance and the third distance is the second distance.
5. The method according to claim 3 or 4, wherein determining the position relationship between the operation plane where the first action is located and the virtual plane where the display is located according to the first distance and the second distance comprises:
comparing the first distance with the second distance to obtain a comparison result;
and when the comparison result shows that the first distance and the second distance are the same, determining that the operation plane where the first action is located is the virtual plane where the display is located.
6. An information processing device applied to wearable electronic equipment, the wearable electronic equipment comprising an image acquisition device, characterized in that the information processing device comprises:
a display unit for displaying display content using head-mounted display technology;
the first obtaining unit is used for obtaining a first distance between the virtual plane where the display is located and a first reference plane in a first direction, and an axis of the first reference plane in the first direction is parallel to an axis of the image acquisition device in the first direction;
the acquisition unit is used for acquiring a first action of the operation body in a first direction through the image acquisition device;
the second obtaining unit is used for obtaining a second distance of the operation plane where the first action is located in the first direction according to the first reference plane;
and the determining unit is used for determining the position relationship between the operation plane where the first action is located and the virtual plane where the display is located according to the first distance and the second distance so as to determine whether the operation plane where the first action is located is the virtual plane where the display is located according to the position relationship.
7. The apparatus of claim 6, further comprising:
the third acquisition unit is used for acquiring a first image of the operation body when the operation body executes the first action through the image acquisition device;
the recognition unit is used for recognizing the first image by applying an image recognition technology to obtain a position formed by the operation body on the virtual plane when the first action is executed;
the execution unit is used for acquiring and executing a first instruction corresponding to the first action based on a position formed on the virtual plane during the first action;
the display unit is further used for displaying display content obtained after the first instruction is executed on the virtual plane by using a head-mounted display technology.
8. The apparatus according to claim 6, wherein the second obtaining unit obtaining the second distance of the operation plane where the first action is located in the first direction according to the first reference plane comprises: the focusing distance relative to the operation plane obtained by using the automatic focusing technology of the image acquisition device is the second distance.
9. The apparatus according to claim 6, wherein the second obtaining unit obtaining the second distance of the operation plane where the first action is located in the first direction according to the first reference plane comprises: acquiring a third distance of the image acquisition device in the first direction according to the first reference plane and a fourth distance of the image acquisition device in the first direction according to the operation plane, wherein the sum of the fourth distance and the third distance is used as the second distance, or the difference between the fourth distance and the third distance is used as the second distance, and the fourth distance is a focusing distance relative to the operation plane obtained by using an automatic focusing technology of the image acquisition device.
10. The apparatus according to claim 8 or 9, wherein the determining unit determines a positional relationship between an operation plane in which the first action is located and a virtual plane in which the display is located according to the first distance and the second distance, and includes:
comparing the first distance with the second distance to obtain a comparison result;
and when the comparison result shows that the first distance and the second distance are the same, determining that the operation plane where the first action is located is the virtual plane where the display is located.
CN201410486659.2A 2014-09-22 2014-09-22 A kind of information processing method and device Active CN104199556B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410486659.2A CN104199556B (en) 2014-09-22 2014-09-22 A kind of information processing method and device

Publications (2)

Publication Number Publication Date
CN104199556A CN104199556A (en) 2014-12-10
CN104199556B true CN104199556B (en) 2018-01-16

Family

ID=52084857

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410486659.2A Active CN104199556B (en) 2014-09-22 2014-09-22 A kind of information processing method and device

Country Status (1)

Country Link
CN (1) CN104199556B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105204625B (en) * 2015-08-31 2018-05-08 小米科技有限责任公司 Safety protection method and device in reality-virtualizing game
US20180033177A1 (en) * 2016-08-01 2018-02-01 Samsung Electronics Co., Ltd. Method for image display and electronic device supporting the same
CN106951087B (en) * 2017-03-27 2020-02-21 联想(北京)有限公司 Interaction method and device based on virtual interaction plane
CN111766937B (en) * 2019-04-02 2024-05-28 广东虚拟现实科技有限公司 Virtual content interaction method and device, terminal equipment and storage medium

Citations (3)

Publication number Priority date Publication date Assignee Title
CN101206380A (en) * 2006-12-21 2008-06-25 亚洲光学股份有限公司 Method for measuring distance by digital camera
CN102207770A (en) * 2010-03-30 2011-10-05 哈曼贝克自动系统股份有限公司 Vehicle user interface unit for a vehicle electronic device
CN103713387A (en) * 2012-09-29 2014-04-09 联想(北京)有限公司 Electronic device and acquisition method

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
GB2377147A (en) * 2001-06-27 2002-12-31 Nokia Corp A virtual reality user interface

Also Published As

Publication number Publication date
CN104199556A (en) 2014-12-10


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant